diff --git a/CHANGELOG.md b/CHANGELOG.md index 266fafbfbc..c35bd5ca66 100644 --- a/CHANGELOG.md +++ b/CHANGELOG.md @@ -1,4 +1,186 @@ -# [2022-12-09] (Chart Release ) +# [2023-01-12] (Chart Release 4.30.0) + +## Release notes + + +* This release migrates data from `galley.member_client` to `galley.mls_group_member_client`. When upgrading wire-server, no manual steps are required. (#2859) + +* Upgrade webapp version to 2022-12-19-production.0-v0.31.9-0-6b2f2bf (#2302) + + +## API changes + + +* - The endpoints `POST /conversations/list` and `GET /conversations` have been removed. Use `POST /conversations/list-ids` followed by `POST /conversations/list` instead. + - The endpoint `PUT /conversations/:id/access` has been removed. Use its qualified counterpart instead. + - The field `access_role_v2` in the `Conversation` type, in the request body of `POST /conversations`, and in the request body of `PUT /conversations/:domain/:id/access` has been removed. Its content is now contained in the `access_role` field instead. It replaces the legacy access role, previously contained in the `access_role` field. + - Clients implementing the V3 API must be prepared to handle a change in the format of the conversation.access_update event. Namely, the field access_role_v2 has become optional. When missing, its value is to be found in the field access_role. (#2841) + +* Added a domain parameter to the typing indicator status update API (#2892) + +* Support MLS self-conversations via a new endpoint `GET /conversations/mls-self`. This removes the `PUT` counterpart introduced in #2730 (#2839) + +* List the MLS self-conversation automatically without needing to call `GET /conversations/mls-self` first (#2856) + +* Fail early in galley when the MLS removal key is not configured (#2899) + +* Introduce a flag in brig to enable MLS explicitly. When this flag is set to false or absent, MLS functionality is completely disabled and all MLS endpoints fail immediately.
(#2913) + +* Conversation events may have a "subconv" field for events that originate in an MLS subconversation (#2933) + +* `GET /system/settings/unauthorized` returns a curated set of system settings from brig. The endpoint is reachable without authentication/authorization. It's meant to be used by apps to adjust their behavior (e.g. to show a registration dialog if registrations are enabled on the backend). Currently, only the `setRestrictUserCreation` flag is exported. Other options may be added in the future (in consultation with the security department). (#2903) + + +## Features + + +* The coturn Helm chart now has a `.tls.ciphers` option to allow setting + the cipher list for TLS connections, when TLS is enabled. By default, + this option is set to a cipher list which is compliant with [BSI + TR-02102-2](https://www.bsi.bund.de/SharedDocs/Downloads/EN/BSI/Publications/TechGuidelines/TG02102/BSI-TR-02102-2.pdf). (#2924) + +* **Nginz helm chart**: The list of upstreams is split into `nginx_conf.upstreams` and + `nginx_conf.extra_upstreams`. Extra upstreams are disabled by default. They can + be enabled by adding their name (entry's key) to + `nginx_conf.enabled_extra_upstreams`. `nginx_conf.ignored_upstreams` is only + applied to upstreams from `nginx_conf.upstreams`. In the default configuration + of `nginz` the extra upstreams are `ibis`, `galeb`, `calling-test` and `proxy`. If one + of those is deployed, its name has to be added to + `nginx_conf.enabled_extra_upstreams` (otherwise, it won't be reachable). Unless + `nginx_conf.upstreams` has been changed manually (overriding its default), + this should be the only migration step needed. (#2849) + +* A team member's role can now be provisioned via SCIM (#2851, #2855) + +* Team search endpoint now supports pagination (#2898, #2895) + +* Introduce optional disabledAPIVersions configuration setting (#2951) + +* Add more logs to SMTP mail sending.
Ensure that logs are written before the application fails due to SMTP misconfiguration. (#2818) + +* Added typing indicator status propagation to federated environments (#2892) + +* Allow vhost-style addressing for S3, as path-style addressing is not supported for newer buckets. + + More info: https://aws.amazon.com/blogs/aws/amazon-s3-path-deprecation-plan-the-rest-of-the-story/ (#2955) + + +## Bug fixes and other updates + + +* Fix a typo in the Servicemonitor enable var in the default values for helm charts. (#2896) + +* The parser for the AWS/SNS error message explaining that an endpoint is already in use was incorrect. This led to an "invalid token" error when registering push tokens for multiple user accounts (user ids) instead of updating the SNS endpoint with an additional user id. (#2921) + +* Avoid a client deletion edge case that can lead to inconsistent data between brig's and galley's clients tables. (#2830) + +* Conversations inside events are now serialised using the format of API V2 (#2971) + +* Do not throw 500 when listing conversations and MLS is not configured (#2893) + +* Do not list the MLS self-conversation in client API v1 and v2 if it exists (#2872) + +* Limit 2FA code retries to 3 attempts (#2960) + +* Fix bug in MLS user removal from conversation: the list of removed clients has to be compared with those in the conversation, not the list of *all* clients of that user (#2817) + +* Due to `sftd` changing how configuration is handled for "multi-SFT" calling (starting with version 3.1.10), new options have been added to the `sftd` Helm chart for compatibility with these newer versions. (#2886) + +* For sftd/coturn/restund, fixed a bug in external IP address lookup for the case where the Kubernetes node name doesn't equal the hostname. (#2837) + +* Requesting a new token with the client_id now works correctly when the old token is part of the request (#2860) + + +## Documentation + + +* Add an extra section to the deeplink docs explaining SOCKS proxy support during login.
(#2885) + +* Describe the auth cookie throttling mechanism, and overhaul the description of auth cookies in general. (#2941) + +* PR guidelines docs are updated with the correct helm configuration syntax (#2889) + + +## Internal changes + + +* Log AWS / SNS invalid token responses. This is helpful for native push notification debugging purposes. (#2908) + +* Add tests for invitation URLs in team invitation responses. These depend on the settings of galley. (#2797) + +* brig: Allow multiple threads to run simultaneously (#2972) + +* Remove support for compiling local docker images with buildah. Nix is used to build docker images these days (#2822) + +* Nix-created docker images: add some debugging tools in the containers, and add 'make build-image-' for convenience (#2829) + +* Added typeclasses to track uses of federated calls across the codebase. (#2940) + +* Split galley API routes and handler definitions into several modules (#2820) + +* Default intraListing to true. This means that the list of clients, so far saved in both brig's and galley's databases, will still be written to both, but only read from brig's database. This avoids cases where these two tables go out of sync. Brig becomes the source of truth for clients. In the future, if this holds, code and data for galley's clients table can be removed. (#2847) + +* Introduce the `MakesFederatedCall` Servant combinator (#2950) + +* Bump nixpkgs to latest unstable. Stop using forked nixpkgs. (#2828) + +* Optimize memory usage while creating large conversations (#2970) + +* Reduce Polysemy-induced high memory requirements (#2947) + +* Brig calling API is now migrated to servant (#2815) + +* Fixed flaky feature TTL integration test (#2823) + +* Brig teams API is now migrated to servant (#2824) + +* Add 'inconsistencies' tool to check for, and repair, certain kinds of data inconsistencies across different cassandra tables. (#2840) + +* The backoffice Swagger 2.x docs are exposed on `/` and the old Swagger has been removed.
The backoffice helm chart now runs only stern, without an extra nginx. (#2846) + +* Give the proxy service a servant routing table for swagger (not for replacing wai-route; see comments in source code) (#2848) + +* Stern API endpoint `GET ejpd-info` now has the correct HTTP method (#2850) + +* External commits: add additional checks (#2852) + +* Golden tests for conversation and feature config event schemas (#2861) + +* Add startup probe to brig helm chart. (#2878) + +* Track federated calls in types across the codebase. (#2940) + +* Update nix pins to point at polysemy-1.8.0.0 (#2949) + +* Add MakesFederatedCall combinators to Galley (#2957) + +* Fix `make clean`; allow new data constructors in `ToSchema Version` instance (#2965) + +* Refactor and simplify MLS message handling logic (#2844) + +* Remove cassandra queries to the user_keys_hash table, as they are never read anymore since 'onboarding' / auto-connect was removed in https://github.com/wireapp/wire-server/pull/1005 (#2902) + +* Replay external backend proposals after forwarding external commits. + One column added to Galley's mls_proposal_refs. (#2842) + +* Remove an unused effect for remote conversation listing (#2954) + +* Introduce types for subconversations (#2925) + +* Use treefmt to ensure consistent formatting of .nix files, use for shellcheck too (#2831) + + +## Federation changes + + +* Honour MLS flag in brig's federation API (#2946) + +* Split the Proteus and MLS message sending requests into separate types. The MLS request now supports MLS subconversations. This is a federation API breaking change.
(#2925) + +* Inject federated calls into the `x-wire-makes-federated-calls-to` extension of the Swagger operations (#2950) + + # [2022-12-09] (Chart Release 4.29.0) ## Bug fixes and other updates diff --git a/Makefile b/Makefile index 067ec68296..df625369b5 100644 --- a/Makefile +++ b/Makefile @@ -265,8 +265,11 @@ ifeq ($(package), all) ./dist/galley-schema --keyspace galley_test --replication-factor 1 ./dist/gundeck-schema --keyspace gundeck_test --replication-factor 1 ./dist/spar-schema --keyspace spar_test --replication-factor 1 -else +# How this check works: https://stackoverflow.com/a/9802777 +else ifeq ($(package), $(filter $(package),brig galley gundeck spar)) $(EXE_SCHEMA) --keyspace $(package)_test --replication-factor 1 +else + @echo No schema migrations for $(package) endif diff --git a/cabal.project b/cabal.project index c9f7daf5a0..b1c4c70161 100644 --- a/cabal.project +++ b/cabal.project @@ -42,13 +42,14 @@ packages: , tools/api-simulations/ , tools/db/assets/ , tools/db/auto-whitelist/ - , tools/db/migrate-sso-feature-flag/ - , tools/db/service-backfill/ , tools/db/billing-team-member-backfill/ , tools/db/find-undead/ + , tools/db/inconsistencies/ + , tools/db/migrate-sso-feature-flag/ , tools/db/move-team/ , tools/db/repair-handles/ - , tools/db/inconsistencies/ + , tools/db/service-backfill/ + , tools/fedcalls/ , tools/rex/ , tools/stern/ diff --git a/changelog.d/0-release-notes/member_clients_migration b/changelog.d/0-release-notes/member_clients_migration deleted file mode 100644 index 56b91f7569..0000000000 --- a/changelog.d/0-release-notes/member_clients_migration +++ /dev/null @@ -1 +0,0 @@ -This realease migrates data from `galley.member_client` to `galley.mls_group_member_client`. When upgrading wire-server no manual steps are required.
diff --git a/changelog.d/0-release-notes/webapp-upgrade b/changelog.d/0-release-notes/webapp-upgrade deleted file mode 100644 index 55ff460f58..0000000000 --- a/changelog.d/0-release-notes/webapp-upgrade +++ /dev/null @@ -1 +0,0 @@ -Upgrade webapp version to 2022-12-19-production.0-v0.31.9-0-6b2f2bf diff --git a/changelog.d/1-api-changes/access-role-v3 b/changelog.d/1-api-changes/access-role-v3 deleted file mode 100644 index 9f1c57e824..0000000000 --- a/changelog.d/1-api-changes/access-role-v3 +++ /dev/null @@ -1,4 +0,0 @@ -- The endpoints `POST /conversations/list` and `GET /conversations` have been removed. Use `POST /conversations/list-ids` followed by `POST /conversations/list` instead. -- The endpoint `PUT /conversations/:id/access` has been removed. Use its qualified counterpart instead. -- The field `access_role_v2` in the `Conversation` type, in the request body of `POST /conversations`, and in the request body of `PUT /conversations/:domain/:id/access` has been removed. Its content is now contained in the `access_role` field instead. It replaces the legacy access role, previously contained in the `access_role` field. -- Clients implementing the V3 API must be prepared to handle a change in the format of the conversation.access_update event. Namely, the field access_role_v2 has become optional. When missing, its value is to be found in the field access_role. 
diff --git a/changelog.d/1-api-changes/added-domain-to-typing-indicator-api b/changelog.d/1-api-changes/added-domain-to-typing-indicator-api deleted file mode 100644 index cf431c45fb..0000000000 --- a/changelog.d/1-api-changes/added-domain-to-typing-indicator-api +++ /dev/null @@ -1 +0,0 @@ -Added a domain parameter to the typing indicator status update API diff --git a/changelog.d/1-api-changes/get-mls-self-conversation b/changelog.d/1-api-changes/get-mls-self-conversation deleted file mode 100644 index 27ea693dcd..0000000000 --- a/changelog.d/1-api-changes/get-mls-self-conversation +++ /dev/null @@ -1 +0,0 @@ -Support MLS self-conversations via a new endpoint `GET /conversations/mls-self`. This removes the `PUT` counterpart introduced in #2730 diff --git a/changelog.d/1-api-changes/list-mls-self-conversation-automatically b/changelog.d/1-api-changes/list-mls-self-conversation-automatically deleted file mode 100644 index cc36eb2bb4..0000000000 --- a/changelog.d/1-api-changes/list-mls-self-conversation-automatically +++ /dev/null @@ -1 +0,0 @@ -List the MLS self-conversation automatically without needing to call `GET /conversations/mls-self` first diff --git a/changelog.d/1-api-changes/mls-enabled-galley b/changelog.d/1-api-changes/mls-enabled-galley deleted file mode 100644 index e69819275b..0000000000 --- a/changelog.d/1-api-changes/mls-enabled-galley +++ /dev/null @@ -1 +0,0 @@ -Fail early in galley when the MLS removal key is not configured diff --git a/changelog.d/1-api-changes/mls-flag-galley b/changelog.d/1-api-changes/mls-flag-galley deleted file mode 100644 index 6140f13a00..0000000000 --- a/changelog.d/1-api-changes/mls-flag-galley +++ /dev/null @@ -1 +0,0 @@ -Introduce a flag in brig to enable MLS explicitly. When this flag is set to false or absent, MLS functionality is completely disabled and all MLS endpoints fail immediately. 
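The brig MLS enablement flag described above can be sketched as a values override. This is a minimal sketch only: the option name `setEnableMLS` and its nesting under `config.optSettings` are assumptions based on brig's configuration conventions, not confirmed by this diff.

```yaml
# Hypothetical brig values.yaml fragment (key names assumed; verify against
# brig's config schema). Per the changelog entry, when this flag is false or
# absent, all MLS endpoints fail immediately.
brig:
  config:
    optSettings:
      setEnableMLS: true
```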
diff --git a/changelog.d/1-api-changes/subconv-field b/changelog.d/1-api-changes/subconv-field deleted file mode 100644 index c716e832a3..0000000000 --- a/changelog.d/1-api-changes/subconv-field +++ /dev/null @@ -1 +0,0 @@ -Conversation events may have a "subconv" field for events that originate in a MLS subconversation diff --git a/changelog.d/1-api-changes/system-settings-endpoint b/changelog.d/1-api-changes/system-settings-endpoint deleted file mode 100644 index 662486f8ee..0000000000 --- a/changelog.d/1-api-changes/system-settings-endpoint +++ /dev/null @@ -1 +0,0 @@ -`GET /system/settings/unauthorized` returns a curated set of system settings from brig. The endpoint is reachable without authentication/authorization. It's meant to be used by apps to adjust their behavior (e.g. to show a registration dialog if registrations are enabled on the backend.) Currently, only the `setRestrictUserCreation` flag is exported. Other options may be added in future (in consultation with the security department.) diff --git a/changelog.d/2-features/disable-extra-nginz-upstreams-by-default b/changelog.d/2-features/disable-extra-nginz-upstreams-by-default deleted file mode 100644 index 9f12b86686..0000000000 --- a/changelog.d/2-features/disable-extra-nginz-upstreams-by-default +++ /dev/null @@ -1,10 +0,0 @@ -**Nginz helm chart**: The list of upstreams is split into `nginx_conf.upstreams` and -`nginx_conf.extra_upstreams`. Extra upstreams are disabled by default. They can -be enabled by adding their name (entry's key) to -`nginx_conf.enabled_extra_upstreams`. `nginx_conf.ignored_upstreams` is only -applied to upstreams from `nginx_conf.upstreams`. In the default configuration -of `nginz` extra upstreams are `ibis`, `galeb`, `calling-test` and `proxy`. If one -of those is deployed, its name has be be added to -`nginx_conf.enabled_extra_upstreams` (otherwise, it won't be reachable). 
Unless -`nginx_conf.upstreams` hasn't been changed manually (overriding its default), -this should be the only needed migration step. diff --git a/changelog.d/2-features/pr-2855 b/changelog.d/2-features/pr-2855 deleted file mode 100644 index d85440a577..0000000000 --- a/changelog.d/2-features/pr-2855 +++ /dev/null @@ -1 +0,0 @@ -A team member's role can now be provisioned via SCIM (#2851, #2855) diff --git a/changelog.d/2-features/pr-2895 b/changelog.d/2-features/pr-2895 deleted file mode 100644 index 6ff4a200ea..0000000000 --- a/changelog.d/2-features/pr-2895 +++ /dev/null @@ -1 +0,0 @@ -Team search endpoint now supports pagination (#2898, #2895) diff --git a/changelog.d/2-features/smtp-logging b/changelog.d/2-features/smtp-logging deleted file mode 100644 index 496d0aebdd..0000000000 --- a/changelog.d/2-features/smtp-logging +++ /dev/null @@ -1 +0,0 @@ -Add more logs to SMTP mail sending. Ensure that logs are written before the application fails due to SMTP misconfiguration. diff --git a/changelog.d/2-features/typing-for-federation b/changelog.d/2-features/typing-for-federation deleted file mode 100644 index 4ca5fedbf8..0000000000 --- a/changelog.d/2-features/typing-for-federation +++ /dev/null @@ -1 +0,0 @@ -Added typing indicator status progation to federated environments diff --git a/changelog.d/3-bug-fixes/2896 b/changelog.d/3-bug-fixes/2896 deleted file mode 100644 index 182d83ca6a..0000000000 --- a/changelog.d/3-bug-fixes/2896 +++ /dev/null @@ -1 +0,0 @@ -Fix typo for Servicemonitor enable var in default values for helm charts. diff --git a/changelog.d/3-bug-fixes/aws-error-message-parser-bug b/changelog.d/3-bug-fixes/aws-error-message-parser-bug deleted file mode 100644 index 9ec72cfb47..0000000000 --- a/changelog.d/3-bug-fixes/aws-error-message-parser-bug +++ /dev/null @@ -1 +0,0 @@ -The parser for the AWS/SNS error message to explain that an endpoint is already in use was incorrect. 
This lead to an "invalid token" error when registering push tokens for multiple user accounts (user ids) instead of updating the SNS endpoint with an additional user id. diff --git a/changelog.d/3-bug-fixes/client-deletion-ordering b/changelog.d/3-bug-fixes/client-deletion-ordering deleted file mode 100644 index 404d69af2f..0000000000 --- a/changelog.d/3-bug-fixes/client-deletion-ordering +++ /dev/null @@ -1 +0,0 @@ -Avoid client deletion edge case condition which can lead to inconsistent data between brig and galley's clients tables. diff --git a/changelog.d/3-bug-fixes/list-self-mls-not-configured b/changelog.d/3-bug-fixes/list-self-mls-not-configured deleted file mode 100644 index 74e6066571..0000000000 --- a/changelog.d/3-bug-fixes/list-self-mls-not-configured +++ /dev/null @@ -1 +0,0 @@ -Do not throw 500 when listing conversations and MLS is not configured diff --git a/changelog.d/3-bug-fixes/mls-self-conv-not-listed-below-v3 b/changelog.d/3-bug-fixes/mls-self-conv-not-listed-below-v3 deleted file mode 100644 index d656f28b45..0000000000 --- a/changelog.d/3-bug-fixes/mls-self-conv-not-listed-below-v3 +++ /dev/null @@ -1 +0,0 @@ -Do not list MLS self-conversation in client API v1 and v2 if it exists diff --git a/changelog.d/3-bug-fixes/pr-2870 b/changelog.d/3-bug-fixes/pr-2870 deleted file mode 100644 index 765f957fb3..0000000000 --- a/changelog.d/3-bug-fixes/pr-2870 +++ /dev/null @@ -1 +0,0 @@ -Prevention of storing unnecessary data in the database if adding a bot to a conversation fails. 
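The nginz upstream split described in this release's features can be sketched as a values override, assuming the key layout named in the changelog entry (extra upstreams keyed by name, enabled via `nginx_conf.enabled_extra_upstreams`):

```yaml
# Hypothetical nginz values.yaml fragment; only keys named in the changelog
# entry are used. If e.g. ibis and calling-test are deployed, they must be
# listed here, otherwise they are unreachable.
nginx_conf:
  enabled_extra_upstreams:
    - ibis
    - calling-test
```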
diff --git a/changelog.d/3-bug-fixes/pr-2968 b/changelog.d/3-bug-fixes/pr-2968 new file mode 100644 index 0000000000..e32c978a07 --- /dev/null +++ b/changelog.d/3-bug-fixes/pr-2968 @@ -0,0 +1 @@ +Fix pagination in team user search (make search key unique) diff --git a/changelog.d/3-bug-fixes/removal-client-check b/changelog.d/3-bug-fixes/removal-client-check deleted file mode 100644 index 6e62ac234b..0000000000 --- a/changelog.d/3-bug-fixes/removal-client-check +++ /dev/null @@ -1 +0,0 @@ -Fix bug in MLS user removal from conversation: the list of removed clients has to be compared with those in the conversation, not the list of *all* clients of that user diff --git a/changelog.d/3-bug-fixes/sftd-forwards-compat b/changelog.d/3-bug-fixes/sftd-forwards-compat deleted file mode 100644 index 0185b60a80..0000000000 --- a/changelog.d/3-bug-fixes/sftd-forwards-compat +++ /dev/null @@ -1 +0,0 @@ -Due to `sftd` changing how configuration is handled for "multi-SFT" calling (starting with version 3.1.10), new options have been added to the `sftd` Helm chart for compatibility with these newer versions. diff --git a/changelog.d/3-bug-fixes/sftd-restund-coturn-hostname-nodename b/changelog.d/3-bug-fixes/sftd-restund-coturn-hostname-nodename deleted file mode 100644 index fdf9bddc06..0000000000 --- a/changelog.d/3-bug-fixes/sftd-restund-coturn-hostname-nodename +++ /dev/null @@ -1 +0,0 @@ -For sftd/coturn/restund, fixed a bug in external ip address lookup, in case Kubernetes Node Name doesn't equal hostname. 
diff --git a/changelog.d/3-bug-fixes/token-client-bug b/changelog.d/3-bug-fixes/token-client-bug deleted file mode 100644 index da363e7686..0000000000 --- a/changelog.d/3-bug-fixes/token-client-bug +++ /dev/null @@ -1 +0,0 @@ -Requesting a new token with the client_id now works correctly when the old token is part of the request diff --git a/changelog.d/4-docs/add-proxy-support-to-deeplink b/changelog.d/4-docs/add-proxy-support-to-deeplink deleted file mode 100644 index 757d08b28c..0000000000 --- a/changelog.d/4-docs/add-proxy-support-to-deeplink +++ /dev/null @@ -1 +0,0 @@ -Add extra section to the deeplink docs to explain the socks proxy support while login. \ No newline at end of file diff --git a/changelog.d/4-docs/auth-cookie b/changelog.d/4-docs/auth-cookie deleted file mode 100644 index f14135f4b6..0000000000 --- a/changelog.d/4-docs/auth-cookie +++ /dev/null @@ -1 +0,0 @@ -Describe the auth cookie throttling mechanism. And overhaul the description of auth cookies in general. diff --git a/changelog.d/4-docs/pr-2889 b/changelog.d/4-docs/pr-2889 deleted file mode 100644 index a4f811ceb8..0000000000 --- a/changelog.d/4-docs/pr-2889 +++ /dev/null @@ -1 +0,0 @@ -PR guidelines docs are updated with correct helm configuration syntax diff --git a/changelog.d/4-docs/pr-2973 b/changelog.d/4-docs/pr-2973 new file mode 100644 index 0000000000..89fbeb8be6 --- /dev/null +++ b/changelog.d/4-docs/pr-2973 @@ -0,0 +1 @@ +Tool for dumping fed call graphs (dot/graphviz and csv); see README for details \ No newline at end of file diff --git a/changelog.d/5-internal/add-aws-sns-token-invalid-log b/changelog.d/5-internal/add-aws-sns-token-invalid-log deleted file mode 100644 index 7ca8ddf381..0000000000 --- a/changelog.d/5-internal/add-aws-sns-token-invalid-log +++ /dev/null @@ -1 +0,0 @@ -Log AWS / SNS invalid token responses. This is helpful for native push notification debugging purposes. 
diff --git a/changelog.d/5-internal/add-invitation-url-tests b/changelog.d/5-internal/add-invitation-url-tests deleted file mode 100644 index 1c00b99606..0000000000 --- a/changelog.d/5-internal/add-invitation-url-tests +++ /dev/null @@ -1 +0,0 @@ -Add tests for invitation urls in team invitation responses. These depend on the settings of galley. diff --git a/changelog.d/5-internal/buildah-drop-support b/changelog.d/5-internal/buildah-drop-support deleted file mode 100644 index 2985ad2882..0000000000 --- a/changelog.d/5-internal/buildah-drop-support +++ /dev/null @@ -1 +0,0 @@ -Remove support for compiling local docker images with buildah. Nix is used to build docker images these days diff --git a/changelog.d/5-internal/debugging-tools b/changelog.d/5-internal/debugging-tools deleted file mode 100644 index ffffed013e..0000000000 --- a/changelog.d/5-internal/debugging-tools +++ /dev/null @@ -1 +0,0 @@ -Nix-created docker images: add some debugging tools in the containers, and add 'make build-image-' for convenience diff --git a/changelog.d/5-internal/galley-servant-split b/changelog.d/5-internal/galley-servant-split deleted file mode 100644 index 450472e718..0000000000 --- a/changelog.d/5-internal/galley-servant-split +++ /dev/null @@ -1 +0,0 @@ -Split galley API routes and handler definitions into several modules diff --git a/changelog.d/5-internal/intra-listing b/changelog.d/5-internal/intra-listing deleted file mode 100644 index b5e726d22a..0000000000 --- a/changelog.d/5-internal/intra-listing +++ /dev/null @@ -1 +0,0 @@ -Default intraListing to true. This means that the list of clients, so far saved in both brig's and galley's databases, will still be written to both, but only read from brig's database. This avoids cases where these two tables go out of sync. Brig becomes the source of truth for clients. In the future, if this holds, code and data for galley's clients table can be removed. 
diff --git a/changelog.d/5-internal/nginz-nix b/changelog.d/5-internal/nginz-nix deleted file mode 100644 index 4ff00f8ac4..0000000000 --- a/changelog.d/5-internal/nginz-nix +++ /dev/null @@ -1 +0,0 @@ -Build nginz and nginz_disco docker images using nix diff --git a/changelog.d/5-internal/nixpkgs-bump b/changelog.d/5-internal/nixpkgs-bump deleted file mode 100644 index 86b659bfcb..0000000000 --- a/changelog.d/5-internal/nixpkgs-bump +++ /dev/null @@ -1 +0,0 @@ -Bump nixpkgs to latest unstable. Stop using forked nixpkgs. \ No newline at end of file diff --git a/changelog.d/5-internal/polysemy-oom b/changelog.d/5-internal/polysemy-oom deleted file mode 100644 index 82b4530ebc..0000000000 --- a/changelog.d/5-internal/polysemy-oom +++ /dev/null @@ -1 +0,0 @@ -Reduce Polysemy-induced high memory requirements diff --git a/changelog.d/5-internal/pr-2815 b/changelog.d/5-internal/pr-2815 deleted file mode 100644 index 4462cf30ca..0000000000 --- a/changelog.d/5-internal/pr-2815 +++ /dev/null @@ -1 +0,0 @@ -Brig calling API is now migrated to servant diff --git a/changelog.d/5-internal/pr-2823 b/changelog.d/5-internal/pr-2823 deleted file mode 100644 index 49626890f6..0000000000 --- a/changelog.d/5-internal/pr-2823 +++ /dev/null @@ -1 +0,0 @@ -Fixed flaky feature TTL integration test diff --git a/changelog.d/5-internal/pr-2824 b/changelog.d/5-internal/pr-2824 deleted file mode 100644 index ae0e234fee..0000000000 --- a/changelog.d/5-internal/pr-2824 +++ /dev/null @@ -1 +0,0 @@ -Brig teams API is now migrated to servant diff --git a/changelog.d/5-internal/pr-2840 b/changelog.d/5-internal/pr-2840 deleted file mode 100644 index 70a1375288..0000000000 --- a/changelog.d/5-internal/pr-2840 +++ /dev/null @@ -1 +0,0 @@ -Add 'inconsistencies' tool to check for, and repair certain kinds of data inconsistencies across different cassandra tables. 
diff --git a/changelog.d/5-internal/pr-2846 b/changelog.d/5-internal/pr-2846 deleted file mode 100644 index 700a8a5d8d..0000000000 --- a/changelog.d/5-internal/pr-2846 +++ /dev/null @@ -1 +0,0 @@ -Backoffice Swagger 2.x docs is exposed on `/` and the old Swagger has been removed. Backoffice helm chart only runs stern without an extra nginx. diff --git a/changelog.d/5-internal/pr-2850 b/changelog.d/5-internal/pr-2850 deleted file mode 100644 index 91dd2564ff..0000000000 --- a/changelog.d/5-internal/pr-2850 +++ /dev/null @@ -1 +0,0 @@ -Stern API endpoint `GET ejpd-info` has now the correct HTTP method diff --git a/changelog.d/5-internal/pr-2852 b/changelog.d/5-internal/pr-2852 deleted file mode 100644 index eff5f1bc2b..0000000000 --- a/changelog.d/5-internal/pr-2852 +++ /dev/null @@ -1 +0,0 @@ -External commits: add additional checks diff --git a/changelog.d/5-internal/pr-2861 b/changelog.d/5-internal/pr-2861 deleted file mode 100644 index 226e80aa78..0000000000 --- a/changelog.d/5-internal/pr-2861 +++ /dev/null @@ -1 +0,0 @@ -Golden tests for conversation and feature config event schemas diff --git a/changelog.d/5-internal/pr-2878 b/changelog.d/5-internal/pr-2878 deleted file mode 100644 index ad5ba5d55e..0000000000 --- a/changelog.d/5-internal/pr-2878 +++ /dev/null @@ -1 +0,0 @@ -Add startup probe to brig helm chart. 
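A Kubernetes startup probe like the one added to the brig chart (#2878) typically takes this shape; the endpoint path, port, and timings below are illustrative assumptions, not values taken from this diff:

```yaml
# Hypothetical deployment fragment. A startup probe holds off liveness and
# readiness checks until the service has come up once.
startupProbe:
  httpGet:
    path: /i/status   # assumed health endpoint
    port: 8080        # assumed service port
  failureThreshold: 30
  periodSeconds: 10
```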
diff --git a/changelog.d/5-internal/refactor-mls-message b/changelog.d/5-internal/refactor-mls-message deleted file mode 100644 index 6cbf9538d5..0000000000 --- a/changelog.d/5-internal/refactor-mls-message +++ /dev/null @@ -1 +0,0 @@ -Refactor and simplify MLS message handling logic diff --git a/changelog.d/5-internal/remove-hashed-key-queries b/changelog.d/5-internal/remove-hashed-key-queries deleted file mode 100644 index fb1e94dd5d..0000000000 --- a/changelog.d/5-internal/remove-hashed-key-queries +++ /dev/null @@ -1 +0,0 @@ -Remove cassandra queries to the user_keys_hash table, as they are never read anymore since 'onboarding' / auto-connect was removed in https://github.com/wireapp/wire-server/pull/1005 diff --git a/changelog.d/5-internal/replay-backend-proposals b/changelog.d/5-internal/replay-backend-proposals deleted file mode 100644 index 4430e0c96a..0000000000 --- a/changelog.d/5-internal/replay-backend-proposals +++ /dev/null @@ -1,2 +0,0 @@ -Replay external backend proposals after forwarding external commits. -One column added to Galley's mls_proposal_refs. 
diff --git a/changelog.d/5-internal/subconv-types b/changelog.d/5-internal/subconv-types deleted file mode 100644 index 77b3a4836b..0000000000 --- a/changelog.d/5-internal/subconv-types +++ /dev/null @@ -1 +0,0 @@ -Introduce types for subconversations diff --git a/changelog.d/5-internal/treefmt b/changelog.d/5-internal/treefmt deleted file mode 100644 index e3e735311a..0000000000 --- a/changelog.d/5-internal/treefmt +++ /dev/null @@ -1 +0,0 @@ -Use treefmt to ensure consistent formatting of .nix files, use for shellcheck too (#2831) diff --git a/changelog.d/6-federation/mls-flag-brig b/changelog.d/6-federation/mls-flag-brig deleted file mode 100644 index 5ce7c8417e..0000000000 --- a/changelog.d/6-federation/mls-flag-brig +++ /dev/null @@ -1 +0,0 @@ -Honour MLS flag in brig's federation API diff --git a/changelog.d/6-federation/split-msg-send-reqs b/changelog.d/6-federation/split-msg-send-reqs deleted file mode 100644 index 6d888a9c80..0000000000 --- a/changelog.d/6-federation/split-msg-send-reqs +++ /dev/null @@ -1 +0,0 @@ -Split the Proteus and MLS message sending requests into separate types. The MLS request now supports MLS subconversations. This is a federation API breaking change. 
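The `disabledAPIVersions` setting introduced in the chart templates of this diff must be kept consistent across services. A sketch of matching values overrides follows; the per-service key paths are read off the chart templates and values comments in this diff, while the umbrella-chart nesting under service names is an assumption:

```yaml
# Hypothetical umbrella values fragment: disable API version 3 everywhere.
brig:
  config:
    optSettings:
      setDisabledAPIVersions: [ 3 ]   # brig configmap key
cannon:
  config:
    disabledAPIVersions: [ 3 ]        # cannon reads .Values.config.disabledAPIVersions
cargohold:
  config:
    settings:
      disabledAPIVersions: [ 3 ]      # cargohold settings block
galley:
  config:
    settings:
      disabledAPIVersions: [ 3 ]      # galley reads .settings.disabledAPIVersions
```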
diff --git a/charts/brig/templates/configmap.yaml b/charts/brig/templates/configmap.yaml index c6e9102dd6..a743452c1a 100644 --- a/charts/brig/templates/configmap.yaml +++ b/charts/brig/templates/configmap.yaml @@ -310,6 +310,9 @@ data: {{- end }} {{- if .setOAuthEnabled }} setOAuthEnabled: {{ .setOAuthEnabled }} - {{- end }} + {{- end }} + {{- if .setDisabledAPIVersions }} + setDisabledAPIVersions: {{ .setDisabledAPIVersions }} + {{- end }} {{- end }} {{- end }} diff --git a/charts/brig/values.yaml b/charts/brig/values.yaml index 787c0809fd..c4efe552d5 100644 --- a/charts/brig/values.yaml +++ b/charts/brig/values.yaml @@ -90,6 +90,9 @@ config: setOAuthAuthCodeExpirationTimeSecs: 300 # 5 minutes setOAuthAccessTokenExpirationTimeSecs: 1814400 # 3 weeks setOAuthEnabled: true + # Disable one or more API versions. Please make sure the configuration value is the same in all these charts: + # brig, cannon, cargohold, galley, gundeck, proxy, spar. + # setDisabledAPIVersions: [ 3 ] smtp: passwordFile: /etc/wire/brig/secrets/smtp-password.txt proxy: {} diff --git a/charts/cannon/templates/configmap.yaml b/charts/cannon/templates/configmap.yaml index 256dae79e4..940d601306 100644 --- a/charts/cannon/templates/configmap.yaml +++ b/charts/cannon/templates/configmap.yaml @@ -19,6 +19,10 @@ data: millisecondsBetweenBatches: {{ .Values.config.drainOpts.millisecondsBetweenBatches }} minBatchSize: {{ .Values.config.drainOpts.minBatchSize }} + {{- if .Values.config.disabledAPIVersions }} + disabledAPIVersions: {{ .Values.config.disabledAPIVersions }} + {{- end }} + kind: ConfigMap metadata: name: cannon diff --git a/charts/cannon/values.yaml b/charts/cannon/values.yaml index 41f0c89106..9142603160 100644 --- a/charts/cannon/values.yaml +++ b/charts/cannon/values.yaml @@ -22,6 +22,10 @@ config: millisecondsBetweenBatches: 50 minBatchSize: 20 + # Disable one or more API versions.
Please make sure the configuration value is the same in all these charts: + # brig, cannon, cargohold, galley, gundeck, proxy, spar. + # disabledAPIVersions: [ 3 ] + metrics: serviceMonitor: enabled: false diff --git a/charts/cargohold/templates/configmap.yaml b/charts/cargohold/templates/configmap.yaml index 5ceadf367c..5f6cd7cbc4 100644 --- a/charts/cargohold/templates/configmap.yaml +++ b/charts/cargohold/templates/configmap.yaml @@ -28,6 +28,9 @@ data: {{- if .s3Compatibility }} s3Compatibility: {{ .s3Compatibility }} {{- end }} + {{- if .s3AddressingStyle }} + s3AddressingStyle: {{ .s3AddressingStyle }} + {{- end }} {{ if .cloudFront }} cloudFront: domain: {{ .cloudFront.domain }} @@ -38,7 +41,14 @@ data: settings: {{- with .Values.config.settings }} - maxTotalBytes: 5368709120 - downloadLinkTTL: 300 # Seconds + {{- if .maxTotalBytes }} + maxTotalBytes: {{ .maxTotalBytes }} + {{- end }} + {{- if .downloadLinkTTL }} + downloadLinkTTL: {{ .downloadLinkTTL }} + {{- end }} federationDomain: {{ .federationDomain }} + {{- if .disabledAPIVersions }} + disabledAPIVersions: {{ .disabledAPIVersions }} + {{- end }} {{- end }} diff --git a/charts/cargohold/values.yaml b/charts/cargohold/values.yaml index 76a59b0811..5445d1bc23 100644 --- a/charts/cargohold/values.yaml +++ b/charts/cargohold/values.yaml @@ -23,6 +23,13 @@ config: region: "eu-west-1" s3Bucket: assets proxy: {} + settings: + maxTotalBytes: 5368709120 + downloadLinkTTL: 300 # Seconds + # Disable one or more API versions. Please make sure the configuration value is the same in all these charts: + # brig, cannon, cargohold, galley, gundeck, proxy, spar.
+ # disabledAPIVersions: [ 3 ] + serviceAccount: # When setting this to 'false', either make sure that a service account named # 'cargohold' exists or change the 'name' field to 'default' diff --git a/charts/coturn/templates/configmap-coturn-conf-template.yaml b/charts/coturn/templates/configmap-coturn-conf-template.yaml index 4a2a4c4c06..b981c3cce9 100644 --- a/charts/coturn/templates/configmap-coturn-conf-template.yaml +++ b/charts/coturn/templates/configmap-coturn-conf-template.yaml @@ -13,6 +13,9 @@ data: {{- if .Values.tls.enabled }} cert=/secrets-tls/tls.crt pkey=/secrets-tls/tls.key + {{- if .Values.tls.ciphers }} + cipher-list={{ .Values.tls.ciphers }} + {{- end }} {{- else }} no-tls {{- end }} diff --git a/charts/coturn/values.yaml b/charts/coturn/values.yaml index eede1626be..d56986c6e0 100644 --- a/charts/coturn/values.yaml +++ b/charts/coturn/values.yaml @@ -28,6 +28,8 @@ coturnTurnTlsListenPort: 5349 tls: enabled: false + # compliant with BSI TR-02102-2 + ciphers: 'ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:TLS_AES_128_GCM_SHA256:TLS_AES_256_GCM_SHA384' secretRef: reloaderImage: # container image containing https://github.com/Pluies/config-reloader-sidecar diff --git a/charts/galley/templates/configmap.yaml b/charts/galley/templates/configmap.yaml index e5a4f7864a..a761fb24fd 100644 --- a/charts/galley/templates/configmap.yaml +++ b/charts/galley/templates/configmap.yaml @@ -69,6 +69,9 @@ data: ed25519: "/etc/wire/galley/secrets/removal_ed25519.pem" {{- end }} {{- end -}} + {{- if .settings.disabledAPIVersions }} + disabledAPIVersions: {{ .settings.disabledAPIVersions }} + {{- end }} {{- if .settings.featureFlags }} featureFlags: sso: {{ .settings.featureFlags.sso }} diff --git a/charts/galley/values.yaml b/charts/galley/values.yaml index 7e20021638..8f260a0abe 100644 --- a/charts/galley/values.yaml +++ b/charts/galley/values.yaml @@ -32,6 +32,9 @@ config: # Before making indexedBillingTeamMember true while upgrading, please # refer to 
notes here: https://github.com/wireapp/wire-server-deploy/releases/tag/v2020-05-15 indexedBillingTeamMember: false + # Disable one or more API versions. Please make sure the configuration value is the same in all these charts: + # brig, cannon, cargohold, galley, gundeck, proxy, spar. + # disabledAPIVersions: [ 3 ] featureFlags: # see #RefConfigOptions in `/docs/reference` (https://github.com/wireapp/wire-server/) appLock: defaults: diff --git a/charts/gundeck/templates/configmap.yaml b/charts/gundeck/templates/configmap.yaml index d2b9a18ccc..2349e68cc4 100644 --- a/charts/gundeck/templates/configmap.yaml +++ b/charts/gundeck/templates/configmap.yaml @@ -53,6 +53,10 @@ data: {{- if hasKey . "perNativePushConcurrency" }} perNativePushConcurrency: {{ .perNativePushConcurrency }} {{- end }} + {{- if .disabledAPIVersions }} + disabledAPIVersions: {{ .disabledAPIVersions }} + {{- end }} + # disabledAPIVersions: [ 2 ] maxConcurrentNativePushes: soft: {{ .maxConcurrentNativePushes.soft }} {{- if hasKey .maxConcurrentNativePushes "hard" }} diff --git a/charts/gundeck/values.yaml b/charts/gundeck/values.yaml index 83ed95df1a..3f8a547229 100644 --- a/charts/gundeck/values.yaml +++ b/charts/gundeck/values.yaml @@ -35,6 +35,10 @@ config: # perNativePushConcurrency: 32 maxConcurrentNativePushes: soft: 1000 + # Disable one or more API versions. Please make sure the configuration value is the same in all these charts: + # brig, cannon, cargohold, galley, gundeck, proxy, spar. + # disabledAPIVersions: [ 3 ] + serviceAccount: # When setting this to 'false', either make sure that a service account named # 'gundeck' exists or change the 'name' field to 'default' diff --git a/charts/inbucket/values.yaml b/charts/inbucket/values.yaml index 626051a534..3bff990c7e 100644 --- a/charts/inbucket/values.yaml +++ b/charts/inbucket/values.yaml @@ -1,6 +1,6 @@ # Fully qualified domain name (FQDN) of the domain where to serve inbucket. # E.g.
'inbucket.my-test-env.wire.link' -host: +host: "inbucket.example.com" # Configure the inbucket "parent" chart inbucket: diff --git a/charts/proxy/templates/configmap.yaml b/charts/proxy/templates/configmap.yaml index 5af2ebe10c..5464879752 100644 --- a/charts/proxy/templates/configmap.yaml +++ b/charts/proxy/templates/configmap.yaml @@ -7,7 +7,9 @@ data: logFormat: {{ .Values.config.logFormat }} logLevel: {{ .Values.config.logLevel }} logNetStrings: {{ .Values.config.logNetStrings }} - + {{- if .Values.config.disabledAPIVersions }} + disabledAPIVersions: {{ .Values.config.disabledAPIVersions }} + {{- end }} host: 0.0.0.0 port: {{ .Values.service.internalPort }} httpPoolSize: 1000 diff --git a/charts/proxy/values.yaml b/charts/proxy/values.yaml index 2e527e91db..6dd53032a9 100644 --- a/charts/proxy/values.yaml +++ b/charts/proxy/values.yaml @@ -19,3 +19,6 @@ config: logFormat: StructuredJSON logNetStrings: false proxy: {} + # Disable one or more API versions. Please make sure the configuration value is the same in all these charts: + # brig, cannon, cargohold, galley, gundeck, proxy, spar. + # disabledAPIVersions: [ 3 ] diff --git a/charts/spar/templates/configmap.yaml b/charts/spar/templates/configmap.yaml index 2a195f7487..98711a4679 100644 --- a/charts/spar/templates/configmap.yaml +++ b/charts/spar/templates/configmap.yaml @@ -33,6 +33,10 @@ data: maxScimTokens: {{ .maxScimTokens }} + {{- if .disabledAPIVersions }} + disabledAPIVersions: {{ .disabledAPIVersions }} + {{- end }} + saml: version: SAML2.0 logLevel: {{ .logLevel }} @@ -43,5 +47,5 @@ data: spSsoUri: {{ .ssoUri }} contacts: -{{ toYaml .contacts | indent 12 }} + {{- toYaml .contacts | nindent 8 }} {{- end }} diff --git a/charts/spar/values.yaml b/charts/spar/values.yaml index f378ebdc96..c2023b6634 100644 --- a/charts/spar/values.yaml +++ b/charts/spar/values.yaml @@ -25,3 +25,6 @@ config: maxttlAuthreq: 7200 maxttlAuthresp: 7200 proxy: {} + # Disable one or more API versions.
Please make sure the configuration value is the same in all these charts: + # brig, cannon, cargohold, galley, gundeck, proxy, spar. + # disabledAPIVersions: [ 3 ] diff --git a/docs/convert/compare_screenshots.py b/docs/convert/compare_screenshots.py new file mode 100644 index 0000000000..c5b4d9eca1 --- /dev/null +++ b/docs/convert/compare_screenshots.py @@ -0,0 +1,16 @@ +#!/usr/bin/env python3 + +import subprocess +import os + +output = subprocess.check_output(['find', 'screenshots', '-name', '*_dev.png']).decode('utf8') + +for dev in output.splitlines(): + ref = dev.replace('_dev.png', '_ref.png') + if os.path.exists(dev) and os.path.exists(ref): + print(dev) + cmd = ['compare', '-compose', 'src', dev, ref, dev.replace('_dev.png', '_diff.png')] + print(cmd) + subprocess.run(cmd) + else: + print(f'Cannot compare {dev}') diff --git a/docs/convert/config.yaml b/docs/convert/config.yaml new file mode 100644 index 0000000000..78f2c64c8f --- /dev/null +++ b/docs/convert/config.yaml @@ -0,0 +1 @@ +colon_fences: false diff --git a/docs/convert/conversions.yaml b/docs/convert/conversions.yaml new file mode 100644 index 0000000000..cbeb844cb6 --- /dev/null +++ b/docs/convert/conversions.yaml @@ -0,0 +1 @@ +sphinx.domains.std.Glossary: eval_rst diff --git a/docs/convert/convert.sh b/docs/convert/convert.sh new file mode 100644 index 0000000000..11a4a33fe7 --- /dev/null +++ b/docs/convert/convert.sh @@ -0,0 +1,12 @@ +#!/usr/bin/env bash +# +set -e +# shellcheck disable=SC2044,SC3010 +for f in $(find .
-type f -name '*.rst'); do + if [[ "$f" == */includes/* ]]; then + echo skipping "$f" + continue + fi + rst2myst convert -c convert/conversions.yaml --no-colon-fences "$f" + rm -f "$f" +done diff --git a/docs/convert/revert.sh b/docs/convert/revert.sh new file mode 100644 index 0000000000..df5cf912b3 --- /dev/null +++ b/docs/convert/revert.sh @@ -0,0 +1,4 @@ +#!/usr/bin/env sh + +git checkout src +git clean src -f diff --git a/docs/convert/screenshots.py b/docs/convert/screenshots.py new file mode 100644 index 0000000000..ff172710d5 --- /dev/null +++ b/docs/convert/screenshots.py @@ -0,0 +1,47 @@ +#!/usr/bin/env python3 + +from selenium import webdriver +from selenium.webdriver.common.keys import Keys +from selenium.webdriver.common.by import By +import subprocess +import os.path + +def sanitize_name(name): + r = '' + for c in name: + if c.isalpha(): + r += c + else: + r += '_' + return r + +driver = webdriver.Firefox() + +output = subprocess.check_output(['find', 'build', '-name', '*.html']).decode('utf8') +for i, p in enumerate(output.splitlines()): + n = os.path.relpath(p, 'build') + url_dev = f'http://localhost:3000/{n}' + url_ref = f'https://docs.wire.com/{n}' + img_basename = sanitize_name(n) + '_' + str(i) + + try: + print(f'./screenshots/{i:03}-{img_basename}_dev.png') + driver.get(url_dev) + driver.get_full_page_screenshot_as_file(f'./screenshots/{i:03}-{img_basename}_dev.png') + print(url_ref) + driver.get(url_ref) + driver.get_full_page_screenshot_as_file(f'./screenshots/{i:03}-{img_basename}_ref.png') + except: + pass + +driver.close() + + + +# assert "Python" in driver.title +# elem = driver.find_element(By.NAME, "q") +# elem.clear() +# elem.send_keys("pycon") +# elem.send_keys(Keys.RETURN) +# assert "No results found." not in driver.page_source +# diff --git a/docs/convert/shell.nix b/docs/convert/shell.nix new file mode 100644 index 0000000000..130c456c6d --- /dev/null +++ b/docs/convert/shell.nix @@ -0,0 +1,17 @@ +{ pkgs ? 
import <nixpkgs> {} }: +(pkgs.buildFHSUserEnv { + name = "pipzone"; + targetPkgs = pkgs: (with pkgs; [ + python3 + python3Packages.pip + python3Packages.virtualenv + ]); + runScript = "bash"; +}).env + +# then +# virtualenv venv +# pip install rst-to-myst +# Fix this bug locally: https://github.com/executablebooks/rst-to-myst/issues/49 +# pip install sphinx-reredirects +# pip install sphinx-multiversion diff --git a/docs/src/_static/css/wire.css b/docs/src/_static/css/wire.css index a28bd8b810..0013ea98e3 100644 --- a/docs/src/_static/css/wire.css +++ b/docs/src/_static/css/wire.css @@ -221,10 +221,6 @@ footer div{ background-color: #c9c9c9; } -.wy-nav-content { - max-width: unset; -} - .wy-nav-top { background-color: #fafafa; color: #34383b; @@ -240,4 +236,4 @@ footer div{ .wy-side-nav-search a:hover { color: #05498f; -} */ \ No newline at end of file +} */ diff --git a/docs/src/conf.py b/docs/src/conf.py index e7c36d04e6..ee4d992b35 100644 --- a/docs/src/conf.py +++ b/docs/src/conf.py @@ -113,6 +113,13 @@ html_favicon = '_static/favicon/favicon.ico' html_logo = '_static/image/Wire_logo.svg' +html_context = { + 'display_github': True, + 'github_user': 'wireapp', + 'github_repo': 'wire-server', + 'github_version': 'develop/docs/src/', +} + smv_tag_whitelist = '' smv_branch_whitelist = r'^(install-with-poetry)$' smv_remote_whitelist = r'^(origin)$' @@ -128,6 +135,5 @@ "security-responses/log4shell": "2021-12-15_log4shell.html", "security-responses/cve-2021-44521": "2022-02-21_cve-2021-44521.html", "security-responses/2022-05_website_outage": "2022-05-23_website_outage.html", - "how-to/single-sign-on/index": "../../understand/single-sign-on/main.html#setting-up-sso-externally", - "how-to/scim/index": "../../understand/single-sign-on/main.html#user-provisioning", + "how-to/scim/index": "../../understand/single-sign-on/main.html#user-provisioning" } diff --git a/docs/src/configuration-options.md b/docs/src/configuration-options.md new file mode 100644 index
0000000000..647ac5f0ee --- /dev/null +++ b/docs/src/configuration-options.md @@ -0,0 +1,1048 @@ +(configuration-options)= + +# Part 3 - configuration options in a production setup + +This contains instructions to configure specific aspects of your production setup depending on your needs. + +Depending on your use-case and requirements, you may need to +configure none, or only a subset of the following sections. + +## Redirect some traffic through a http(s) proxy + +In case you wish to use http(s) proxies, you can add a configuration like this to the wire-server services in question: + +Assuming your proxy can be reached from within Kubernetes at `http://proxy:8080`, add the following for each affected service (e.g. `gundeck`) to your Helm overrides in `values/wire-server/values.yaml` : + +```yaml +gundeck: + # ... + config: + # ... + proxy: + httpProxy: "http://proxy:8080" + httpsProxy: "http://proxy:8080" + noProxyList: + - "localhost" + - "127.0.0.1" + - "10.0.0.0/8" + - "elasticsearch-external" + - "cassandra-external" + - "redis-ephemeral" + - "fake-aws-sqs" + - "fake-aws-dynamodb" + - "fake-aws-sns" + - "brig" + - "cargohold" + - "galley" + - "gundeck" + - "proxy" + - "spar" + - "federator" + - "cannon" + - "cannon-0.cannon.default" + - "cannon-1.cannon.default" + - "cannon-2.cannon.default" +``` + +Depending on your setup, you may need to repeat this for the other services like `brig` as well. + +(push-sns)= + +## Enable push notifications using the public appstore / playstore mobile Wire clients + +1. You need to get in touch with us. Please talk to sales or customer support - see +2. If a contract agreement has been reached, we can set up a separate AWS account for you containing the necessary AWS SQS/SNS setup to route push notifications through to the mobile apps. 
We will then forward some configuration / access credentials that look like: + +```yaml +gundeck: + config: + aws: + account: "<account-id>" + arnEnv: "<env>" + queueName: "<env>-gundeck-events" + region: "<region>" + snsEndpoint: "https://sns.<region>.amazonaws.com" + sqsEndpoint: "https://sqs.<region>.amazonaws.com" + secrets: + awsKeyId: "<aws-key-id>" + awsSecretKey: "<aws-secret-key>" +``` + +To make use of those, first test that the credentials are correct, e.g. using the `aws` command-line tool (for more information on how to configure credentials, please refer to the [official docs](https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-quickstart.html#cli-configure-quickstart-precedence)): + +``` +AWS_REGION=<region> +AWS_ACCESS_KEY_ID=<...> +AWS_SECRET_ACCESS_KEY=<...> +ENV=<env> # e.g. staging + +aws sqs get-queue-url --queue-name "$ENV-gundeck-events" +``` + +You should get a result like this: + +``` +{ + "QueueUrl": "https://<region>.queue.amazonaws.com/<account-id>/<env>-gundeck-events" +} +``` + +Then add them to your gundeck configuration overrides. + +Keys below `gundeck.config` belong into `values/wire-server/values.yaml`: + +```yaml +gundeck: + # ... + config: + aws: + queueName: # e.g. staging-gundeck-events + account: # <account-id>, e.g. 123456789 + region: # e.g. eu-central-1 + snsEndpoint: # e.g. https://sns.eu-central-1.amazonaws.com + sqsEndpoint: # e.g. https://sqs.eu-central-1.amazonaws.com + arnEnv: # e.g. staging - this must match the environment name (first part of queueName) +``` + +Keys below `gundeck.secrets` belong into `values/wire-server/secrets.yaml`: + +```yaml +gundeck: + # ... + secrets: + awsKeyId: CHANGE-ME + awsSecretKey: CHANGE-ME +``` + +After making this change and applying it to gundeck (ensure gundeck pods have restarted to make use of the updated configuration - that should happen automatically), make sure to reset the push token on any mobile devices that you may have in use.
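For orientation, here is what the two override files from this section can look like once filled in, using the staging/eu-central-1 example values mentioned in this section. Every value below is a placeholder — substitute the credentials Wire forwards to you:

```yaml
# values/wire-server/values.yaml (placeholder values only)
gundeck:
  config:
    aws:
      account: "123456789012"              # your AWS account id
      arnEnv: "staging"                    # must match the first part of queueName
      queueName: "staging-gundeck-events"
      region: "eu-central-1"
      snsEndpoint: "https://sns.eu-central-1.amazonaws.com"
      sqsEndpoint: "https://sqs.eu-central-1.amazonaws.com"
---
# values/wire-server/secrets.yaml (placeholder values only)
gundeck:
  secrets:
    awsKeyId: "CHANGE-ME"
    awsSecretKey: "CHANGE-ME"
```

The `aws sqs get-queue-url` check described earlier should succeed with exactly this region and queue name before you apply the overrides.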
+ +Next, if you want, you can stop using the `fake-aws-sns` pods in case you ran them before: + +```yaml +# inside override values/fake-aws/values.yaml +fake-aws-sns: + enabled: false +``` + +## Controlling the speed of websocket draining during cannon pod replacement + +The 'cannon' component is responsible for persistent websocket connections. +Normally the default options would slowly and gracefully drain active websocket +connections over a maximum of `(amount of cannon replicas * 30 seconds)` during +the deployment of a new wire-server version. This will lead to a very brief +interruption for Wire clients when their client has to re-connect on the +websocket. + +You're not expected to need to change these settings. + +The following options are only relevant during the restart of cannon itself. +During a restart of nginz or ingress-controller, all websockets will get +severed. If this is to be avoided, see section {ref}`separate-websocket-traffic` + +`drainOpts`: Drain websockets in a controlled fashion when cannon receives a +SIGTERM or SIGINT (this happens when a pod is terminated e.g. during rollout +of a new version). Instead of waiting for connections to close on their own, +the websockets are now severed at a controlled pace. This allows for quicker +rollouts of new versions. + +There is no way to entirely disable this behaviour, two extreme examples below + +- the quickest way to kill cannon is to set `gracePeriodSeconds: 1` and + `minBatchSize: 100000` which would sever all connections immediately; but it's + not recommended as you could DDoS yourself by forcing all active clients to + reconnect at the same time. With this, cannon pod replacement takes only 1 + second per pod. 
+- the slowest way to roll out a new version of cannon without severing websocket + connections for a long time is to set `minBatchSize: 1`, + `millisecondsBetweenBatches: 86400000` and `gracePeriodSeconds: 86400` + which would lead to one single websocket connection being closed immediately, + and all others only after 1 day. With this, cannon pod replacement takes a + full day per pod. + +```yaml +# overrides for wire-server/values.yaml +cannon: + drainOpts: + # The following defaults drain a minimum of 400 connections/second + # for a total of 10000 over 25 seconds + # (if cannon holds more connections, draining will happen at a faster pace) + gracePeriodSeconds: 25 + millisecondsBetweenBatches: 50 + minBatchSize: 20 +``` + +## Control nginz upstreams (routes) into the Kubernetes cluster + +Open unterminated upstreams (routes) into the Kubernetes cluster are a potential +security issue. To prevent this, there are fine-grained settings in the nginz +configuration defining which upstreams should exist. + +### Default upstreams + +Upstreams for services that exist in (almost) every Wire installation are +enabled by default. These are: + +- `brig` +- `cannon` +- `cargohold` +- `galley` +- `gundeck` +- `spar` + +For special setups (as e.g. described in [separate-websocket-traffic]) the +upstreams of these services can be ignored (disabled) with the setting +`nginz.nginx_conf.ignored_upstreams`. + +The most common example is to disable the upstream of `cannon`: + +```yaml +nginz: + nginx_conf: + ignored_upstreams: ["cannon"] +``` + +### Optional upstreams + +There are some services that are usually not deployed on most Wire installations +or are specific to the Wire cloud: + +- `ibis` +- `galeb` +- `calling-test` +- `proxy` + +The upstreams for those are disabled by default and can be enabled by the +setting `nginz.nginx_conf.enabled_extra_upstreams`. 
+ +The most common example is to enable the (extra) upstream of `proxy`: + +```yaml +nginz: + nginx_conf: + enabled_extra_upstreams: ["proxy"] +``` + +### Combining default and extra upstream configurations + +Default and extra upstream configurations are independent of each other. I.e. +`nginz.nginx_conf.ignored_upstreams` and +`nginz.nginx_conf.enabled_extra_upstreams` can be combined in the same +configuration: + +```yaml +nginz: + nginx_conf: + ignored_upstreams: ["cannon"] + enabled_extra_upstreams: ["proxy"] +``` + +(separate-websocket-traffic)= + +## Separate incoming websocket network traffic from the rest of the https traffic + +By default, incoming network traffic for websockets comes through these network +hops: + +Internet -> LoadBalancer -> kube-proxy -> nginx-ingress-controller -> nginz -> cannon + +In order to have graceful draining of websockets when something gets restarted, as it is not easily +possible to implement the graceful draining on nginx-ingress-controller or nginz by itself, there is +a configuration option to get the following network hops: + +Internet -> separate LoadBalancer for cannon only -> kube-proxy -> \[nginz->cannon (2 containers in the same pod)\] + +```yaml +# example on AWS when using cert-manager for TLS certificates and external-dns for DNS records +# (see wire-server/charts/cannon/values.yaml for more possible options) + +# in your wire-server/values.yaml overrides: +cannon: + service: + nginz: + enabled: true + hostname: "nginz-ssl.example.com" + externalDNS: + enabled: true + certManager: + enabled: true + annotations: + service.beta.kubernetes.io/aws-load-balancer-type: "nlb" + service.beta.kubernetes.io/aws-load-balancer-scheme: "internet-facing" +nginz: + nginx_conf: + ignored_upstreams: ["cannon"] +``` + +```yaml +# in your wire-server/secrets.yaml overrides: +cannon: + secrets: + nginz: + zAuth: + publicKeys: ... 
# same values as in nginz.secrets.zAuth.publicKeys +``` + +```yaml +# in your nginx-ingress-services/values.yaml overrides: +websockets: + enabled: false +``` + +## Blocking creation of personal users, new teams + +### In Brig + +There are some unauthenticated end-points that allow arbitrary users on the open internet to do things like create a new team. This is desired in the cloud, but if you run an on-prem setup that is open to the world, you may want to block this. + +Brig has a server option for this: + +```yaml +optSettings: + setRestrictUserCreation: true +``` + +If `setRestrictUserCreation` is `true`, creating new personal users or new teams on your instance from outside your backend installation is impossible. (If you want to be more technical: requests to `/register` that create a new personal account or a new team are answered with `403 forbidden`.) + +On instances with restricted user creation, the site operator with access to the internal REST API can still circumvent the restriction: just log into a brig service pod via ssh and follow the steps in `hack/bin/create_test_team_admins.sh.` + +```{note} +Once the creation of new users and teams has been disabled, it will still be possible to use the [team creation process](https://support.wire.com/hc/en-us/articles/115003858905-Create-a-team) (enter the new team name, email, password, etc), but it will fail/refuse creation late in the creation process (after the «Create team» button is clicked). +``` + +### In the WebApp + +Another way of disabling user registration is by this webapp setting, in `values.yaml`, changing this value from `true` to `false`: + +```yaml +FEATURE_ENABLE_ACCOUNT_REGISTRATION: "false" +``` + +```{note} +If you only disable the creation of users in the webapp, but do not do so in Brig/the backend, a malicious user would be able to use the API to create users, so make sure to disable both. 
+``` + +## You may want + +- more server resources to ensure + [high-availability](#persistence-and-high-availability) +- an email/SMTP server to send out registration emails +- depending on your required functionality, you may or may not need an + [AWS account](https://aws.amazon.com/). See details about + limitations without an AWS account in the following sections. +- one or more people able to maintain the installation +- official support by Wire ([contact us](https://wire.com/pricing/)) + +```{warning} +As of 2020-08-10, the documentation sections below are partially out of date and need to be updated. +``` + +## Metrics/logging + +- {ref}`monitoring` +- {ref}`logging` + +## SMTP server + +**Assumptions**: none + +**Provides**: + +- full control over email sending + +**You need**: + +- SMTP credentials (to allow for email sending; prerequisite for + registering users and running the smoketest) + +**How to configure**: + +- *if using a gmail account, ensure to enable* ['less secure + apps'](https://support.google.com/accounts/answer/6010255?hl=en) +- Add user, SMTP server, connection type to `values/wire-server`'s + values file under `brig.config.smtp` +- Add password in `secrets/wire-server`'s secrets file under + `brig.secrets.smtpPassword` + +## Load balancer on bare metal servers + +**Assumptions**: + +- You installed kubernetes on bare metal servers or virtual machines + that can bind to a public IP address. 
+- **If you are using AWS or another cloud provider, see** [Creating a + cloudprovider-based load + balancer](#load-balancer-on-cloud-provider) **instead** + +**Provides**: + +- Allows using a provided Load balancer for incoming traffic +- SSL termination is done on the ingress controller +- You can access your wire-server backend with given DNS names, over + SSL and from anywhere in the internet + +**You need**: + +- A kubernetes node with a *public* IP address (or internal, if you do + not plan to expose the Wire backend over the Internet but we will + assume you are using a public IP address) + +- DNS records for the different exposed addresses (the ingress depends + on the usage of virtual hosts), namely: + + - `nginz-https.<domain>` + - `nginz-ssl.<domain>` + - `assets.<domain>` + - `webapp.<domain>` + - `account.<domain>` + - `teams.<domain>` + +- A wildcard certificate for the different hosts (`*.<domain>`) - we + assume you want to do SSL termination on the ingress controller + +**Caveats**: + +- Note that there can be only a *single* load balancer, otherwise your + cluster might become + [unstable](https://metallb.universe.tf/installation/) + +**How to configure**: + +``` +cp values/metallb/demo-values.example.yaml values/metallb/demo-values.yaml +cp values/nginx-ingress-services/demo-values.example.yaml values/nginx-ingress-services/demo-values.yaml +cp values/nginx-ingress-services/demo-secrets.example.yaml values/nginx-ingress-services/demo-secrets.yaml +``` + +- Adapt `values/metallb/demo-values.yaml` to provide a list of public + IP address CIDRs that your kubernetes nodes can bind to. +- Adapt `values/nginx-ingress-services/demo-values.yaml` with correct URLs +- Put your TLS cert and key into + `values/nginx-ingress-services/demo-secrets.yaml`.
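The metallb override essentially just lists the address ranges metallb may hand out. A hypothetical sketch of that file is below — the key names here are assumptions, so copy the actual structure from `demo-values.example.yaml` rather than from this sketch:

```yaml
# values/metallb/demo-values.yaml -- hypothetical sketch, not verbatim chart keys
configInline:
  address-pools:
    - name: default
      protocol: layer2
      addresses:
        - 203.0.113.240/28   # public CIDR your kubernetes nodes can bind to
```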
+ +Install `metallb` (for more information see the +[docs](https://metallb.universe.tf)): + +```sh +helm upgrade --install --namespace metallb-system metallb wire/metallb \ + -f values/metallb/demo-values.yaml \ + --wait --timeout 1800 +``` + +Install `nginx-ingress-[controller,services]`: + +```sh +helm upgrade --install --namespace demo demo-nginx-ingress-controller wire/nginx-ingress-controller \ + --wait + +helm upgrade --install --namespace demo demo-nginx-ingress-services wire/nginx-ingress-services \ + -f values/nginx-ingress-services/demo-values.yaml \ + -f values/nginx-ingress-services/demo-secrets.yaml \ + --wait +``` + +Now, create DNS records for the URLs configured above. + +## Load Balancer on cloud-provider + +### AWS + +[Upload the required +certificates](https://aws.amazon.com/premiumsupport/knowledge-center/import-ssl-certificate-to-iam/). +Create and configure `values/aws-ingress/demo-values.yaml` from the +examples. + +``` +helm upgrade --install --namespace demo demo-aws-ingress wire/aws-ingress \ + -f values/aws-ingress/demo-values.yaml \ + --wait +``` + +To give your load balancers public DNS names, create and edit +`values/external-dns/demo-values.yaml`, then run +[external-dns](https://github.com/helm/charts/tree/master/stable/external-dns): + +``` +helm repo update +helm upgrade --install --namespace demo demo-external-dns stable/external-dns \ + --version 1.7.3 \ + -f values/external-dns/demo-values.yaml \ + --wait +``` + +Things to note about external-dns: + +- There can only be a single external-dns chart installed (one per + kubernetes cluster, not one per namespace). So if you already have + one running for another namespace you probably don't need to do + anything. +- You have to add the appropriate IAM permissions to your cluster (see + the + [README](https://github.com/helm/charts/tree/master/stable/external-dns)). +- Alternatively, use the AWS route53 console. + +### Other cloud providers + +This information is not yet available.
If you'd like to contribute by +adding this information for your cloud provider, feel free to read the +[contributing guidelines](https://github.com/wireapp/wire-server-deploy/blob/master/CONTRIBUTING.md) and open a PR. + +## Real AWS services + +**Assumptions**: + +- You installed kubernetes and wire-server on AWS + +**Provides**: + +- Better availability guarantees and possibly better functionality of + AWS services such as SQS and dynamoDB. +- You can use ELBs in front of nginz for higher availability. +- Instead of using an SMTP server, you may use + SES. See the configuration of brig and the `useSES` toggle. + +**You need**: + +- An AWS account + +**How to configure**: + +- Instead of using fake-aws charts, you need to set up the respective + services in your account, create queues, tables etc. Have a look at + the fake-aws-\* charts; you'll need to replicate a similar setup. + + - Once real AWS resources are created, adapt the configuration in + the values and secrets files for wire-server to use real endpoints + and real AWS keys. Look for comments including + `if using real AWS`. + +- Creating AWS resources in a way that makes them easy to create and delete + could be done using either [terraform](https://www.terraform.io/) + or [pulumi](https://pulumi.io/). If you'd like to contribute by + creating such automation, feel free to read the [contributing + guidelines](https://github.com/wireapp/wire-server-deploy/blob/master/CONTRIBUTING.md) and open a PR. + +## Persistence and high-availability + +Currently, due to the way kubernetes and cassandra +[interact](https://github.com/kubernetes/kubernetes/issues/28969), +cassandra cannot reliably be installed on kubernetes. Some people have +tried, e.g. [this +project](https://github.com/instaclustr/cassandra-operator) though at +the time of writing (Nov 2018), this does not yet work as advertised. We +therefore recommend installing cassandra (and possibly also elasticsearch +and redis) separately, i.e.
outside of kubernetes (using 3 nodes each). + +For further high availability: + +- scale your kubernetes cluster to have separate etcd and master nodes + (3 nodes each) +- use 3 instead of 1 replica of each wire-server chart + +## Security + +For a production deployment, you should, as a minimum: + +- Ensure traffic between kubernetes nodes, etcd and databases is + confined to a private network +- Ensure the kubernetes API is unreachable from the public internet (e.g. + put behind VPN/bastion host or restrict IP range) to prevent + [kubernetes + vulnerabilities](https://www.cvedetails.com/vulnerability-list/vendor_id-15867/product_id-34016/Kubernetes-Kubernetes.html) + from affecting you +- Ensure your operating systems get security updates automatically +- Restrict ssh access / harden sshd configuration +- Ensure no pods with public access other than the main ingress are + deployed on your cluster, since, in the current setup, pods have + access to etcd values (and thus any secrets stored there, including + secrets from other pods) +- Ensure developers encrypt any secrets.yaml files + +Additionally, you may wish to build, sign, and host your own docker +images to have increased confidence in those images. We have "signed +container images" on our roadmap. + +## Sign up with a phone number (Sending SMS) + +**Provides**: + +- Registering accounts with a phone number + +**You need**: + +- a [Nexmo](https://www.nexmo.com/) account +- a [Twilio](https://www.twilio.com/) account + +**How to configure**: + +See the `brig` chart for configuration. + +(3rd-party-proxying)= + +## 3rd-party proxying + +You need Giphy/Google/Spotify/Soundcloud API keys (if you want to +support previews by proxying these services) + +See the `proxy` chart for configuration. + +## Routing traffic to other namespaces via nginz + +Some components may run in namespaces different from the one nginz runs in.
For
+instance, the billing service (`ibis`) could be deployed to a separate
+namespace, say `integrations`, but it still needs to get traffic via
+`nginz`. When this is needed, the helm config can be adjusted like this:
+
+```yaml
+# in your wire-server/values.yaml overrides:
+nginz:
+  nginx_conf:
+    upstream_namespace:
+      ibis: integrations
+```
+
+## Marking an installation as self-hosted
+
+In case your wire installation is self-hosted (on-premise, demo installs), it needs to be made aware of that through a configuration option. As of release chart 4.15.0, `"true"` is the default behavior, and nothing needs to be done.
+
+If that option is not set, team-settings will prompt users about "wire for free" and associated functions.
+
+With that option set, all payment-related functionality is disabled.
+
+The option is `IS_SELF_HOSTED`, and you set it in your `values.yaml` file (originally a copy of `prod-values.example.yaml` found in `wire-server-deploy/values/wire-server/`).
+
+In case of a demo install, replace `prod` with `demo`.
+
+First set the option under the `team-settings` section, `envVars` sub-section:
+
+```yaml
+# NOTE: Only relevant if you want team-settings
+team-settings:
+  envVars:
+    IS_SELF_HOSTED: "true"
+```
+
+Second, also set the option under the `account-pages` section:
+
+```yaml
+# NOTE: Only relevant if you want account-pages
+account-pages:
+  envVars:
+    IS_SELF_HOSTED: "true"
+```
+
+(auth-cookie-config)=
+
+## Configuring authentication cookie throttling
+
+Authentication cookies and the related throttling mechanism are described in the *Client API documentation*:
+{ref}`login-cookies`
+
+The maximum number of cookies per account and type is defined by the brig option
+`setUserCookieLimit`. Its default is `32`.
+
+Throttling is configured by the brig option `setUserCookieThrottle`. It is an
+object that contains two fields:
+
+`stdDev`
+
+: The minimal standard deviation of cookie creation timestamps in
+  seconds.
(Default: `3000`,
+  [Wikipedia: Standard deviation](https://en.wikipedia.org/wiki/Standard_deviation))
+
+`retryAfter`
+
+: Wait time in seconds when `stdDev` is violated. (Default: `86400`)
+
+The default values are fine for most use cases. (Generally, you don't have to
+configure them for your installation.)
+
+Condensed example:
+
+```yaml
+brig:
+  optSettings:
+    setUserCookieLimit: 32
+    setUserCookieThrottle:
+      stdDev: 3000
+      retryAfter: 86400
+```
+
+## Configuring searchability
+
+You can configure how search is restricted (or not) based on a user's membership in a given team.
+
+There are two types of searches based on the direction of search:
+
+- **Inbound** searches mean that somebody is searching for you. Configuring the inbound search visibility means that you (or some admin) can configure whether others can find you or not.
+- **Outbound** searches mean that you are searching for somebody. Configuring the outbound search visibility means that some admin can configure whether you can find other users or not.
+
+There are different types of matches:
+
+- **Exact handle** search means that the user is found only if the search query is exactly the user's handle (e.g. searching for `mc` will find `@mc` but not `@mccaine`). This search returns zero or one results.
+- **Full text** search means that the user is found if the search query contains some subset of the user's display name and handle (e.g.
the query `mar` will find `Marco C`, `Omar`, `@amaro`)
+
+### Searching users on the same backend
+
+Search visibility is controlled by three parameters on the backend:
+
+- A team outbound configuration flag, `TeamSearchVisibility`, with possible values `SearchVisibilityStandard`, `SearchVisibilityNoNameOutsideTeam`
+
+  - `SearchVisibilityStandard` means that the user can find other people outside of the team, if the searched person's inbound search settings allow it
+  - `SearchVisibilityNoNameOutsideTeam` means that the user cannot find any user outside the team by full text search (but exact handle search still works)
+
+- A team inbound configuration flag, `SearchVisibilityInbound`, with possible values `SearchableByOwnTeam`, `SearchableByAllTeams`
+
+  - `SearchableByOwnTeam` means that the user can be found only by users in their own team.
+  - `SearchableByAllTeams` means that the user can be found by users in any/all teams.
+
+- A server configuration flag, `searchSameTeamOnly`, with possible values `true`, `false`
+
+  - Note: on the same backend, this affects inbound and outbound searches (simply because all teams will be subject to this behavior)
+  - Setting this to `true` means that all teams on that backend can only find users that belong to their own team
+
+These flags are set on the backend, and the clients do not need to be aware of them.
+
+The flags influence the behavior of the search API endpoint; clients only need to parse the results, which are already filtered for them by the backend.
+
+#### Table of possible outcomes
+
+```{eval-rst}
++------------------------------------+---------------------------------+------------------------------------+------------------------------------------+-------------------------------------------+----------------------------------+--------------------------------------+
+| Is search-er (`uA`) in team (tA)?  | Is search-ed (`uB`) in a team?
| Backend flag `searchSameTeamOnly` | Team `tA`'s flag `TeamSearchVisibility` | Team tB's flag `SearchVisibilityInbound` | Result of exact search for `uB` | Result of full-text search for `uB` | ++====================================+=================================+====================================+==========================================+===========================================+==================================+======================================+ +| **Search within the same team** | ++------------------------------------+---------------------------------+------------------------------------+------------------------------------------+-------------------------------------------+----------------------------------+--------------------------------------+ +| Yes, `tA` | Yes, the same team `tA` | Irrelevant | Irrelevant | Irrelevant | Found | Found | ++------------------------------------+---------------------------------+------------------------------------+------------------------------------------+-------------------------------------------+----------------------------------+--------------------------------------+ +| **Outbound search unrestricted** | ++------------------------------------+---------------------------------+------------------------------------+------------------------------------------+-------------------------------------------+----------------------------------+--------------------------------------+ +| Yes, `tA` | Yes, another team tB | false | `SearchVisibilityStandard` | `SearchableByAllTeams` | Found | Found | ++------------------------------------+---------------------------------+------------------------------------+------------------------------------------+-------------------------------------------+----------------------------------+--------------------------------------+ +| Yes, `tA` | Yes, another team tB | false | `SearchVisibilityStandard` | `SearchableByOwnTeam` | Found | Not found | 
++------------------------------------+---------------------------------+------------------------------------+------------------------------------------+-------------------------------------------+----------------------------------+--------------------------------------+ +| **Outbound search restricted** | ++------------------------------------+---------------------------------+------------------------------------+------------------------------------------+-------------------------------------------+----------------------------------+--------------------------------------+ +| Yes, `tA` | Yes, another team tB | true | Irrelevant | Irrelevant | Not found | Not found | ++------------------------------------+---------------------------------+------------------------------------+------------------------------------------+-------------------------------------------+----------------------------------+--------------------------------------+ +| Yes, `tA` | Yes, another team tB | false | `SearchVisibilityNoNameOutsideTeam` | Irrelevant | Found | Not found | ++------------------------------------+---------------------------------+------------------------------------+------------------------------------------+-------------------------------------------+----------------------------------+--------------------------------------+ +| Yes, `tA` | No | false | `SearchVisibilityNoNameOutsideTeam` | There’s no team B | Found | Not found | ++------------------------------------+---------------------------------+------------------------------------+------------------------------------------+-------------------------------------------+----------------------------------+--------------------------------------+ +``` + +#### Changing the configuration on the server + +To change the `searchSameTeamOnly` setting on the backend, edit the `values.yaml.gotmpl` file for the wire-server chart at this nested level of the configuration: + +```yaml +brig: + # ... + config: + # ... 
+  optSettings:
+    # ...
+    setSearchSameTeamOnly: true
+```
+
+If `setSearchSameTeamOnly` is set to `true`, then `TeamSearchVisibility` is forced to be `SearchVisibilityNoNameOutsideTeam` for all teams.
+
+#### Changing the default configuration for all teams
+
+If `setSearchSameTeamOnly` is set to `false` (or missing from the configuration), then the default value of `TeamSearchVisibility` can be configured at this level of the configuration of the `value.yaml.gotmpl` file of the wire-server chart:
+
+```yaml
+galley:
+  #...
+  config:
+    #...
+    settings:
+      #...
+      featureFlags:
+        #...
+        teamSearchVisibility: enabled-by-default
+```
+
+This default value applies to all teams for which no explicit configuration of `TeamSearchVisibility` has been set.
+
+### Searching users on another (federated) backend
+
+For federated search, the table above does not apply; see the following table.
+
+```{note}
+Incoming federated searches (i.e. searches from one backend to another) are always considered to be performed by a team user, even if they are performed by a personal user.
+
+This is because the incoming search request does not carry the information of whether the user performing the search was in a team or not.
+
+So we have to make one assumption, and we assume that they were in a team.
+```
+
+Allowing search is done at the backend configuration level by the sysadmin:
+
+- Outbound search restrictions (`searchSameTeamOnly`, `TeamSearchVisibility`) do not apply to federated searches
+
+- A configuration setting `FederatedUserSearchPolicy` per federating domain with these possible values:
+
+  - `no_search`: The federating backend is not allowed to search any users (either by exact handle or full-text).
+  - `exact_handle_search`: The federating backend may only search by exact handle.
+  - `full_search`: The federating backend may search users by full text search on display name and handle.
The search results are additionally affected by the `SearchVisibilityInbound` setting of each team on the backend.
+
+- The `SearchVisibilityInbound` setting applies. Since the default value for teams is `SearchableByOwnTeam`, this means that for a team to be full-text searchable by users on a federating backend, both
+
+  - `FederatedUserSearchPolicy` needs to be set to `full_search` for the federating backend
+  - Any team that wants to be full-text searchable needs to be set to `SearchableByAllTeams`
+
+The configuration value `FederatedUserSearchPolicy` is per federated domain, e.g. in the values of the wire-server chart:
+
+```yaml
+brig:
+  config:
+    optSettings:
+      setFederationDomainConfigs:
+        - domain: a.example.com
+          search_policy: no_search
+        - domain: b.example.com
+          search_policy: full_search
+```
+
+#### Table of possible outcomes
+
+In the following table, user `uA` on backend A is searching for user `uB` on team `tB` on backend B.
+
+Any of the flags set for searching users on the same backend are ignored.
+
+It's worth noting that if two users are on two separate backends, they are also guaranteed to be on two separate teams, as teams cannot spread across backends.
+
+| Who is searching       | Backend B flag `FederatedUserSearchPolicy` | Team `tB`'s flag `SearchVisibilityInbound` | Result of exact search for `uB` | Result of full-text search for `uB` |
+| ---------------------- | ------------------------------------------ | ------------------------------------------ | ------------------------------- | ----------------------------------- |
+| user `uA` on backend A | `no_search`                                | Irrelevant                                 | Not found                       | Not found                           |
+| user `uA` on backend A | `exact_handle_search`                      | Irrelevant                                 | Found                           | Not found                           |
+| user `uA` on backend A | `full_search`                              | `SearchableByOwnTeam`                      | Found                           | Not found                           |
+| user `uA` on backend A | `full_search`                              | `SearchableByAllTeams`                     | Found                           | Found                               |
+
+### Changing the settings for a given team
+
+If you need to change searchability for a specific team (rather than the entire backend, as above), you need to make specific calls to the API.
+
+#### Team searchVisibility
+
+The team flag `searchVisibility` affects outbound user searches.
+
+If it is set to `no-name-outside-team` for a team, then all users of that team will no longer be able to find users that are not part of their team when searching.
+
+This also includes finding other users by providing their exact handle. By default it is set to `standard`, which doesn't put any additional restrictions on outbound searches.
+
+The setting can be changed via the API (for more details on how to make the API calls with `curl`, read further):
+
+```
+GET /teams/{tid}/search-visibility
+  -- Shows the current TeamSearchVisibility value for the given team
+
+PUT /teams/{tid}/search-visibility
+  -- Set specific search visibility for the team
+
+Possible values for the request body:
+  "standard"
+  "no-name-outside-team"
+```
+
+The team feature flag `teamSearchVisibility` determines whether it is allowed to change the `searchVisibility` setting or not.
+
+The default is `disabled-by-default`.
+
+```{note}
+Whenever this feature setting is disabled, the `searchVisibility` setting will be reset to `standard`.
+```
+
+The default setting that applies to all teams on the instance can be defined in the configuration:
+
+```yaml
+settings:
+  featureFlags:
+    teamSearchVisibility: disabled-by-default # or enabled-by-default
+```
+
+#### TeamFeature searchVisibilityInbound
+
+The team feature flag `searchVisibilityInbound` affects whether the team's users are searchable by users from other teams.
+
+The default setting is `searchable-by-own-team`, which hides users from search results by users from other teams.
+
+If it is set to `searchable-by-all-teams`, then users of this team may be included in the results of search queries by other users.
+
+```{note}
+The configuration of this flag does not affect search results when the search query matches the handle exactly.
+
+If the exact handle is provided, then any user on the instance can find users.
+```
+
+This team feature flag can only be toggled by site administrators with direct access to the galley instance (for more details on how to make the API calls with `curl`, read further):
+
+```
+PUT /i/teams/{tid}/features/search-visibility-inbound
+```
+
+With JSON body:
+
+```json
+{"status": "enabled"}
+```
+
+or
+
+```json
+{"status": "disabled"}
+```
+
+Where `enabled` is equivalent to `searchable-by-all-teams` and `disabled` is equivalent to `searchable-by-own-team`.
+
+The default setting that applies to all teams on the instance can be defined in the configuration:
+
+```yaml
+searchVisibilityInbound:
+  defaults:
+    status: enabled # OR disabled
+```
+
+Individual teams can overwrite the default setting with API calls as per above.
+
+#### Making the API calls
+
+To make API calls to set an explicit configuration for `TeamSearchVisibilityInbound` per team, you first need to know the Team ID, which can be found in the team settings app.
+
+It is a `UUID` with a format like this: `dcbedf9a-af2a-4f43-9fd5-525953a919e1`.
+
+In the following, we will use this Team ID as an example; please replace it with your own team ID.
+
+Next, find the name of a `galley` pod by looking at the output of this command:
+
+```sh
+kubectl -n wire get pods
+```
+
+The output will look something like this:
+
+```
+...
+galley-5f4787fdc7-9l64n ...
+galley-migrate-data-lzz5j ...
+...
+```
+
+Select any of the galley pods; for example, we will use `galley-5f4787fdc7-9l64n`.
+
+Next, set up a port-forwarding from your local machine's port `9000` to the galley's port `8080` by running:
+
+```sh
+kubectl port-forward -n wire galley-5f4787fdc7-9l64n 9000:8080
+```
+
+Keep this command running until the end of these instructions.
+
+Please run the following commands in a separate terminal while keeping the terminal that establishes the port-forwarding open.
+
+To see the team's current setting, run:
+
+```sh
+curl -XGET http://localhost:9000/i/teams/dcbedf9a-af2a-4f43-9fd5-525953a919e1/features/searchVisibilityInbound
+
+# {"lockStatus":"unlocked","status":"disabled"}
+```
+
+Where `disabled` corresponds to `SearchableByOwnTeam` and `enabled` corresponds to `SearchableByAllTeams`.
+
+To change the `TeamSearchVisibilityInbound` to `SearchableByAllTeams` for the team, run:
+
+```sh
+curl -XPUT -H 'Content-Type: application/json' -d "{\"status\": \"enabled\"}" http://localhost:9000/i/teams/dcbedf9a-af2a-4f43-9fd5-525953a919e1/features/searchVisibilityInbound
+```
+
+To change the `TeamSearchVisibilityInbound` to `SearchableByOwnTeam` for the team, run:
+
+```sh
+curl -XPUT -H 'Content-Type: application/json' -d "{\"status\": \"disabled\"}" http://localhost:9000/i/teams/dcbedf9a-af2a-4f43-9fd5-525953a919e1/features/searchVisibilityInbound
+```
+
+## Configuring classified domains
+
+As a backend administrator, if you want to control which other backends (identified by their domain) are "classified", change the following `galley` configuration in the `value.yaml.gotmpl` file of the wire-server chart:
+
+```yaml
+galley:
+  replicaCount: 1
+  config:
+    ...
+    featureFlags:
+      ...
+      classifiedDomains:
+        status: enabled
+        config:
+          domains: ["domain-that-is-classified.link"]
+      ...
+```
+
+This is not only a `backend` configuration, but also a `team` configuration/feature.
+
+This means that different combinations of configurations will have different results.
+
+Here is a table to navigate the possible configurations:
+
+| Backend Config enabled/disabled | Backend Config Domains  | Team Config enabled/disabled | Team Config Domains     | User's view                      |
+| ------------------------------- | ----------------------- | ---------------------------- | ----------------------- | -------------------------------- |
+| Enabled                         | \[domain1.example.com\] | Not configured               | Not configured          | Enabled, \[domain1.example.com\] |
+| Enabled                         | \[domain1.example.com\] | Enabled                      | Not configured          | Enabled, \[domain1.example.com\] |
+| Enabled                         | \[domain1.example.com\] | Enabled                      | \[domain2.example.com\] | Enabled, Undefined               |
+| Enabled                         | \[domain1.example.com\] | Disabled                     | Anything                | Undefined                        |
+| Disabled                        | Anything                | Not configured               | Not configured          | Disabled, no domains             |
+| Disabled                        | Anything                | Enabled                      | \[domain2.example.com\] | Undefined                        |
+
+The table assumes the following:
+
+- When the backend level config says that this feature is enabled, it is illegal to not specify domains at the backend level.
+- When the backend level config says that this feature is disabled, the list of domains is ignored.
+- When the team level feature is disabled, the accompanying domains are ignored.
+
+## S3 Addressing Style
+
+S3 can either be addressed in path style, i.e.
+`https://<s3-endpoint>/<bucket-name>/<object-key>`, or vhost style, i.e.
+`https://<bucket-name>.<s3-endpoint>/<object-key>`. AWS's S3 offering has deprecated
+path style addressing for S3 and completely disabled it for buckets created
+after 30 Sep 2020.
+
+However, other object storage providers (especially self-deployed ones like MinIO)
+may not support vhost style addressing yet (or ever?).
Users of such buckets
+should set this option to "path":
+
+```yaml
+cargohold:
+  aws:
+    s3AddressingStyle: path
+```
+
+Installations using the S3 service provided by AWS should use "auto". This option
+ensures that vhost style is only used when it is possible to construct a
+valid hostname from the bucket name, i.e. when the bucket name doesn't contain a '.'.
+Having a '.' in the bucket name causes TLS validation to fail, hence vhost style is not
+used in that case:
+
+```yaml
+cargohold:
+  aws:
+    s3AddressingStyle: auto
+```
+
+Using "virtual" as an option is only useful in situations where vhost style
+addressing must be used even if it is not possible to construct a valid hostname
+from the bucket name, or where the S3 service provider can ensure that a correct certificate
+is issued for buckets which contain one or more '.'s in the name:
+
+```yaml
+cargohold:
+  aws:
+    s3AddressingStyle: virtual
+```
+
+When this option is unspecified, wire-server defaults to path style addressing
+to ensure a smooth transition for older deployments. diff --git a/docs/src/developer/developer/how-to.md b/docs/src/developer/developer/how-to.md index 0ed606399b..14c0e278d9 100644 --- a/docs/src/developer/developer/how-to.md +++ b/docs/src/developer/developer/how-to.md @@ -2,6 +2,13 @@ The following assume you have a working developer environment with all the dependencies listed in [./dependencies.md](./dependencies.md) available to you.
+If you want to deploy to the CI kubernetes cluster (how-tos below), you need to set the `KUBECONFIG` env var, where `$cailleach_repo` is replaced by your local checkout of the `cailleach` repository.
+```
+export KUBECONFIG=$cailleach_repo/environments/kube-ci/kubeconfig.dec
+```
+Check that this file exists by running `ls $KUBECONFIG`.
+ + ## How to look at the swagger docs / UI locally Terminal 1: diff --git a/docs/src/developer/developer/index.md b/docs/src/developer/developer/index.md new file mode 100644 index 0000000000..77e35760cf --- /dev/null +++ b/docs/src/developer/developer/index.md @@ -0,0 +1,10 @@ +# Developer + +```{toctree} +:caption: 'Contents:' +:glob: true +:numbered: true +:titlesonly: true + +** +``` diff --git a/docs/src/developer/developer/index.rst b/docs/src/developer/developer/index.rst deleted file mode 100644 index a8fefaa770..0000000000 --- a/docs/src/developer/developer/index.rst +++ /dev/null @@ -1,10 +0,0 @@ -Developer -========= - -.. toctree:: - :titlesonly: - :numbered: - :caption: Contents: - :glob: - - ** diff --git a/docs/src/developer/index.rst b/docs/src/developer/index.md similarity index 52% rename from docs/src/developer/index.rst rename to docs/src/developer/index.md index b48dbecae0..59cf4fd92e 100644 --- a/docs/src/developer/index.rst +++ b/docs/src/developer/index.md @@ -1,19 +1,19 @@ -Notes for developers -==================== +# Notes for developers -If you are an on-premise operator (administrating your own self-hosted installation of wire-server), you may want to go back to `docs.wire.com `_ and ignore this section of the docs. +If you are an on-premise operator (administrating your own self-hosted installation of wire-server), you may want to go back to [docs.wire.com](https://docs.wire.com/) and ignore this section of the docs. -If you are a wire end-user, please check out our `support pages `_. +If you are a wire end-user, please check out our [support pages](https://support.wire.com/). What you need to know as a user of the Wire backend: concepts, features, and API. We want to keep these up to date. They could benefit from some re-ordering, and they are far from complete, but we hope they will still help you. -.. 
toctree:: -   :titlesonly: -   :caption: Contents: -   :glob: +```{toctree} +:caption: 'Contents:' +:glob: true +:titlesonly: true -   developer/index.rst -   reference/index.rst +developer/index.rst +reference/index.rst +``` diff --git a/docs/src/developer/reference/config-options.md b/docs/src/developer/reference/config-options.md index f48816fdf8..2828beb7cd 100644 --- a/docs/src/developer/reference/config-options.md +++ b/docs/src/developer/reference/config-options.md @@ -638,3 +638,37 @@ optSettings: # ... setOAuthEnabled: [true|false] ```
+
+#### Disabling API versions
+
+It is possible to disable one or more API versions. When an API version is disabled, it won't be advertised on the `GET /api-version` endpoint, neither in the `supported` nor in the `development` section. Requests made to any endpoint of a disabled API version will result in the same error response as a request made to an API version that does not exist.
+
+Each of the services brig, cannon, cargohold, galley, gundeck, proxy, and spar should be configured with the same set of disabled API versions in its values.yaml config file.
+
+
+For example, to disable API version v3, you need to configure:
+
+```
+# brig's values.yaml
+config.optSettings.setDisabledAPIVersions: [ 3 ]
+
+# cannon's values.yaml
+config.disabledAPIVersions: [ 3 ]
+
+# cargohold's values.yaml
+config.settings.disabledAPIVersions: [ 3 ]
+
+# galley's values.yaml
+config.settings.disabledAPIVersions: [ 3 ]
+
+# gundeck's values.yaml
+config.disabledAPIVersions: [ 3 ]
+
+# proxy's values.yaml
+config.disabledAPIVersions: [ 3 ]
+
+# spar's values.yaml
+config.disabledAPIVersions: [ 3 ]
+```
+
+The default setting is that no API version is disabled.
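Whether a version is still advertised can be checked from the `GET /api-version` response. The sketch below parses a sample response; the exact JSON shape and the hostname are assumptions (in a real deployment you would fetch the response first, e.g. `response=$(curl -s https://nginz-https.example.com/api-version)`):

```shell
# Sample /api-version response (shape assumed); extract the advertised
# `supported` versions and check that version 3 is not among them.
response='{"supported":[0,1,2],"development":[4],"domain":"example.com"}'
supported=$(printf '%s' "$response" | sed -n 's/.*"supported":\[\([0-9,]*\)\].*/\1/p')
case ",$supported," in
  *,3,*) echo "v3 advertised" ;;
  *)     echo "v3 not advertised" ;;
esac
```

The same check can be repeated for the `development` section, since a disabled version disappears from both.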
diff --git a/docs/src/developer/reference/index.md b/docs/src/developer/reference/index.md new file mode 100644 index 0000000000..4b6e82f195 --- /dev/null +++ b/docs/src/developer/reference/index.md @@ -0,0 +1,10 @@ +# Reference + +```{toctree} +:caption: 'Contents:' +:glob: true +:numbered: true +:titlesonly: true + +** +``` diff --git a/docs/src/developer/reference/index.rst b/docs/src/developer/reference/index.rst deleted file mode 100644 index 1eb9feedba..0000000000 --- a/docs/src/developer/reference/index.rst +++ /dev/null @@ -1,10 +0,0 @@ -Reference -========= - -.. toctree:: - :titlesonly: - :numbered: - :caption: Contents: - :glob: - - ** diff --git a/docs/src/developer/reference/spar-braindump.md b/docs/src/developer/reference/spar-braindump.md index f32532108b..dcee5847e9 100644 --- a/docs/src/developer/reference/spar-braindump.md +++ b/docs/src/developer/reference/spar-braindump.md @@ -113,7 +113,7 @@ export IDP_ID=... Copy the new metadata file to one of your spar instances. -Ssh into it. If you can't, [the sso docs](../../understand/single-sign-on/main.rst) explain how you can create a +Ssh into it. If you can't, [the sso docs](../../how-to/single-sign-on/understand/main.rst) explain how you can create a bearer token if you have the admin's login credentials. If you follow that approach, you need to replace all mentions of `-H'Z-User ...'` with `-H'Authorization: Bearer ...'` in the following, and you won't need diff --git a/docs/src/how-to/administrate/cassandra.md b/docs/src/how-to/administrate/cassandra.md new file mode 100644 index 0000000000..c75439d626 --- /dev/null +++ b/docs/src/how-to/administrate/cassandra.md @@ -0,0 +1,63 @@ +# Cassandra + +```{eval-rst} +.. 
include:: includes/intro.rst
+```
+
+This section only covers the bare minimum; for more information, see the [cassandra
+documentation](https://cassandra.apache.org/doc/latest/)
+
+(check-the-health-of-a-cassandra-node)=
+
+## Check the health of a Cassandra node
+
+To check the health of a Cassandra node, run the following command:
+
+```sh
+ssh <server> /opt/cassandra/bin/nodetool status
+```
+
+or, if you are running a newer version of wire-server (although it should be backwards compatible):
+
+```sh
+ssh <server> /opt/cassandra/bin/nodetool -h ::FFFF:127.0.0.1 status
+```
+
+You should see a list of nodes like this:
+
+```sh
+Datacenter: datacenter1
+=======================
+Status=Up/Down
+|/ State=Normal/Leaving/Joining/Moving
+--  Address         Load       Tokens       Owns (effective)  Host ID                               Rack
+UN  192.168.220.13  9.51MiB    256          100.0%            3dba71c8-eea7-4e35-8f35-4386e7944894  rack1
+UN  192.168.220.23  9.53MiB    256          100.0%            3af56f1f-7685-4b5b-b73f-efdaa371e96e  rack1
+UN  192.168.220.33  9.55MiB    256          100.0%            RANDOMLY-MADE-UUID-GOES-INTHISPLACE!  rack1
+```
+
+A `UN` at the beginning of the line refers to a node that is `Up` and `Normal`.
+
+## How to inspect tables and data manually
+
+```sh
+cqlsh
+# from the cqlsh shell
+describe keyspaces
+use <keyspace>;
+describe tables;
+select * from <tablename> WHERE <column> = <value> LIMIT 10;
+```
+
+## How to rolling-restart a cassandra cluster
+
+For maintenance you may need to restart the cluster.
+
+On each server one by one:
+
+1. check your cluster is healthy: `nodetool status` or `nodetool -h ::FFFF:127.0.0.1 status` (in newer versions)
+2. `nodetool drain && systemctl stop cassandra` (to stop accepting writes and flush data to disk; then stop the process)
+3. do any operation you need, if any
+4. Start the cassandra daemon process: `systemctl start cassandra`
+5. Wait for your cluster to be healthy again.
+6. Do the same on the next server.
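The health checks in steps 1 and 5 can be scripted. The sketch below only parses a saved `nodetool status` listing; the sample is adapted from the output shown above, with one node marked `DN` (Down/Normal) for illustration — in practice you would capture the real command output:

```shell
# Count nodes in a `nodetool status` listing whose state is not Up/Normal (UN).
status_output='UN  192.168.220.13  9.51MiB  256  100.0%  3dba71c8-eea7-4e35-8f35-4386e7944894  rack1
UN  192.168.220.23  9.53MiB  256  100.0%  3af56f1f-7685-4b5b-b73f-efdaa371e96e  rack1
DN  192.168.220.33  9.55MiB  256  100.0%  4c0ffee0-aaaa-bbbb-cccc-000000000000  rack1'

# First field is the two-letter status code; anything other than UN is suspect.
not_normal=$(printf '%s\n' "$status_output" | awk '$1 ~ /^[UD][NLJM]$/ && $1 != "UN"' | wc -l)
echo "nodes not Up/Normal: $not_normal"
```

Only proceed to the next server when this count is zero for the whole cluster.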
diff --git a/docs/src/how-to/administrate/cassandra.rst b/docs/src/how-to/administrate/cassandra.rst deleted file mode 100644 index 180a8f2a8c..0000000000 --- a/docs/src/how-to/administrate/cassandra.rst +++ /dev/null @@ -1,65 +0,0 @@ -Cassandra --------------------------- - -.. include:: includes/intro.rst - -This section only covers the bare minimum, for more information, see the `cassandra -documentation `__ - -Check the health of a Cassandra node -~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ - -To check the health of a Cassandra node, run the following command: - -.. code:: sh - - ssh /opt/cassandra/bin/nodetool status - -or if you are running a newer version of wire-server (altough it should be backwards compatibile) - -.. code:: sh - - ssh /opt/cassandra/bin/nodetool -h ::FFFF:127.0.0.1 status - -You should see a list of nodes like this: - -.. code:: sh - - Datacenter: datacenter1 - ======================= - Status=Up/Down - |/ State=Normal/Leaving/Joining/Moving - -- Address Load Tokens Owns (effective) Host ID Rack - UN 192.168.220.13 9.51MiB 256 100.0% 3dba71c8-eea7-4e35-8f35-4386e7944894 rack1 - UN 192.168.220.23 9.53MiB 256 100.0% 3af56f1f-7685-4b5b-b73f-efdaa371e96e rack1 - UN 192.168.220.33 9.55MiB 256 100.0% RANDOMLY-MADE-UUID-GOES-INTHISPLACE! rack1 - -A ``UN`` at the begginng of the line, refers to a node that is ``Up`` and ``Normal``. - -How to inspect tables and data manually -~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ - -.. code:: sh - - cqlsh - # from the cqlsh shell - describe keyspaces - use ; - describe tables; - select * from WHERE = LIMIT 10; - -How to rolling-restart a cassandra cluster -~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ - -For maintenance you may need to restart the cluster. - -On each server one by one: - -1. check your cluster is healthy: ``nodetool status`` or ``nodetool -h ::FFFF:127.0.0.1 status`` (in newer versions) -2. 
``nodetool drain && systemctl stop cassandra`` (to stop accepting writes and flush data to disk; then stop the process) -3. do any operation you need, if any -4. Start the cassandra daemon process: ``systemctl start cassandra`` -5. Wait for your cluster to be healthy again. -6. Do the same on the next server. - - diff --git a/docs/src/how-to/administrate/elasticsearch.md b/docs/src/how-to/administrate/elasticsearch.md new file mode 100644 index 0000000000..f128a0c1d6 --- /dev/null +++ b/docs/src/how-to/administrate/elasticsearch.md @@ -0,0 +1,127 @@ +# Elasticsearch + +```{eval-rst} +.. include:: includes/intro.rst +``` + +For more information, see the [elasticsearch +documentation](https://www.elastic.co/guide/en/elasticsearch/reference/current/index.html) + +(restart-elasticsearch)= + +## How to rolling-restart an elasticsearch cluster + +For maintenance you may need to restart the cluster. + +On each server one by one: + +1. check your cluster is healthy (see above) +2. stop shard allocation: + +```sh +ES_IP= +curl -sSf -XPUT http://localhost:9200/_cluster/settings -H 'Content-Type: application/json' -d "{ \"transient\" : {\"cluster.routing.allocation.exclude._ip\": \"$ES_IP\" }}"; echo; +``` + +You should expect some output like this: + +```sh +{"acknowledged":true,"persistent":{},"transient":{"cluster":{"routing":{"allocation":{"exclude":{"_ip":""}}}}}} +``` + +3. Stop the elasticsearch daemon process: `systemctl stop elasticsearch` +4. do any operation you need, if any +5. Start the elasticsearch daemon process: `systemctl start elasticsearch` +6. re-enable shard allocation: + +```sh +curl -sSf -XPUT http://localhost:9200/_cluster/settings -H 'Content-Type: application/json' -d "{ \"transient\" : {\"cluster.routing.allocation.exclude._ip\": null }}"; echo; +``` + +You should expect some output like this from the above command: + +```sh +{"acknowledged":true,"persistent":{},"transient":{}} +``` + +6. Wait for your cluster to be healthy again. +7. 
Do the same on the next server.
+
+## How to manually look into what is stored in elasticsearch
+
+See also the elasticsearch sections in {ref}`investigative-tasks`.
+
+(check-the-health-of-an-elasticsearch-node)=
+
+## Check the health of an elasticsearch node
+
+To check the health of an elasticsearch node, run the following command:
+
+```sh
+ssh curl localhost:9200/_cat/health
+```
+
+You should see output looking like this:
+
+```
+1630250355 15:18:55 elasticsearch-directory green 3 3 17 6 0 0 0 - 100.0%
+```
+
+Here, the `green` denotes good node health, and the `3 3` denotes 3 running nodes.
+
+## Check cluster health
+
+This is the command to check the health of the entire cluster:
+
+```sh
+ssh curl 'http://localhost:9200/_cluster/health?pretty'
+```
+
+## List cluster nodes
+
+This is the command to list the nodes in the cluster:
+
+```sh
+ssh curl 'http://localhost:9200/_cat/nodes?v&h=id,ip,name'
+```
+
+## Troubleshooting
+
+Description:
+**ES nodes ran out of disk space** and the error message says: `"blocked by: [FORBIDDEN/12/index read-only / allow delete (api)];"`
+
+Solution:
+
+1. Connect to the node:
+
+```sh
+ssh
+```
+
+2. Clean up disk (e.g. `apt autoremove` on all nodes), then restart machines and/or the elasticsearch process
+
+```sh
+sudo apt autoremove
+sudo reboot
+```
+
+As always, make sure you {ref}`check the health of the process <check-the-health-of-an-elasticsearch-node>` before and after the reboot.
+
+3. To get the elasticsearch cluster out of *read-only* mode, run:
+
+```sh
+curl -X PUT -H 'Content-Type: application/json' http://localhost:9200/_all/_settings -d '{"index.blocks.read_only_allow_delete": null}'
+```
+
+4. Trigger reindexing: From a kubernetes machine, in one terminal:
+
+```sh
+# The following depends on your namespace where you installed wire-server. By default the namespace is called 'wire'. 
+kubectl --namespace wire port-forward svc/brig 9999:8080 +``` + +And in a second terminal trigger the reindex: + +```sh +curl -v -X POST localhost:9999/i/index/reindex +``` diff --git a/docs/src/how-to/administrate/elasticsearch.rst b/docs/src/how-to/administrate/elasticsearch.rst deleted file mode 100644 index 3a101a7645..0000000000 --- a/docs/src/how-to/administrate/elasticsearch.rst +++ /dev/null @@ -1,134 +0,0 @@ -Elasticsearch ------------------------------- - -.. include:: includes/intro.rst - -For more information, see the `elasticsearch -documentation `__ - - -.. _restart-elasticsearch: - -How to rolling-restart an elasticsearch cluster -~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ - -For maintenance you may need to restart the cluster. - -On each server one by one: - -1. check your cluster is healthy (see above) -2. stop shard allocation: - -.. code:: sh - - ES_IP= - curl -sSf -XPUT http://localhost:9200/_cluster/settings -H 'Content-Type: application/json' -d "{ \"transient\" : {\"cluster.routing.allocation.exclude._ip\": \"$ES_IP\" }}"; echo; - -You should expect some output like this: - -.. code:: sh - - {"acknowledged":true,"persistent":{},"transient":{"cluster":{"routing":{"allocation":{"exclude":{"_ip":""}}}}}} - -3. Stop the elasticsearch daemon process: ``systemctl stop elasticsearch`` -4. do any operation you need, if any -5. Start the elasticsearch daemon process: ``systemctl start elasticsearch`` -6. re-enable shard allocation: - -.. code:: sh - - curl -sSf -XPUT http://localhost:9200/_cluster/settings -H 'Content-Type: application/json' -d "{ \"transient\" : {\"cluster.routing.allocation.exclude._ip\": null }}"; echo; - -You should expect some output like this from the above command: - -.. code:: sh - - {"acknowledged":true,"persistent":{},"transient":{}} - -6. Wait for your cluster to be healthy again. -7. Do the same on the next server. 
- -How to manually look into what is stored in elasticsearch -~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ - -See also the elasticsearch sections in :ref:`investigative_tasks`. - - -Check the health of an elasticsearch node -~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ - -To check the health of an elasticsearch node, run the following command: - -.. code:: sh - - ssh curl localhost:9200/_cat/health - -You should see output looking like this: - -.. code:: - - 1630250355 15:18:55 elasticsearch-directory green 3 3 17 6 0 0 0 - 100.0% - -Here, the ``green`` denotes good node health, and the ``3 3`` denotes 3 running nodes. - -Check cluster health -~~~~~~~~~~~~~~~~~~~~ - -This is the command to check the health of the entire cluster: - -.. code:: sh - - ssh curl 'http://localhost:9200/_cluster/health?pretty' - - -List cluster nodes -~~~~~~~~~~~~~~~~~~ - -This is the command to list the nodes in the cluster: - -.. code:: sh - - ssh curl 'http://localhost:9200/_cat/nodes?v&h=id,ip,name' - - -Troubleshooting -~~~~~~~~~~~~~~~ - -Description: -**ES nodes ran out of disk space** and error message says: ``"blocked by: [FORBIDDEN/12/index read-only / allow delete (api)];"`` - -Solution: - -1. Connect to the node: - -.. code:: sh - - ssh - -2. Clean up disk (e.g. ``apt autoremove`` on all nodes), then restart machines and/or the elasticsearch process - -.. code:: sh - - sudo apt autoremove - sudo reboot - -As always, and as explained in the `operations/procedures page `__, make sure you `check the health of the process `__. before and after the reboot. - -3. Get the elastichsearch cluster out of *read-only* mode, run: - -.. code:: sh - - curl -X PUT -H 'Content-Type: application/json' http://localhost:9200/_all/_settings -d '{"index.blocks.read_only_allow_delete": null}' - -4. Trigger reindexing: From a kubernetes machine, in one terminal: - -.. code:: sh - - # The following depends on your namespace where you installed wire-server. 
By default the namespace is called 'wire'. - kubectl --namespace wire port-forward svc/brig 9999:8080 - -And in a second terminal trigger the reindex: - -.. code:: sh - - curl -v -X POST localhost:9999/i/index/reindex diff --git a/docs/src/how-to/administrate/etcd.md b/docs/src/how-to/administrate/etcd.md new file mode 100644 index 0000000000..a18c801f87 --- /dev/null +++ b/docs/src/how-to/administrate/etcd.md @@ -0,0 +1,261 @@ +# Etcd + +```{eval-rst} +.. include:: includes/intro.rst +``` + +This section only covers the bare minimum, for more information, see the [etcd documentation](https://etcd.io/) + +(how-to-see-cluster-health)= + +## How to see cluster health + +If the file `/usr/local/bin/etcd-health.sh` is available, you can run + +```sh +etcd-health.sh +``` + +which should produce an output similar to: + +``` +Cluster-Endpoints: https://127.0.0.1:2379 +cURL Command: curl -X GET https://127.0.0.1:2379/v2/members +member 7c37f7dc10558fae is healthy: got healthy result from https://10.10.1.11:2379 +member cca4e6f315097b3b is healthy: got healthy result from https://10.10.1.10:2379 +member e767162297c84b1e is healthy: got healthy result from https://10.10.1.12:2379 +cluster is healthy +``` + +If that helper file is not available, create it with the following contents: + +```bash +#!/usr/bin/env bash + +HOST=$(hostname) + +etcdctl --endpoints https://127.0.0.1:2379 --ca-file=/etc/ssl/etcd/ssl/ca.pem --cert-file=/etc/ssl/etcd/ssl/member-$HOST.pem --key-file=/etc/ssl/etcd/ssl/member-$HOST-key.pem --debug cluster-health +``` + +and then make it executable: `chmod +x /usr/local/bin/etcd-health.sh` + +## How to inspect tables and data manually + +```sh +TODO +``` + +(how-to-rolling-restart-an-etcd-cluster)= + +## How to rolling-restart an etcd cluster + +Etcd is a consistent and partition tolerant key-value store. 
This means that
+Etcd nodes can be restarted (one by one) with no impact on the consistency of
+data, but there may be a short window in which the database cannot process
+writes. Etcd has a designated leader which decides the ordering of events (and thus
+writes) in the cluster. When the leader crashes, a leadership election takes
+place. During the leadership election, the cluster might be briefly
+unavailable for writes. Writes during this period are queued up until a new
+leader is elected. Any writes that were happening during the crash of the
+leader that were not yet acknowledged by the leader and the followers will be
+'lost'. The client that performed this write will experience this as a write
+timeout. (Source: <https://etcd.io/docs/v3.4.0/op-guide/failures/>). Client
+applications (like kubernetes) are expected to deal with this failure scenario
+gracefully.
+
+Etcd can be restarted in a rolling fashion, by cleanly shutting down and
+starting up etcd servers one by one. In Etcd 3.1 and up, when the leader is
+cleanly shut down, it will hand over leadership gracefully to another node,
+which will minimize the impact on write availability, as election time is
+reduced. (Source:
+<https://kubernetes.io/blog/2018/12/11/etcd-current-status-and-future-roadmap/>)
+Restarting follower nodes has no impact on availability.
+
+Etcd does load-balancing between servers on the client side. This means that
+if a server you were talking to is being restarted, etcd will transparently
+redirect the request to another server. It is thus safe to shut them down at
+any point.
+
+Now to perform a rolling restart of the cluster, do the following steps:
+
+1. Check your cluster is healthy (see above)
+2. Stop the process with `systemctl stop etcd` (this should be safe since etcd clients retry their operation if one endpoint becomes unavailable, see [this page](https://etcd.io/docs/v3.3.12/learning/client-architecture/))
+3. Do any operation you need, if any, e.g. rebooting
+4. `systemctl start etcd`
+5. Wait for your cluster to be healthy again.
+6. Do the same on the next server. 
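The six steps above can be sketched as a small wrapper script. This is only an illustrative sketch, not part of wire-server: the node names, the `DRY_RUN` toggle, and invoking `etcd-health.sh` over ssh are assumptions to adapt to your own inventory.

```sh
#!/usr/bin/env bash
# Rolling-restart sketch: restart etcd on one node at a time,
# checking cluster health before and after each restart.
set -euo pipefail

NODES="${NODES:-node0 node1 node2}"   # assumed node names; replace with yours
DRY_RUN="${DRY_RUN:-1}"               # keep 1 to only print; set 0 to run over ssh

run() {
  if [ "$DRY_RUN" = "1" ]; then
    echo "would run on $1: $2"
  else
    ssh "$1" "$2"
  fi
}

for node in $NODES; do
  run "$node" "etcd-health.sh"        # 1. check the cluster is healthy first
  run "$node" "systemctl stop etcd"   # 2. clients retry against remaining members
  # 3. perform any maintenance here (e.g. a reboot)
  run "$node" "systemctl start etcd"  # 4. bring the member back
  run "$node" "etcd-health.sh"        # 5. wait until healthy before the next node
done
```

In a real run you would poll the health check until the cluster reports healthy before moving on to the next node.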
+
+*For more details please refer to the official documentation:* [Replacing a failed etcd member](https://kubernetes.io/docs/tasks/administer-cluster/configure-upgrade-etcd/#replacing-a-failed-etcd-member)
+
+(etcd-backup-and-restore)=
+
+## Backing up and restoring
+
+Though as long as quorum is maintained in etcd there will be no data loss, it is
+still good to prepare for the worst. If a disaster takes out too many nodes, then
+you might have to restore from an old backup.
+
+Luckily, etcd can take periodic snapshots of your cluster and these can be used
+in cases of disaster recovery. Information about how to do snapshots and
+restores can be found here:
+<https://github.com/etcd-io/etcd/blob/master/Documentation/op-guide/recovery.md>
+
+*For more details please refer to the official documentation:* [Backing up an etcd cluster](https://kubernetes.io/docs/tasks/administer-cluster/configure-upgrade-etcd/#backing-up-an-etcd-cluster)
+
+## Troubleshooting
+
+### How to recover from a single unhealthy etcd node after virtual machine snapshot restore
+
+After restoring an etcd machine from an earlier snapshot of the machine disk, etcd members may become unable to join.
+
+Symptoms: the etcd process is unable to start and crashes, and other etcd nodes can't reach it:
+
+```
+failed to check the health of member e767162297c84b1e on https://10.10.1.12:2379: Get https://10.10.1.12:2379/health: dial tcp 10.10.1.12:2379: getsockopt: connection refused
+member e767162297c84b1e is unreachable: [https://10.10.1.12:2379] are all unreachable
+```
+
+Logs from the crashing etcd:
+
+```
+(...)
+Sep 25 09:27:05 node2 etcd[20288]: 2019-09-25 07:27:05.691409 I | raft: e767162297c84b1e [term: 28] received a MsgHeartbeat message with higher term from cca4e6f315097b3b [term: 30]
+Sep 25 09:27:05 node2 etcd[20288]: 2019-09-25 07:27:05.691620 I | raft: e767162297c84b1e became follower at term 30
+Sep 25 09:27:05 node2 etcd[20288]: 2019-09-25 07:27:05.692423 C | raft: tocommit(16152654) is out of range [lastIndex(16061986)]. 
Was the raft log corrupted, truncated, or lost?
+Sep 25 09:27:05 node2 etcd[20288]: panic: tocommit(16152654) is out of range [lastIndex(16061986)]. Was the raft log corrupted, truncated, or lost?
+Sep 25 09:27:05 node2 etcd[20288]: goroutine 90 [running]:
+(...)
+```
+
+Etcd will refuse to let nodes that have fallen behind join the cluster. If a node has
+committed to a certain version of the raft log, it is expected not to jump back
+in time after that. In this scenario, we turned an etcd server off, made a
+snapshot of the virtual machine, brought it back online, and then restored the
+snapshot. What went wrong is that if you bring up a VM snapshot, it means
+the etcd node will now have an older raft log than it had before, even though
+it already gossiped to all other nodes that it has knowledge of newer entries.
+
+As a safety precaution, the other nodes will reject the node that is travelling
+back in time, to avoid data corruption. A node could get corrupted for other
+reasons as well. Perhaps a disk is faulty and is serving wrong data. Either
+way, if you end up in a scenario where a node is unhealthy and will refuse to
+rejoin the cluster, it is time to do some operations to get the cluster back in
+a healthy state.
+
+It is not recommended to restore an etcd node from a VM snapshot, as that will
+cause this kind of time-travelling behaviour, which will make the node
+unhealthy. To recover from this situation anyway,
+I quote from the etcdv2 admin guide
+<https://github.com/etcd-io/etcd/blob/master/Documentation/v2/admin_guide.md>:
+
+> If a member’s data directory is ever lost or corrupted then the user should
+> remove the etcd member from the cluster using the etcdctl tool. A user should
+> avoid restarting an etcd member with a data directory from an out-of-date
+> backup. Using an out-of-date data directory can lead to inconsistency as the
+> member had agreed to store information via raft then re-joins saying it
+> needs that information again. 
For maximum safety, if an etcd member suffers
+> any sort of data corruption or loss, it must be removed from the cluster.
+> Once removed the member can be re-added with an empty data directory.
+
+Note that this piece of documentation is from etcdv2 and not etcdv3. However
+the etcdv3 docs describe a similar procedure here:
+<https://github.com/etcd-io/etcd/blob/master/Documentation/op-guide/runtime-configuration.md#replace-a-failed-machine>
+
+The procedure to remove and add a member is documented here:
+<https://github.com/etcd-io/etcd/blob/master/Documentation/op-guide/runtime-configuration.md#remove-a-member>
+
+It is also documented in the kubernetes documentation:
+<https://kubernetes.io/docs/tasks/administer-cluster/configure-upgrade-etcd/#replacing-a-failed-etcd-member>
+
+So following the above guides step by step, we can recover our cluster to be
+healthy again.
+
+First, let us make sure our broken member is stopped by running this on `node`:
+
+```sh
+systemctl stop etcd
+```
+
+Now from a healthy node, e.g. `node0`, remove the broken node:
+
+```sh
+etcdctl3.sh member remove e767162297c84b1e
+```
+
+And we expect the output to be something like:
+
+```sh
+Member e767162297c84b1e removed from cluster 432c10551aa096af
+```
+
+By removing the member from the cluster, you signal the other nodes not to
+expect it to come back with the right state. It will be considered dead and
+removed from the peers. This allows the node to come up with an empty data
+directory without getting kicked out of the cluster. The cluster should now
+be healthy, but with only 2 members it is not as resilient to crashes
+at the moment, as we can see if we run the health check from a healthy node:
+
+```sh
+etcd-health.sh
+```
+
+And we expect only two nodes to be in the cluster:
+
+```
+Cluster-Endpoints: https://127.0.0.1:2379
+cURL Command: curl -X GET https://127.0.0.1:2379/v2/members
+member 7c37f7dc10558fae is healthy: got healthy result from https://10.10.1.11:2379
+member cca4e6f315097b3b is healthy: got healthy result from https://10.10.1.10:2379
+cluster is healthy
+```
+
+Now from a healthy node, re-add the node you just removed. Make sure
+ +```sh +etcdctl3.sh member add etcd_2 --peer-urls https://10.10.1.12:2380 +``` + +And it should report that it has been added: + +``` +Member e13b1d076b2f9344 added to cluster 432c10551aa096af + +ETCD_NAME="etcd_2" +ETCD_INITIAL_CLUSTER="etcd_1=https://10.10.1.11:2380,etcd_0=https://10.10.1.10:2380,etcd_2=https://10.10.1.12:2380" +ETCD_INITIAL_CLUSTER_STATE="existing" +``` + +it should now be in the list as "unstarted" instead of it not being in the list at all. + +```sh +etcdctl3.sh member list + + +7c37f7dc10558fae, started, etcd_1, https://10.10.1.11:2380, https://10.10.1.11:2379 +cca4e6f315097b3b, started, etcd_0, https://10.10.1.10:2380, https://10.10.1.10:2379 +e13b1d076b2f9344, unstarted, , https://10.10.1.12:2380, +``` + +Now on the broken node, remove the on-disk state, which was corrupted, and start etcd + +```sh +mv /var/lib/etcd /var/lib/etcd.bak +sudo systemctl start etcd +``` + +If we run the health check now, the cluster should report its healthy now again. + +```sh +etcd-health.sh +``` + +And indeed it outputs so: + +``` +Cluster-Endpoints: https://127.0.0.1:2379 +cURL Command: curl -X GET https://127.0.0.1:2379/v2/members +member 7c37f7dc10558fae is healthy: got healthy result from https://10.10.1.11:2379 +member cca4e6f315097b3b is healthy: got healthy result from https://10.10.1.10:2379 +member e13b1d076b2f9344 is healthy: got healthy result from https://10.10.1.12:2379 +cluster is healthy +``` diff --git a/docs/src/how-to/administrate/etcd.rst b/docs/src/how-to/administrate/etcd.rst deleted file mode 100644 index 47bce63d70..0000000000 --- a/docs/src/how-to/administrate/etcd.rst +++ /dev/null @@ -1,264 +0,0 @@ -Etcd --------------------------- - -.. include:: includes/intro.rst - -This section only covers the bare minimum, for more information, see the `etcd documentation `__ - -How to see cluster health -~~~~~~~~~~~~~~~~~~~~~~~~~~ - -If the file `/usr/local/bin/etcd-health.sh` is available, you can run - -.. 
code:: sh - - etcd-health.sh - -which should produce an output similar to:: - - Cluster-Endpoints: https://127.0.0.1:2379 - cURL Command: curl -X GET https://127.0.0.1:2379/v2/members - member 7c37f7dc10558fae is healthy: got healthy result from https://10.10.1.11:2379 - member cca4e6f315097b3b is healthy: got healthy result from https://10.10.1.10:2379 - member e767162297c84b1e is healthy: got healthy result from https://10.10.1.12:2379 - cluster is healthy - -If that helper file is not available, create it with the following contents: - -.. code:: bash - - #!/usr/bin/env bash - - HOST=$(hostname) - - etcdctl --endpoints https://127.0.0.1:2379 --ca-file=/etc/ssl/etcd/ssl/ca.pem --cert-file=/etc/ssl/etcd/ssl/member-$HOST.pem --key-file=/etc/ssl/etcd/ssl/member-$HOST-key.pem --debug cluster-health - -and then make it executable: ``chmod +x /usr/local/bin/etcd-health.sh`` - -How to inspect tables and data manually -~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ - -.. code:: sh - - TODO - - -.. _how-to-rolling-restart-an-etcd-cluster: - -How to rolling-restart an etcd cluster -~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ - -Etcd is a consistent and partition tolerant key-value store. This means that -Etcd nodes can be restarted (one by one) with no impact to the consistency of -data, but there might a small time in which the database can not process -writes. Etcd has a designated leader which decides ordering of events (and thus -writes) in the cluster. When the leader crashes, a leadership election takes -place. During the leadership election, the cluster might be briefly -unavailable for writes. Writes during this period are queued up until a new -leader is elected. Any writes that were happening during the crash of the -leader that were not acknowledged by the leader and the followers yet will be -'lost'. The client that performed this write will experience this as a write -timeout. (Source: https://etcd.io/docs/v3.4.0/op-guide/failures/). 
Client -applications (like kubernetes) are expected to deal with this failure scenario -gracefully. - -Etcd can be restarted in a rolling fashion, by cleanly shutting down and -starting up etcd servers one by one. In Etcd 3.1 and up, when the leader is -cleanly shut down, it will hand over leadership gracefully to another node, -which will minimize the impact of write-availability as election time is -reduced. (Source : -https://kubernetes.io/blog/2018/12/11/etcd-current-status-and-future-roadmap/) -Restarting follower nodes has no impact to availability. - -Etcd does load-balancing between servrvers on the client-side. This means that -if a server you were talking to is being restarted, etcd will transparently -redirect the request to another server. It's is thus safe to shut them down at -any point. - -Now to perform a rolling restart of the cluster, do the following steps: - -1. Check your cluster is healthy (see above) -2. Stop the process with ``systemctl stop etcd`` (this should be safe since etcd clients retry their operation if one endpoint becomes unavailable, see `this page `__) -3. Do any operation you need, if any. Like rebooting -4. ``systemctl start etcd`` -5. Wait for your cluster to be healthy again. -6. Do the same on the next server. - -*For more details please refer to the official documentation:* `Replacing a failed etcd member `__ - - -.. _etcd_backup-and-restore: - -Backing up and restoring -~~~~~~~~~~~~~~~~~~~~~~~~~ -Though as long as quorum is maintained in etcd there will be no dataloss, it is -still good to prepare for the worst. If a disaster takes out too many nodes, then -you might have to restore from an old backup. - -Luckily, etcd can take periodic snapshots of your cluster and these can be used -in cases of disaster recovery. 
Information about how to do snapshots and -restores can be found here: -https://github.com/etcd-io/etcd/blob/master/Documentation/op-guide/recovery.md - -*For more details please refer to the official documentation:* `Backing up an etcd cluster `__ - - -Troubleshooting -~~~~~~~~~~~~~~~~~~~~~~~~~~ - - -How to recover from a single unhealthy etcd node after virtual machine snapshot restore -^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ - -After restoring an etcd machine from an earlier snapshot of the machine disk, etcd members may become unable to join. - -Symptoms: That etcd process is unable to start and crashes, and other etcd nodes can't reach it:: - - failed to check the health of member e767162297c84b1e on https://10.10.1.12:2379: Get https://10.10.1.12:2379/health: dial tcp 10.10.1.12:2379: getsockopt: connection refused - member e767162297c84b1e is unreachable: [https://10.10.1.12:2379] are all unreachable - -Logs from the crashing etcd:: - - (...) - Sep 25 09:27:05 node2 etcd[20288]: 2019-09-25 07:27:05.691409 I | raft: e767162297c84b1e [term: 28] received a MsgHeartbeat message with higher term from cca4e6f315097b3b [term: 30] - Sep 25 09:27:05 node2 etcd[20288]: 2019-09-25 07:27:05.691620 I | raft: e767162297c84b1e became follower at term 30 - Sep 25 09:27:05 node2 etcd[20288]: 2019-09-25 07:27:05.692423 C | raft: tocommit(16152654) is out of range [lastIndex(16061986)]. Was the raft log corrupted, truncated, or lost? - Sep 25 09:27:05 node2 etcd[20288]: panic: tocommit(16152654) is out of range [lastIndex(16061986)]. Was the raft log corrupted, truncated, or lost? - Sep 25 09:27:05 node2 etcd[20288]: goroutine 90 [running]: - (...) - - -Etcd will refuse nodes that run behind to join the cluster. If a node has -committed to a certain version of the raft log, it is expected not to jump back -in time after that. 
In this scenario, we turned an etcd server off, made a -snapshot of the virtual machine, brought it back online, and then restored the -snapshot. What went wrong is is that if you bring up a VM snapshot, it means -the etcd node will now have an older raft log than it had before; even though -it already gossiped to all other nodes that it has knowledge of newer entries. - -As a safety precaution, the other nodes will reject the node that is travelling -back in time, to avoid data corruption. A node could get corrupted for other -reasons as well. Perhaps a disk is faulty and is serving wrong data. Either -way, if you end up in a scenario where a node is unhealthy and will refuse to -rejoin the cluster, it is time to do some operations to get the cluster back in -a healthy state. - -It is not recommended to restore an etcd node from a vm snapshot, as that will -cause these kind of time-travelling behaviours which will make the node -unhealthy. To recover from this situation anyway, -I quote from the etcdv2 admin guide https://github.com/etcd-io/etcd/blob/master/Documentation/v2/admin_guide.md - - If a member’s data directory is ever lost or corrupted then the user should - remove the etcd member from the cluster using etcdctl tool. A user should - avoid restarting an etcd member with a data directory from an out-of-date - backup. Using an out-of-date data directory can lead to inconsistency as the - member had agreed to store information via raft then re-joins saying it - needs that information again. For maximum safety, if an etcd member suffers - any sort of data corruption or loss, it must be removed from the cluster. - Once removed the member can be re-added with an empty data directory. - - -Note that this piece of documentation is from etcdv2 and not etcdv3. 
However -the etcdv3 docs describe a similar procedure here -https://github.com/etcd-io/etcd/blob/master/Documentation/op-guide/runtime-configuration.md#replace-a-failed-machine - - -The procedure to remove and add a member is documented here: -https://github.com/etcd-io/etcd/blob/master/Documentation/op-guide/runtime-configuration.md#remove-a-member - -It is also documented in the kubernetes documentation: -https://kubernetes.io/docs/tasks/administer-cluster/configure-upgrade-etcd/#replacing-a-failed-etcd-member - -So following the above guides step by step, we can recover our cluster to be -healthy again. - -First let us make sure our broken member is stopped by runnning this on ``node``: - -.. code:: sh - - systemctl stop etcd - -Now from a healthy node, e.g. ``node0`` remove the broken node - -.. code:: sh - - etcdctl3.sh member remove e767162297c84b1e - -And we expect the output to be something like - -.. code:: sh - - Member e767162297c84b1e removed from cluster 432c10551aa096af - - -By removing the member from the cluster, you signal the other nodes to not -expect it to come back with the right state. It will be considered dead and -removed from the peers. This will allow the node to come up with an empty data -directory and it not getting kicked out of the cluster. The cluster should now -be healthy, but only have 2 members, and so it is not to resistent to crashes -at the moment! As we can see if we run the health check from a healthy node. - -.. code:: sh - - etcd-health.sh - -And we expect only two nodes to be in the cluster:: - - Cluster-Endpoints: https://127.0.0.1:2379 - cURL Command: curl -X GET https://127.0.0.1:2379/v2/members - member 7c37f7dc10558fae is healthy: got healthy result from https://10.10.1.11:2379 - member cca4e6f315097b3b is healthy: got healthy result from https://10.10.1.10:2379 - cluster is healthy - -Now from a healthy node, re-add the node you just removed. 
Make sure -to replace the IP in the snippet below with the IP of the node you just removed. - -.. code:: sh - - etcdctl3.sh member add etcd_2 --peer-urls https://10.10.1.12:2380 - -And it should report that it has been added:: - - Member e13b1d076b2f9344 added to cluster 432c10551aa096af - - ETCD_NAME="etcd_2" - ETCD_INITIAL_CLUSTER="etcd_1=https://10.10.1.11:2380,etcd_0=https://10.10.1.10:2380,etcd_2=https://10.10.1.12:2380" - ETCD_INITIAL_CLUSTER_STATE="existing" - - -it should now be in the list as "unstarted" instead of it not being in the list at all. - -.. code:: sh - - etcdctl3.sh member list - - - 7c37f7dc10558fae, started, etcd_1, https://10.10.1.11:2380, https://10.10.1.11:2379 - cca4e6f315097b3b, started, etcd_0, https://10.10.1.10:2380, https://10.10.1.10:2379 - e13b1d076b2f9344, unstarted, , https://10.10.1.12:2380, - - -Now on the broken node, remove the on-disk state, which was corrupted, and start etcd - -.. code:: sh - - mv /var/lib/etcd /var/lib/etcd.bak - sudo systemctl start etcd - -If we run the health check now, the cluster should report its healthy now again. - -.. code:: sh - - etcd-health.sh - -And indeed it outputs so:: - - Cluster-Endpoints: https://127.0.0.1:2379 - cURL Command: curl -X GET https://127.0.0.1:2379/v2/members - member 7c37f7dc10558fae is healthy: got healthy result from https://10.10.1.11:2379 - member cca4e6f315097b3b is healthy: got healthy result from https://10.10.1.10:2379 - member e13b1d076b2f9344 is healthy: got healthy result from https://10.10.1.12:2379 - cluster is healthy - - - diff --git a/docs/src/how-to/administrate/general-linux.md b/docs/src/how-to/administrate/general-linux.md new file mode 100644 index 0000000000..e0f6b694fe --- /dev/null +++ b/docs/src/how-to/administrate/general-linux.md @@ -0,0 +1,67 @@ +# General - Linux + +```{eval-rst} +.. include:: includes/intro.rst +``` + +## Which ports and network interface is my process running on? 
+ +The following shows open TCP ports, and the related processes. + +```sh +sudo netstat -antlp | grep LISTEN +``` + +which may yield output like this: + +```sh +tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN 1536/sshd +``` + +(how-to-see-tls-certs)= + +## How can I see if my TLS certificates are configured the way I expect? + +```{note} +The following assumes you're querying a server from outside (e.g. your laptop). See the next section if operating on a server from an SSH session. +``` + +You can use openssl to check, with e.g. + +```sh +DOMAIN=example.com +PORT=443 +echo Q | openssl s_client -showcerts -connect $DOMAIN:$PORT +``` + +or + +```sh +DOMAIN=example.com +PORT=443 +echo Q | openssl s_client -showcerts -connect $DOMAIN:$PORT 2>/dev/null | openssl x509 -inform pem -noout -text +``` + +To see only the validity (expiration): + +```sh +DOMAIN=example.com +PORT=443 +echo Q | openssl s_client -showcerts -connect $DOMAIN:$PORT 2>/dev/null | openssl x509 -inform pem -noout -text | grep Validity -A 2 +``` + +## How can I see if my TLS certificates are configured the way I expect (special case kubernetes from a kubernetes machine) + +When you first SSH to a kubernetes node, depending on the setup, DNS may not resolve, in which case you can use the `-servername` parameter: + +```sh +# the IP of the network interface that kubernetes is listening on. 127.0.0.1 may or may not work depending on the installation. 
It's one of those from +# ifconfig | grep "inet addr" +IP=1.2.3.4 +# PORT can be 443 or 31773, depending on the installation +PORT=443 +# not the root domain, but one of the 5 subdomains for which kubernetes is serving traffic +DOMAIN=app.example.com + +echo Q | openssl s_client -showcerts -servername $DOMAIN -connect $IP:$PORT 2>/dev/null | openssl x509 -inform pem -noout -text | grep Validity -A 2 +``` diff --git a/docs/src/how-to/administrate/general-linux.rst b/docs/src/how-to/administrate/general-linux.rst deleted file mode 100644 index a2c8d81d1d..0000000000 --- a/docs/src/how-to/administrate/general-linux.rst +++ /dev/null @@ -1,69 +0,0 @@ -General - Linux --------------------------- - -.. include:: includes/intro.rst - -Which ports and network interface is my process running on? -~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ - -The following shows open TCP ports, and the related processes. - -.. code:: sh - - sudo netstat -antlp | grep LISTEN - -which may yield output like this: - -.. code:: sh - - tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN 1536/sshd - -.. _how-to-see-tls-certs: - -How can I see if my TLS certificates are configured the way I expect? -~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ - -.. note:: - The following assumes you're querying a server from outside (e.g. your laptop). See the next section if operating on a server from an SSH session. - -You can use openssl to check, with e.g. - -.. code:: sh - - DOMAIN=example.com - PORT=443 - echo Q | openssl s_client -showcerts -connect $DOMAIN:$PORT - -or - -.. code:: sh - - DOMAIN=example.com - PORT=443 - echo Q | openssl s_client -showcerts -connect $DOMAIN:$PORT 2>/dev/null | openssl x509 -inform pem -noout -text - -To see only the validity (expiration): - -.. 
code:: sh - - DOMAIN=example.com - PORT=443 - echo Q | openssl s_client -showcerts -connect $DOMAIN:$PORT 2>/dev/null | openssl x509 -inform pem -noout -text | grep Validity -A 2 - - -How can I see if my TLS certificates are configured the way I expect (special case kubernetes from a kubernetes machine) -~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ - -When you first SSH to a kubernetes node, depending on the setup, DNS may not resolve, in which case you can use the ``-servername`` parameter: - -.. code:: sh - - # the IP of the network interface that kubernetes is listening on. 127.0.0.1 may or may not work depending on the installation. It's one of those from - # ifconfig | grep "inet addr" - IP=1.2.3.4 - # PORT can be 443 or 31773, depending on the installation - PORT=443 - # not the root domain, but one of the 5 subdomains for which kubernetes is serving traffic - DOMAIN=app.example.com - - echo Q | openssl s_client -showcerts -servername $DOMAIN -connect $IP:$PORT 2>/dev/null | openssl x509 -inform pem -noout -text | grep Validity -A 2 diff --git a/docs/src/how-to/administrate/index.md b/docs/src/how-to/administrate/index.md new file mode 100644 index 0000000000..79a04fa649 --- /dev/null +++ b/docs/src/how-to/administrate/index.md @@ -0,0 +1,12 @@ +# Administration + +```{toctree} +:glob: true +:maxdepth: 2 + +Kubernetes + +* +``` + +% TODO: .. include:: administration/redis.rst diff --git a/docs/src/how-to/administrate/index.rst b/docs/src/how-to/administrate/index.rst deleted file mode 100644 index 5995a82a3c..0000000000 --- a/docs/src/how-to/administrate/index.rst +++ /dev/null @@ -1,14 +0,0 @@ -Administrate components after successful installation -===================================================== - -.. toctree:: - :maxdepth: 2 - :glob: - - Kubernetes - - * - -.. - TODO: .. 
include:: administration/redis.rst - diff --git a/docs/src/how-to/administrate/kubernetes/certificate-renewal/index.md b/docs/src/how-to/administrate/kubernetes/certificate-renewal/index.md new file mode 100644 index 0000000000..ae9323d55f --- /dev/null +++ b/docs/src/how-to/administrate/kubernetes/certificate-renewal/index.md @@ -0,0 +1,10 @@ +# Certificate renewal + +```{toctree} +:glob: true +:maxdepth: 1 + +* +``` + +% diff --git a/docs/src/how-to/administrate/kubernetes/certificate-renewal/index.rst b/docs/src/how-to/administrate/kubernetes/certificate-renewal/index.rst deleted file mode 100644 index b782d3b107..0000000000 --- a/docs/src/how-to/administrate/kubernetes/certificate-renewal/index.rst +++ /dev/null @@ -1,10 +0,0 @@ -Certificate renewal -=================== - -.. toctree:: - :maxdepth: 1 - :glob: - - * - -.. \ No newline at end of file diff --git a/docs/src/how-to/administrate/kubernetes/certificate-renewal/scenario-1_k8s-v1.14-kubespray.md b/docs/src/how-to/administrate/kubernetes/certificate-renewal/scenario-1_k8s-v1.14-kubespray.md new file mode 100644 index 0000000000..316b644cd0 --- /dev/null +++ b/docs/src/how-to/administrate/kubernetes/certificate-renewal/scenario-1_k8s-v1.14-kubespray.md @@ -0,0 +1,241 @@ +# How to renew certificates on kubernetes 1.14.x + +Kubernetes-internal certificates by default (see assumptions) expire after one year. Without renewal, your installation will cease to function. +This page explains how to renew certificates. + +## Assumptions + +- Kubernetes version 1.14.x + +- installed with the help of [Kubespray](https://github.com/kubernetes-sigs/kubespray) + + - This page was tested using kubespray release 2.10 branch from 2019-05-20, i.e. commit `e2f5a9748e4dbfe2fdba7931198b0b5f1f4bdc7e`. 
+ +- setup: 3 scheduled nodes, each hosting master (control plane) + + worker (kubelet) + etcd (cluster state, key-value database)
+
+*NOTE: due to Kubernetes being installed with Kubespray, the Kubernetes
+CAs (expire after 10yr) as well as certificates involved in etcd
+communication (expire after 100yr) are not required to be renewed (any
+time soon).*
+
+**Official documentation:**
+
+- [Certificate Management with kubeadm (v1.14)](https://v1-14.docs.kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-certs/)
+- [PKI certificates and requirements (v1.14)](https://v1-14.docs.kubernetes.io/docs/setup/best-practices/certificates/)
+
+## High-level description
+
+1. verify current expiration date
+2. issue new certificates
+3. generate new client configuration (aka kubeconfig file)
+4. restart control plane
+5. drain node - restart kubelet - uncordon node again
+6. repeat 3-5 on all other nodes
+
+## Step-by-step instructions
+
+*Please note that the following instructions may require privileged
+execution: either switch to a privileged user or prefix the following
+commands with `sudo`. In any case, every newly created file most likely
+has to be owned by `root`, depending on how Kubernetes was installed.*
+
+1. 
Verify current expiration date on each node + +```bash +export K8S_CERT_DIR=/etc/kubernetes/pki +export ETCD_CERT_DIR=/etc/ssl/etcd/ssl +export KUBELET_CERT_DIR=/var/lib/kubelet/pki + + +for crt in ${K8S_CERT_DIR}/*.crt; do + expirationDate=$(openssl x509 -noout -text -in ${crt} | grep After | sed -e 's/^[[:space:]]*//') + echo "$(basename ${crt}) -- ${expirationDate}" +done + + +for crt in $(ls ${ETCD_CERT_DIR}/*.pem | grep -v 'key'); do + expirationDate=$(openssl x509 -noout -text -in ${crt} | grep After | sed -e 's/^[[:space:]]*//') + echo "$(basename ${crt}) -- ${expirationDate}" +done + +echo "kubelet-client-current.pem -- $(openssl x509 -noout -text -in ${KUBELET_CERT_DIR}/kubelet-client-current.pem | grep After | sed -e 's/^[[:space:]]*//')" +echo "kubelet.crt -- $(openssl x509 -noout -text -in ${KUBELET_CERT_DIR}/kubelet.crt | grep After | sed -e 's/^[[:space:]]*//')" + + +# MASTER: api-server cert +echo -n | openssl s_client -connect localhost:6443 2>&1 | sed -ne '/-BEGIN CERTIFICATE-/,/-END CERTIFICATE-/p' | openssl x509 -text -noout | grep Not +# MASTER: controller-manager cert +echo -n | openssl s_client -connect localhost:10257 2>&1 | sed -ne '/-BEGIN CERTIFICATE-/,/-END CERTIFICATE-/p' | openssl x509 -text -noout | grep Not +# MASTER: scheduler cert +echo -n | openssl s_client -connect localhost:10259 2>&1 | sed -ne '/-BEGIN CERTIFICATE-/,/-END CERTIFICATE-/p' | openssl x509 -text -noout | grep Not + +# WORKER: kubelet cert +echo -n | openssl s_client -connect localhost:10250 2>&1 | sed -ne '/-BEGIN CERTIFICATE-/,/-END CERTIFICATE-/p' | openssl x509 -text -noout | grep Not +``` + +2. Allocate a terminal session on one node and backup existing + certificates & configurations + +```bash +cd /etc/kubernetes + +cp -r ./ssl ./ssl.bkp + +cp admin.conf admin.conf.bkp +cp controller-manager.conf controller-manager.conf.bkp +cp scheduler.conf scheduler.conf.bkp +cp kubelet.conf kubelet.conf.bkp +``` + +3. 
Renew certificates on that very node
+
+```bash
+kubeadm alpha certs renew apiserver
+kubeadm alpha certs renew apiserver-kubelet-client
+kubeadm alpha certs renew front-proxy-client
+```
+
+*The timestamps of the certificates indicate that the apiserver, apiserver-kubelet-client &
+front-proxy-client certificates have been renewed. This can be confirmed by executing parts of (1).*
+
+```
+root@kubenode01:/etc/kubernetes$ ls -al ./ssl
+total 56
+drwxr-xr-x 2 kube root 4096 Mar 20 17:09 .
+drwxr-xr-x 5 kube root 4096 Mar 20 17:08 ..
+-rw-r--r-- 1 root root 1517 Mar 20 15:12 apiserver.crt
+-rw------- 1 root root 1675 Mar 20 15:12 apiserver.key
+-rw-r--r-- 1 root root 1099 Mar 20 15:13 apiserver-kubelet-client.crt
+-rw------- 1 root root 1675 Mar 20 15:13 apiserver-kubelet-client.key
+-rw-r--r-- 1 root root 1025 Sep 23 14:53 ca.crt
+-rw------- 1 root root 1679 Sep 23 14:53 ca.key
+-rw-r--r-- 1 root root 1038 Sep 23 14:53 front-proxy-ca.crt
+-rw------- 1 root root 1679 Sep 23 14:53 front-proxy-ca.key
+-rw-r--r-- 1 root root 1058 Mar 20 15:13 front-proxy-client.crt
+-rw------- 1 root root 1675 Mar 20 15:13 front-proxy-client.key
+-rw------- 1 root root 1679 Sep 23 14:53 sa.key
+-rw------- 1 root root 451 Sep 23 14:53 sa.pub
+```
+
+4. Based on those renewed certificates, generate new kubeconfig files
+
+The first command assumes it's being executed on a master node. You may need to swap `masters` with `nodes` in
+case you are on a different kind of machine. 
+ +```bash
+kubeadm alpha kubeconfig user --org system:masters --client-name kubernetes-admin > /etc/kubernetes/admin.conf
+kubeadm alpha kubeconfig user --client-name system:kube-controller-manager > /etc/kubernetes/controller-manager.conf
+kubeadm alpha kubeconfig user --client-name system:kube-scheduler > /etc/kubernetes/scheduler.conf
+```
+
+*Again, check if ownership and permission for these files are the same
+as all the others around them.*
+
+And, in case you are operating the cluster from the current node, you may want to replace the user's kubeconfig.
+Afterwards, compare the backup version with the new one, to see if any configuration (e.g. pre-configured *namespace*)
+might need to be moved over, too.
+
+```bash
+mv ~/.kube/config ~/.kube/config.bkp
+cp /etc/kubernetes/admin.conf ~/.kube/config
+chown $(id -u):$(id -g) ~/.kube/config
+chmod 770 ~/.kube/config
+```
+
+5. Now that certificates and configuration files are in place, the
+   control plane must be restarted. Its components typically run in containers,
+   so the easiest way to trigger a restart is to kill the processes running in
+   there. Use (1) to verify that the expiration dates have indeed changed.
+
+```bash
+kill -s SIGHUP $(pidof kube-apiserver)
+kill -s SIGHUP $(pidof kube-controller-manager)
+kill -s SIGHUP $(pidof kube-scheduler)
+```
+
+6. Make *kubelet* aware of the new certificate
+
+1. Drain the node
+
+```
+kubectl drain --delete-local-data --ignore-daemonsets $(hostname)
+```
+
+2. Stop the kubelet process
+
+```
+systemctl stop kubelet
+```
+
+3. Remove old certificates and configuration
+
+```
+mv /var/lib/kubelet/pki{,old}
+mkdir /var/lib/kubelet/pki
+```
+
+4. Generate new kubeconfig file for the kubelet
+
+```
+kubeadm alpha kubeconfig user --org system:nodes --client-name system:node:$(hostname) > /etc/kubernetes/kubelet.conf
+```
+
+5. Start kubelet again
+
+```
+systemctl start kubelet
+```
+
+6. 
\[Optional\] Verify kubelet has recognized certificate rotation + +``` +sleep 5 && systemctl status kubelet +``` + +7. Allow workload to be scheduled again on the node + +``` +kubectl uncordon $(hostname) +``` + +7. Copy certificates over to all the other nodes + +Option A - you can ssh from one kubernetes node to another + +```bash +# set the ip or hostname: +export NODE2=root@ip-or-hostname +export NODE3=... + +scp ./ssl/apiserver.* "${NODE2}:/etc/kubernetes/ssl/" +scp ./ssl/apiserver.* "${NODE3}:/etc/kubernetes/ssl/" + +scp ./ssl/apiserver-kubelet-client.* "${NODE2}:/etc/kubernetes/ssl/" +scp ./ssl/apiserver-kubelet-client.* "${NODE3}:/etc/kubernetes/ssl/" + +scp ./ssl/front-proxy-client.* "${NODE2}:/etc/kubernetes/ssl/" +scp ./ssl/front-proxy-client.* "${NODE3}:/etc/kubernetes/ssl/" +``` + +Option B - copy via local administrator's machine + +```bash +# set the ip or hostname: +export NODE1=root@ip-or-hostname +export NODE2= +export NODE3= + +scp -3 "${NODE1}:/etc/kubernetes/ssl/apiserver.*" "${NODE2}:/etc/kubernetes/ssl/" +scp -3 "${NODE1}:/etc/kubernetes/ssl/apiserver.*" "${NODE3}:/etc/kubernetes/ssl/" + +scp -3 "${NODE1}:/etc/kubernetes/ssl/apiserver-kubelet-client.*" "${NODE2}:/etc/kubernetes/ssl/" +scp -3 "${NODE1}:/etc/kubernetes/ssl/apiserver-kubelet-client.*" "${NODE3}:/etc/kubernetes/ssl/" + +scp -3 "${NODE1}:/etc/kubernetes/ssl/front-proxy-client.*" "${NODE2}:/etc/kubernetes/ssl/" +scp -3 "${NODE1}:/etc/kubernetes/ssl/front-proxy-client.*" "${NODE3}:/etc/kubernetes/ssl/" +``` + +8. 
Continue again with (4) for each node that is left diff --git a/docs/src/how-to/administrate/kubernetes/certificate-renewal/scenario-1_k8s-v1.14-kubespray.rst b/docs/src/how-to/administrate/kubernetes/certificate-renewal/scenario-1_k8s-v1.14-kubespray.rst deleted file mode 100644 index 2db6f4f178..0000000000 --- a/docs/src/how-to/administrate/kubernetes/certificate-renewal/scenario-1_k8s-v1.14-kubespray.rst +++ /dev/null @@ -1,244 +0,0 @@ -How to renew certificates on kubernetes 1.14.x -~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ - -Kubernetes-internal certificates by default (see assumptions) expire after one year. Without renewal, your installation will cease to function. -This page explains how to renew certificates. - -Assumptions ------------ - -- Kubernetes version 1.14.x -- installed with the help of `Kubespray `__ - - - This page was tested using kubespray release 2.10 branch from 2019-05-20, i.e. commit ``e2f5a9748e4dbfe2fdba7931198b0b5f1f4bdc7e``. -- setup: 3 scheduled nodes, each hosting master (control plane) + - worker (kubelet) + etcd (cluster state, key-value database) - -*NOTE: due to Kubernetes being installed with Kubespray, the Kubernetes -CAs (expire after 10yr) as well as certificates involved in etcd -communication (expire after 100yr) are not required to be renewed (any -time soon).* - -**Official documentation:** - -* `Certificate Management with kubeadm (v1.14) `__ -* `PKI certificates and requirements (v1.14) `__ - -High-level description ----------------------- - -1. verify current expiration date -2. issue new certificates -3. generate new client configuration (aka kubeconfig file) -4. restart control plane -5. drain node - restart kubelet - uncordon node again -6. repeat 3-5 on all other nodes - -Step-by-step instructions -------------------------- - -*Please note, that the following instructions may require privileged -execution. So, either switch to a privileged user or prepend following -statements with ``sudo``. 
In any case, it is most likely that every -newly created file has to be owned by ``root``, depending on kow -Kubernetes was installed.* - -1. Verify current expiration date on each node - -.. code:: bash - - - export K8S_CERT_DIR=/etc/kubernetes/pki - export ETCD_CERT_DIR=/etc/ssl/etcd/ssl - export KUBELET_CERT_DIR=/var/lib/kubelet/pki - - - for crt in ${K8S_CERT_DIR}/*.crt; do - expirationDate=$(openssl x509 -noout -text -in ${crt} | grep After | sed -e 's/^[[:space:]]*//') - echo "$(basename ${crt}) -- ${expirationDate}" - done - - - for crt in $(ls ${ETCD_CERT_DIR}/*.pem | grep -v 'key'); do - expirationDate=$(openssl x509 -noout -text -in ${crt} | grep After | sed -e 's/^[[:space:]]*//') - echo "$(basename ${crt}) -- ${expirationDate}" - done - - echo "kubelet-client-current.pem -- $(openssl x509 -noout -text -in ${KUBELET_CERT_DIR}/kubelet-client-current.pem | grep After | sed -e 's/^[[:space:]]*//')" - echo "kubelet.crt -- $(openssl x509 -noout -text -in ${KUBELET_CERT_DIR}/kubelet.crt | grep After | sed -e 's/^[[:space:]]*//')" - - - # MASTER: api-server cert - echo -n | openssl s_client -connect localhost:6443 2>&1 | sed -ne '/-BEGIN CERTIFICATE-/,/-END CERTIFICATE-/p' | openssl x509 -text -noout | grep Not - # MASTER: controller-manager cert - echo -n | openssl s_client -connect localhost:10257 2>&1 | sed -ne '/-BEGIN CERTIFICATE-/,/-END CERTIFICATE-/p' | openssl x509 -text -noout | grep Not - # MASTER: scheduler cert - echo -n | openssl s_client -connect localhost:10259 2>&1 | sed -ne '/-BEGIN CERTIFICATE-/,/-END CERTIFICATE-/p' | openssl x509 -text -noout | grep Not - - # WORKER: kubelet cert - echo -n | openssl s_client -connect localhost:10250 2>&1 | sed -ne '/-BEGIN CERTIFICATE-/,/-END CERTIFICATE-/p' | openssl x509 -text -noout | grep Not - -2. Allocate a terminal session on one node and backup existing - certificates & configurations - -.. 
code:: bash - - cd /etc/kubernetes - - cp -r ./ssl ./ssl.bkp - - cp admin.conf admin.conf.bkp - cp controller-manager.conf controller-manager.conf.bkp - cp scheduler.conf scheduler.conf.bkp - cp kubelet.conf kubelet.conf.bkp - -3. Renew certificates on that very node - -.. code:: bash - - kubeadm alpha certs renew apiserver - kubeadm alpha certs renew apiserver-kubelet-client - kubeadm alpha certs renew front-proxy-client - -*Looking at the timestamps of the certificates, it is indicated, that apicerver, kubelet & proxy-client have been -renewed. This can be confirmed, by executing parts of (1).* - -:: - - root@kubenode01:/etc/kubernetes$ ls -al ./ssl - total 56 - drwxr-xr-x 2 kube root 4096 Mar 20 17:09 . - drwxr-xr-x 5 kube root 4096 Mar 20 17:08 .. - -rw-r--r-- 1 root root 1517 Mar 20 15:12 apiserver.crt - -rw------- 1 root root 1675 Mar 20 15:12 apiserver.key - -rw-r--r-- 1 root root 1099 Mar 20 15:13 apiserver-kubelet-client.crt - -rw------- 1 root root 1675 Mar 20 15:13 apiserver-kubelet-client.key - -rw-r--r-- 1 root root 1025 Sep 23 14:53 ca.crt - -rw------- 1 root root 1679 Sep 23 14:53 ca.key - -rw-r--r-- 1 root root 1038 Sep 23 14:53 front-proxy-ca.crt - -rw------- 1 root root 1679 Sep 23 14:53 front-proxy-ca.key - -rw-r--r-- 1 root root 1058 Mar 20 15:13 front-proxy-client.crt - -rw------- 1 root root 1675 Mar 20 15:13 front-proxy-client.key - -rw------- 1 root root 1679 Sep 23 14:53 sa.key - -rw------- 1 root root 451 Sep 23 14:53 sa.pub - -4. Based on those renewed certificates, generate new kubeconfig files - -The first command assumes it's being executed on a master node. You may need to swap ``masters`` with ``nodes`` in -case you are on a different sort of machines. - -.. 
code:: bash - - kubeadm alpha kubeconfig user --org system:masters --client-name kubernetes-admin > /etc/kubernetes/admin.conf - kubeadm alpha kubeconfig user --client-name system:kube-controller-manager > /etc/kubernetes/controller-manager.conf - kubeadm alpha kubeconfig user --client-name system:kube-scheduler > /etc/kubernetes/scheduler.conf - -*Again, check if ownership and permission for these files are the same -as all the others around them.* - -And, in case you are operating the cluster from the current node, you may want to replace the user's kubeconfig. -Afterwards, compare the backup version with the new one, to see if any configuration (e.g. pre-configured *namespace*) -might need to be moved over, too. - -.. code:: bash - - mv ~/.kube/config ~/.kube/config.bkp - cp /etc/kubernetes/admin.conf ~/.kube/config - chown $(id -u):$(id -g) ~/.kube/config - chmod 770 ~/.kube/config - -5. Now that certificates and configuration files are in place, the - control plane must be restarted. They typically run in containers, so - the easiest way to trigger a restart, is to kill the processes - running in there. Use (1) to verify, that the expiration dates indeed - have been changed. - -.. code:: bash - - kill -s SIGHUP $(pidof kube-apiserver) - kill -s SIGHUP $(pidof kube-controller-manager) - kill -s SIGHUP $(pidof kube-scheduler) - -6. 
Make *kubelet* aware of the new certificate - -a) Drain the node - -:: - - kubectl drain --delete-local-data --ignore-daemonsets $(hostname) - -b) Stop the kubelet process - -:: - - systemctl stop kubelet - -c) Remove old certificates and configuration - -:: - - mv /var/lib/kubelet/pki{,old} - mkdir /var/lib/kubelet/pki - -d) Generate new kubeconfig file for the kubelet - -:: - - kubeadm alpha kubeconfig user --org system:nodes --client-name system:node:$(hostname) > /etc/kubernetes/kubelet.conf - -e) Start kubelet again - -:: - - systemctl start kubelet - -f) [Optional] Verify kubelet has recognized certificate rotation - -:: - - sleep 5 && systemctl status kubelet - -g) Allow workload to be scheduled again on the node - -:: - - kubectl uncordon $(hostname) - -7. Copy certificates over to all the other nodes - -Option A - you can ssh from one kubernetes node to another - -.. code:: bash - - # set the ip or hostname: - export NODE2=root@ip-or-hostname - export NODE3=... - - scp ./ssl/apiserver.* "${NODE2}:/etc/kubernetes/ssl/" - scp ./ssl/apiserver.* "${NODE3}:/etc/kubernetes/ssl/" - - scp ./ssl/apiserver-kubelet-client.* "${NODE2}:/etc/kubernetes/ssl/" - scp ./ssl/apiserver-kubelet-client.* "${NODE3}:/etc/kubernetes/ssl/" - - scp ./ssl/front-proxy-client.* "${NODE2}:/etc/kubernetes/ssl/" - scp ./ssl/front-proxy-client.* "${NODE3}:/etc/kubernetes/ssl/" - -Option B - copy via local administrator's machine - -.. 
code:: bash - - # set the ip or hostname: - export NODE1=root@ip-or-hostname - export NODE2= - export NODE3= - - scp -3 "${NODE1}:/etc/kubernetes/ssl/apiserver.*" "${NODE2}:/etc/kubernetes/ssl/" - scp -3 "${NODE1}:/etc/kubernetes/ssl/apiserver.*" "${NODE3}:/etc/kubernetes/ssl/" - - scp -3 "${NODE1}:/etc/kubernetes/ssl/apiserver-kubelet-client.*" "${NODE2}:/etc/kubernetes/ssl/" - scp -3 "${NODE1}:/etc/kubernetes/ssl/apiserver-kubelet-client.*" "${NODE3}:/etc/kubernetes/ssl/" - - scp -3 "${NODE1}:/etc/kubernetes/ssl/front-proxy-client.*" "${NODE2}:/etc/kubernetes/ssl/" - scp -3 "${NODE1}:/etc/kubernetes/ssl/front-proxy-client.*" "${NODE3}:/etc/kubernetes/ssl/" - -8. Continue again with (4) for each node that is left diff --git a/docs/src/how-to/administrate/kubernetes/index.md b/docs/src/how-to/administrate/kubernetes/index.md new file mode 100644 index 0000000000..cc2c6a0143 --- /dev/null +++ b/docs/src/how-to/administrate/kubernetes/index.md @@ -0,0 +1,20 @@ +# Kubernetes + +```{note} +These are not the official documentations you are looking for. +[This way](https://kubernetes.io/docs/tasks/administer-cluster/) please. + +The content referred below merely contains either some deviation from upstream or +additional information enriched here and there with shortcuts to the official documentation. +``` + +```{toctree} +:glob: true +:maxdepth: 1 + +Certificate renewal +How to restart a machine that is part of a Kubernetes cluster? +How to upgrade Kubernetes? +``` + +% diff --git a/docs/src/how-to/administrate/kubernetes/index.rst b/docs/src/how-to/administrate/kubernetes/index.rst deleted file mode 100644 index 2e6fcd71da..0000000000 --- a/docs/src/how-to/administrate/kubernetes/index.rst +++ /dev/null @@ -1,21 +0,0 @@ -Kubernetes -========== - -.. note:: - - These are not the official documentations you are looking for. - `This way `__ please. 
- - The content referred below merely contains either some deviation from upstream or
- additional information enriched here and there with shortcuts to the official documentation.
-
-
-.. toctree::
-   :maxdepth: 1
-   :glob:
-
-   Certificate renewal
-   How to restart a machine that is part of a Kubernetes cluster?
-   How to upgrade Kubernetes?
-
-.. diff --git a/docs/src/how-to/administrate/kubernetes/restart-machines/index.md b/docs/src/how-to/administrate/kubernetes/restart-machines/index.md new file mode 100644 index 0000000000..0323efcf2d --- /dev/null +++ b/docs/src/how-to/administrate/kubernetes/restart-machines/index.md @@ -0,0 +1,42 @@
+(restarting-a-machine-in-a-kubernetes-cluster)=
+
+# Restarting a machine in a Kubernetes cluster
+
+```{note}
+1. Know which kind of machine is going to be restarted
+
+   > 1. control plane (api-server, controllers, etc.)
+   > 2. node (runs actual workload, e.g. *Brig* or *Webapp*)
+   > 3. both of the above combined
+
+2. The kind of machine in question must be deployed redundantly
+
+3. Take out machines in a rolling fashion (sequentially, one at a time)
+```
+
+## Control plane
+
+Depending on whether *etcd* is hosted on the same machine alongside the control plane (common practice), you need
+to take its implications into account (see {ref}`How to rolling-restart an etcd cluster `)
+when restarting a machine.
+
+Regardless of where *etcd* is located, before turning off any machine that is part of the control plane, one should
+{ref}`back up the cluster state `.
+
+If part of the control plane is not running with sufficient redundancy, it is advised to prevent any mutating
+interaction until the cluster is healthy again.
+
+```bash
+kubectl get nodes
+```
+
+## Node
+
+```{rubric} High-level steps:
+```
+
+1. Drain the node so that all workload is rescheduled on other nodes
+2. Restart / Update / Decommission
+3. 
Mark the node as being schedulable again (if not decommissioned) + +*For more details please refer to the official documentation:* [Safely Drain a Node](https://kubernetes.io/docs/tasks/administer-cluster/safely-drain-node/) diff --git a/docs/src/how-to/administrate/kubernetes/restart-machines/index.rst b/docs/src/how-to/administrate/kubernetes/restart-machines/index.rst deleted file mode 100644 index 4f4a315a93..0000000000 --- a/docs/src/how-to/administrate/kubernetes/restart-machines/index.rst +++ /dev/null @@ -1,45 +0,0 @@ -.. _restarting-a-machine-in-a-kubernetes-cluster: - -Restarting a machine in a Kubernetes cluster -============================================ - -.. note:: - - 1. Know which kind of machine is going to be restarted - - a) control plane (api-server, controllers, etc.) - b) node (runs actual workload, e.g. *Brig* or *Webapp*) - c) *a* and *b* combined - - 2. The kind of machine in question must be deployed redundantly - 3. Take out machines in a rolling fashion (sequentially, one at a time) - - -Control plane -~~~~~~~~~~~~~ - -Depending on whether *etcd* is hosted on the same machine alongside the control plane (common practise), you need -to take its implications into account (see :ref:`How to rolling-restart an etcd cluster `) -when restarting a machine. - -Regardless of where *etcd* is located, before turning off any machine that is part of the control plane, one should -:ref:`back up the cluster state `. - -If a part of the control plane does not run sufficiently redundant, it is advised to prevent any mutating interaction -during the procedure, until the cluster is healthy again. - -.. code:: bash - - kubectl get nodes - - -Node -~~~~ - -.. rubric:: High-level steps: - -1. Drain the node so that all workload is rescheduled on other nodes -2. Restart / Update / Decommission -3. 
Mark the node as being schedulable again (if not decommissioned)
-
-*For more details please refer to the official documentation:* `Safely Drain a Node `__ diff --git a/docs/src/how-to/administrate/kubernetes/upgrade-cluster/index.md b/docs/src/how-to/administrate/kubernetes/upgrade-cluster/index.md new file mode 100644 index 0000000000..739ae7ee2c --- /dev/null +++ b/docs/src/how-to/administrate/kubernetes/upgrade-cluster/index.md @@ -0,0 +1,75 @@
+# Upgrading a Kubernetes cluster
+
+Before upgrading Kubernetes, a couple of aspects should be taken into account:
+
+- downtime is (not) permitted
+- stateful backing services that run outside or on top of Kubernetes
+
+As a result, the following questions arise:
+
+1. Is an in-place upgrade required (reuse existing machines) or is it possible to
+   deploy a second cluster right next to the first one and install Wire on top?
+2. How was the Kubernetes cluster deployed?
+
+Depending on the deployment method, the upgrade procedure may vary. It may be reasonable to test
+the upgrade in a non-production environment first.
+Regardless of the deployment method, it is recommended to {ref}`back up the cluster state
+` before starting to upgrade the cluster. Additional background knowledge
+can be found in the section about {ref}`restarting a machine in a Kubernetes cluster `.
+
+```{warning}
+For an in-place upgrade, it is *NOT* recommended to go straight to the latest Kubernetes
+version. Instead, one should upgrade step by step between each minor version.
+```
+
+## Manually
+
+Doing an upgrade by hand is cumbersome and error-prone, which is why there are tools and
+automation for this procedure. The high-level steps would be:
+
+1. upgrade the control plane (also see a more detailed [list](https://kubernetes.io/docs/tasks/administer-cluster/cluster-upgrade/#manual-deployments))
+   : 1. all *etcd* instances
+     2. api-server on each control-plane host
+     3. controllers and scheduler
+2. 
upgrade the nodes (order may vary, depending on whether the kube-components run in containers)
+   : - kubelet
+     - kube-proxy
+     - container runtime
+3. then upgrade the clients (`kubectl`, e.g. on workstations or in pipelines)
+
+*For more details, please refer to the official documentation:*
+[Upgrade A Cluster](https://kubernetes.io/docs/tasks/administer-cluster/cluster-upgrade/)
+
+## Kubespray (Ansible)
+
+Kubespray comes with a dedicated playbook that should be used to perform the upgrade:
+`upgrade-cluster.yml`. Before running the playbook, make sure that the right Kubespray version
+is being used. Each Kubespray version supports only a small and sliding window of Kubernetes
+versions (check `kube_version` & `kube_version_min_required` in `roles/kubespray-defaults/defaults/main.yaml`
+for a given [release version tag](https://github.com/kubernetes-sigs/kubespray/releases)).
+
+The commands may look similar to this example (assuming Kubernetes v1.18 installed
+with Kubespray 2.14):
+
+```bash
+git clone https://github.com/kubernetes-sigs/kubespray
+cd kubespray
+git checkout release-2.15
+${EDITOR} roles/kubespray-defaults/defaults/main.yaml
+
+ansible-playbook -i ./../path/my/inventory-dir -e kube_version=v1.19.7 ./upgrade-cluster.yml
+```
+
+% TODO: adjust the example showing how to run this with wire-server-deploy a/o the offline toolchain container image
+
+% TODO: add ref to the part of this documentation that talks about the air-gapped installation
+
+Kubespray takes care of bringing the new binaries into position on each machine, restarting
+the components, and draining/uncordoning nodes. 
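Since the warning above forbids skipping minor versions, an operator script might first compute the intermediate releases to iterate over, running `upgrade-cluster.yml` once per entry. A minimal sketch under that assumption — `upgrade_path` is a hypothetical helper name for illustration, not part of Kubespray, and the exact patch version for each step still has to be looked up in the matching Kubespray release:

```shell
# Hypothetical helper (not part of Kubespray): list the intermediate minor
# versions between the running release and the target, one per line.
upgrade_path() {
  local cur=${1#v1.}; cur=${cur%%.*}   # e.g. v1.18.10 -> 18
  local tgt=${2#v1.}; tgt=${tgt%%.*}   # e.g. v1.21.3  -> 21
  seq -f 'v1.%g' "$((cur + 1))" "${tgt}"
}

# Each printed version would get its own Kubespray checkout and
# `ansible-playbook ... -e kube_version=...` invocation.
upgrade_path v1.18.10 v1.21.3   # prints v1.19, v1.20, v1.21 (one per line)
```

Looping over this output keeps each `upgrade-cluster.yml` run within the small version window a single Kubespray release supports.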
+ +*For more details please refer to the official documentation:* +[Upgrading Kubernetes in Kubespray](https://kubespray.io/#/docs/upgrades) + +## Kubeadm + +Please refer to the *official documentation:* [Upgrading kubeadm clusters](https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade/) diff --git a/docs/src/how-to/administrate/kubernetes/upgrade-cluster/index.rst b/docs/src/how-to/administrate/kubernetes/upgrade-cluster/index.rst deleted file mode 100644 index 1c09a137f9..0000000000 --- a/docs/src/how-to/administrate/kubernetes/upgrade-cluster/index.rst +++ /dev/null @@ -1,82 +0,0 @@ -Upgrading a Kubernetes cluster -============================== - -Before upgrading Kubernetes, a couple of aspects should be taken into account: - -* downtime is (not) permitted -* stateful backing services that run outside or on top of Kubernetes - -As a result the following questions arise: - -1. Is an in-place upgrade required (reuse existing machines) or is it possible to - deploy a second cluster right next to the first one and install Wire on top? -2. How was the Kubernetes cluster deployed? - -Depending on the deployment method, the upgrade procedure may vary. It may be reasonable to test -the upgrade in a non-production environment first. -Regardless of the deployment method, it is recommended to :ref:`back up the cluster state -` before starting to upgrade the cluster. Additional background knowledge -can be found in the section about :ref:`restarting a machine in an kubernetes cluster `. - - -.. warning:: - - For an in-place upgrade, it is *NOT* recommended to go straight to the latest Kubernetes - version. Instead, one should upgrade step by step between each minor version. - - -Manually -~~~~~~~~ - -Doing an upgrade by hand is cumbersome and error-prone, which is why there are tools and -automation for this procedure. The high-level steps would be: - -1. 
upgrade the control plane (also see a more detailed `list `__) - a) all *etcd* instances - b) api-server on each control-plane host - c) controllers, scheduler, -2. upgrade the nodes (order may vary, depending on whether the kube-components run in containers) - * kubelet - * kube-proxy - * container runtime -3. then upgrade the clients (``kubectl``, e.g. on workstations or in pipelines) - -*For more details, please refer to the official documentation:* -`Upgrade A Cluster `__ - - -Kubespray (Ansible) -~~~~~~~~~~~~~~~~~~~ - -Kubespray comes with a dedicated playbook that should be used to perform the upgrade: -``upgrade-cluster.yml``. Before running the playbook, make sure that the right Kubespray version -is being used. Each Kubespray version supports only a small and sliding window of Kubernetes -versions (check ``kube_version`` & ``kube_version_min_required`` in ``roles/kubespray-defaults/defaults/main.yaml`` -for a given `release version tag `__). - -The commands may look similar to this example (assuming Kubernetes v1.18 version installed -with Kubespray 2.14): - -.. code:: bash - - git clone https://github.com/kubernetes-sigs/kubespray - cd kubespray - git checkout release-2.15 - ${EDITOR} roles/kubespray-defaults/defaults/main.yaml - - ansible-playbook -i ./../path/my/inventory-dir -e kube_version=v1.19.7 ./upgrade-cluster.yml - -.. TODO: adjust the example showing how to run this with wire-server-deploy a/o the offline toolchain container image -.. TODO: add ref to the part of this documentation that talks about the air-gapped installation - -Kubespray takes care of bringing the new binaries into position on each machine, restarting -the components, and draining/uncordon nodes. 
- -*For more details please refer to the official documentation:* -`Upgrading Kubernetes in Kubespray `__ - - -Kubeadm -~~~~~~~ - -Please refer to the *official documentation:* `Upgrading kubeadm clusters `__ diff --git a/docs/src/how-to/administrate/minio.rst b/docs/src/how-to/administrate/minio.md similarity index 58% rename from docs/src/how-to/administrate/minio.rst rename to docs/src/how-to/administrate/minio.md index 6953d1355c..1a79ba648d 100644 --- a/docs/src/how-to/administrate/minio.rst +++ b/docs/src/how-to/administrate/minio.md @@ -1,20 +1,18 @@ -Minio ------- +# Minio +```{eval-rst} .. include:: includes/intro.rst +``` -This section only covers the bare minimum, for more information, see the `minio documentation `__ +This section only covers the bare minimum, for more information, see the [minio documentation](https://docs.min.io/) - -Should you be using minio? -=========================== +## Should you be using minio? Minio can be used to emulate an S3-compatible setup. When a native S3-like storage provider is already present in your network or cloud provider, we advise using that instead. -Setting up interaction with Minio -================================= +## Setting up interaction with Minio Minio can be installed on your servers using our provided ansible playbooks. The ansible playbook will also install the minio client and configure it to @@ -25,29 +23,33 @@ minio to run behind a loadbalancer like HAProxy, and configure the Minio client to point to this loadbalancer instead. 
Our ansible playbooks will also configure the minio client and add the locally
-reachable API under the ``local`` alias::
+reachable API under the `local` alias:
 
-   mc config host list
+```
+mc config host list
+```
 
-If it is not there, it can be added manually as follows::
+If it is not there, it can be added manually as follows:
 
-   mc config host add local http://localhost:9000
+```
+mc config host add local http://localhost:9000
+```
 
 The status of the cluster can be requested by contacting any of the servers. In
-our case we will contact the locally running server::
+our case we will contact the locally running server:
 
-   mc admin info local
+```
+mc admin info local
+```
 
-Minio maintenance
-=================
+## Minio maintenance
 
 There will be times where one wants to take a minio server down for maintenance.
 One might want to apply security patches, or want to take out a broken disk and
 replace it with a fixed one.
 
 Minio will not tell you the health status of disks. You should have separate
 alerting and monitoring in place to keep track of hardware health. For example,
 one could look at
-S.M.A.R.T. values that the disks produce with Prometheus `node_exporter
-`_
+S.M.A.R.T. values that the disks produce with Prometheus [node_exporter](https://github.com/prometheus-community/node-exporter-textfile-collector-scripts/blob/master/smartmon.sh)
 
 Special care has to be taken when restarting Minio nodes, but it should be safe
 to do so. Minio can operate in read-write mode with (N/2) + 1 instances
@@ -62,9 +64,11 @@
 interrupted and the user must retry.
 
 When you shut down a node, one should take precautions that subsequent API
 calls are sent to other nodes in the cluster.
 
-To stop a server, type::
-
-   systemctl stop minio-server
+To stop a server, type:
+
+```
+systemctl stop minio-server
+```
 
 Writes that happen during the server being down will not be synced to the
 server that is offline.
It is important that once you bring the server back
@@ -77,32 +81,40 @@
 is thus recommended to heal an instance immediately once it is back up, before
 you restart any other instances.
 
 Now that the server is offline, perform any maintenance that you want to do.
-Afterwards, restart it with::
+Afterwards, restart it with:
 
-   systemctl start minio-server
+```
+systemctl start minio-server
+```
 
-Now check::
+Now check:
 
-   mc admin info local
+```
+mc admin info local
+```
 
 to see if the cluster is healthy.
 
 Now that the server is back online, it has missed writes that have happened
 whilst it was offline. Because of this, we must heal the cluster now.
 
-A heal of the cluster is performed as follows::
+A heal of the cluster is performed as follows:
 
-   mc admin heal -r local
+```
+mc admin heal -r local
+```
 
-Which will show a result page that looks like this::
+Which will show a result page that looks like this:
 
-   ◑ bunny
-      0/0 objects; 0 B in 2s
-      ┌────────┬───┬─────────────────────┐
-      │ Green  │ 2 │ 66.7% ████████      │
-      │ Yellow │ 1 │ 33.3% ████          │
-      │ Red    │ 0 │ 0.0%                │
-      │ Grey   │ 0 │ 0.0%                │
-      └────────┴───┴─────────────────────┘
+```
+◑ bunny
+   0/0 objects; 0 B in 2s
+   ┌────────┬───┬─────────────────────┐
+   │ Green  │ 2 │ 66.7% ████████      │
+   │ Yellow │ 1 │ 33.3% ████          │
+   │ Red    │ 0 │ 0.0%                │
+   │ Grey   │ 0 │ 0.0%                │
+   └────────┴───┴─────────────────────┘
+```
 
 green - all good
 yellow - healed partially
@@ -110,33 +122,32 @@
 red - quorum missing
 grey - more than the quorum number of shards are gone, meaning the object for some reason is not recoverable
 
 When there are any yellow items, it usually means that not all servers have seen
-the node come up properly again. Running the heal command with the ``--json`` option
+the node come up properly again. Running the heal command with the `--json` option
 will give you more verbose and precise information why the heal only happened
 partially.
 
-.. code:: json
-
-   {
-      "after" : {
-         "online" : 5,
-         "offline" : 1,
-         "missing" : 0,
-         "corrupted" : 0,
-         "drives" : [
-            {
-               "endpoint" : "http://10.0.0.42:9091/var/lib/minio-server1",
-               "state" : "offline",
-               "uuid" : ""
-            },
-            {
-               "uuid" : "",
-               "endpoint" : "/var/lib/minio-server1",
-               "state" : "ok"
-            }
-         ],
-         "color" : "yellow"
-      }
-   }
-
+```json
+{
+  "after" : {
+    "online" : 5,
+    "offline" : 1,
+    "missing" : 0,
+    "corrupted" : 0,
+    "drives" : [
+      {
+        "endpoint" : "http://10.0.0.42:9091/var/lib/minio-server1",
+        "state" : "offline",
+        "uuid" : ""
+      },
+      {
+        "uuid" : "",
+        "endpoint" : "/var/lib/minio-server1",
+        "state" : "ok"
+      }
+    ],
+    "color" : "yellow"
+  }
+}
+```
 
 In our case, we see that the reason for the partial recovery was that one of
 the servers was still considered offline. Rerunning the command yielded
@@ -158,97 +169,96 @@
 thus important to have good monitoring in place and respond accordingly.
 
 Minio itself will auto-heal the cluster every month if the administrator
 doesn't trigger a heal themselves.
 
-
-Rotate root credentials
-=======================
+## Rotate root credentials
 
 In order to change the root credentials, one needs to restart minio once but
 set with the old and the new credentials at the same time.
 
-If you installed minio with the Ansible, the `role `__
+If you installed minio with Ansible, the [role](https://github.com/wireapp/ansible-minio)
 takes care of this. Just change the inventory accordingly and re-apply the role.
 
-For more information, please refer to the *Credentials* section in the `official documentation `__.
+For more information, please refer to the *Credentials* section in the [official documentation](https://docs.min.io/docs/minio-server-configuration-guide.html).
 
-Check the health of a MinIO node
-================================
+(check-the-health-of-a-minio-node)=
 
-This is the procedure to check a minio node's health.
+## Check the health of a MinIO node -First log into the minio server +This is the procedure to check a minio node's health -.. code:: sh +First log into the minio server - ssh +```sh +ssh +``` There, run the following commands: -.. code:: sh - - env $(sudo grep KEY /etc/default/minio-server1 | xargs) bash - export MC_HOST_local="http://$MINIO_ACCESS_KEY:$MINIO_SECRET_KEY@127.0.0.1:9000" - mc admin info local +```sh +env $(sudo grep KEY /etc/default/minio-server1 | xargs) bash +export MC_HOST_local="http://$MINIO_ACCESS_KEY:$MINIO_SECRET_KEY@127.0.0.1:9000" +mc admin info local +``` You should see a result similar to this: -.. code:: sh - - * 192.168.0.12:9092 - Uptime: 2 months - Version: 2020-10-28T08:16:50Z - Network: 6/6 OK - Drives: 1/1 OK - - * 192.168.0.22:9000 - Uptime: 2 months - Version: 2020-10-28T08:16:50Z - Network: 6/6 OK - Drives: 1/1 OK - - * 192.168.0.22:9092 - Uptime: 2 months - Version: 2020-10-28T08:16:50Z - Network: 6/6 OK - Drives: 1/1 OK - - * 192.168.0.32:9000 - Uptime: 2 months - Version: 2020-10-28T08:16:50Z - Network: 6/6 OK - Drives: 1/1 OK - - * 192.168.0.32:9092 - Uptime: 2 months - Version: 2020-10-28T08:16:50Z - Network: 6/6 OK - Drives: 1/1 OK - - * 192.168.0.12:9000 - Uptime: 2 months - Version: 2020-10-28T08:16:50Z - Network: 6/6 OK - Drives: 1/1 OK - -Make sure you see ``Network: 6/6 OK``. 
+```sh
+* 192.168.0.12:9092
+Uptime: 2 months
+Version: 2020-10-28T08:16:50Z
+Network: 6/6 OK
+Drives: 1/1 OK
+
+* 192.168.0.22:9000
+Uptime: 2 months
+Version: 2020-10-28T08:16:50Z
+Network: 6/6 OK
+Drives: 1/1 OK
+
+* 192.168.0.22:9092
+Uptime: 2 months
+Version: 2020-10-28T08:16:50Z
+Network: 6/6 OK
+Drives: 1/1 OK
+
+* 192.168.0.32:9000
+Uptime: 2 months
+Version: 2020-10-28T08:16:50Z
+Network: 6/6 OK
+Drives: 1/1 OK
+
+* 192.168.0.32:9092
+Uptime: 2 months
+Version: 2020-10-28T08:16:50Z
+Network: 6/6 OK
+Drives: 1/1 OK
+
+* 192.168.0.12:9000
+Uptime: 2 months
+Version: 2020-10-28T08:16:50Z
+Network: 6/6 OK
+Drives: 1/1 OK
+```
+
+Make sure you see `Network: 6/6 OK`.
 
 Reboot the machine with:
 
-.. code:: sh
-
-   sudo reboot
+```sh
+sudo reboot
+```
 
 Then wait at least a minute. If you try to ssh in and get 'Connection refused', it just
 means you need to wait a bit longer.
 
-Tip: You can automatically ask SSH to attempt to connect until it is succesful, by using the following command:
-
-.. code:: sh
+Tip: You can automatically ask SSH to attempt to connect until it is successful, by using the following command:
 
-   ssh -o 'ConnectionAttempts 3600' exit
+```sh
+ssh -o 'ConnectionAttempts 3600' exit
+```
 
 Log into minio (repeat the steps above), and check again. You should see a very
 low uptime value on two hosts now.
-This is because we install minio 'twice' on each host.
\ No newline at end of file
+This is because we install minio 'twice' on each host.
diff --git a/docs/src/how-to/administrate/operations.md b/docs/src/how-to/administrate/operations.md
new file mode 100644
index 0000000000..9a8b8522a6
--- /dev/null
+++ b/docs/src/how-to/administrate/operations.md
@@ -0,0 +1,139 @@
+# Operational procedures
+
+This section describes common operations to be performed on operational clusters.
+
+## Reboot procedures
+
+The general procedure to reboot a service is as follows:
+
+- 1. {ref}`Check the health ` of the service.
(If the health isn't good, search for "troubleshooting" in the documentation. If it is good, move to the next step.)
+- 2. Reboot the server the service is running on.
+- 3. {ref}`Check the health ` of the service **again**. (If the health isn't good, search for "troubleshooting" in the documentation. If it is good, your reboot was successful.)
+
+The method for checking health is different for each service type; you can find a list of those methods {ref}`here `.
+
+The method to reset a service is the same for most services, except for `restund`, for which the procedure is different, and can be found {ref}`here `.
+
+For other (non-`restund`) services, the procedure is as follows:
+
+Assuming in this example you are trying to reboot a minio server, follow these steps:
+
+First, {ref}`check the health ` of the service.
+
+Second, reboot the server:
+
+```sh
+ssh -t sudo reboot
+```
+
+Third, wait until the service is up again by trying to connect to it via SSH:
+
+```sh
+ssh -o 'ConnectionAttempts 3600' exit
+```
+
+(`ConnectionAttempts` will make it so it attempts to connect until the host is actually up and the connection is successful)
+
+Fourth, {ref}`check the health ` of the service again.
+
+(operations-health-checks)=
+
+## Health checks
+
+This is a list of the health-checking procedures currently documented, for different service types:
+
+- {ref}`MinIO `
+- {ref}`Cassandra `
+- {ref}`Elasticsearch `
+- {ref}`Etcd `
+- {ref}`Restund ` (the health check is explained as part of the reboot procedure).
+
+To check the health of different services not listed here, see the documentation for that specific project, or ask your Wire contact.
+
+```{note}
+If a service is running inside a Kubernetes pod, checking its health is easy: if the pod is running, it is healthy. A non-healthy pod will stop running, and will be shown as such.
+```
+
+## Draining pods from a node for maintenance
+
+You might want to remove («drain») all pods from a specific node/server, so you can do maintenance work on it without disrupting the entire cluster.
+
+If you want to do this, you should follow the procedure found at: https://kubernetes.io/docs/tasks/administer-cluster/safely-drain-node/
+
+In short, the procedure is essentially:
+
+First, identify the name of the node you wish to drain. You can list all of the nodes in your cluster with
+
+```sh
+kubectl get nodes
+```
+
+Next, tell Kubernetes to drain the node:
+
+```sh
+kubectl drain 
+```
+
+Once it returns (without giving an error), you can power down the node (or equivalently, if on a cloud platform, delete the virtual machine backing the node). If you leave the node in the cluster during the maintenance operation, you need to run
+
+```sh
+kubectl uncordon
+```
+
+afterwards to tell Kubernetes that it can resume scheduling new pods onto the node.
+
+## Understand release tags
+
+We have two major release tags that you sometimes want to map on each other: *github*, and *helm chart*.
+
+Github tags have the form `vYYYY-MM-DD`; the release notes (and some build artefacts) can be found on github, e.g. [here](https://github.com/wireapp/wire-server/releases/v2022-01-18). Helm chart tags have the form `N.NNN.0`. The minor version `0` is for the development branch; non-zero values refer to unreleased intermediate states.
+
+### On the command line
+
+You can find the helm chart tag for a github tag like this:
+
+```sh
+git tag --points-at v2022-01-18 | sort
+```
+
+... and the other way around like this:
+
+```sh
+git tag --points-at chart=2.122.0,image=2.122.0 | sort
+```
+
+Note that the actual tag has the form `chart=,image=`.
+
+Unfortunately, older releases may have more helm chart tags; you need to find the largest number that has the form `N.NNN.0` from the list yourself.
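The "pick the largest `N.NNN.0`" step can be scripted. Below is a minimal sketch (the `newest_release_chart` helper and the sample tag list are hypothetical, not part of wire-server tooling) that filters a list of tags as printed by `git tag --points-at` and returns the highest released chart tag:

```python
import re

def newest_release_chart(tags):
    # Keep only released tags of the form chart=N.NNN.0,image=N.NNN.0
    # (same version on both sides) and return the largest one, or None.
    best = None
    for tag in tags:
        m = re.fullmatch(r"chart=(\d+)\.(\d+)\.0,image=\1\.\2\.0", tag)
        if m:
            version = (int(m.group(1)), int(m.group(2)))
            if best is None or version > best:
                best = version
    if best is None:
        return None
    return f"chart={best[0]}.{best[1]}.0,image={best[0]}.{best[1]}.0"

# Hypothetical output of `git tag --points-at v2022-01-18 | sort`:
tags = [
    "chart=2.121.0,image=2.121.0",
    "chart=2.122.0,image=2.122.0",
    "chart=2.122.3,image=2.122.3",  # intermediate state, not N.NNN.0: skipped
]
print(newest_release_chart(tags))  # chart=2.122.0,image=2.122.0
```
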
+
+A list of all releases can be produced like this:
+
+```sh
+git log --decorate --first-parent origin/master
+```
+
+### In the github UI
+
+Consult [the changelog](https://github.com/wireapp/wire-server/blob/develop/CHANGELOG.md)
+to find the github tag of the release you're interested in (say,
+v2022-01-18).
+
+Visit [the release notes of that release](https://github.com/wireapp/wire-server/releases/v2022-01-18).
+Click on the commit hash:
+
+```{image} operations/fig1.png
+```
+
+Click on the 3 dots:
+
+```{image} operations/fig2.png
+```
+
+Now you can see a (possibly rather long) list of tags, some of which
+have the form `chart=N.NNN.0,image=N.NNN.0`. Pick the one with the
+largest number.
+
+```{image} operations/fig3.png
+```
diff --git a/docs/src/how-to/administrate/operations.rst b/docs/src/how-to/administrate/operations.rst
deleted file mode 100644
index bee240acb1..0000000000
--- a/docs/src/how-to/administrate/operations.rst
+++ /dev/null
@@ -1,144 +0,0 @@
-
-Operational procedures
-======================
-
-This section describes common operations to be performed on operational clusters.
-
-Reboot procedures
------------------
-
-The general procedure to reboot a service is as follows:
-
-* 1. `Check the health `__ of the service. (If the health isn't good, move to `troubleshooting `__. If it is good, move to the next step.)
-* 2. Reboot the server the service is running on.
-* 3. `Check the health `__ of the service **again**. (If the health isn't good, move to `troubleshooting `__. If it is good, your reboot was succesful.)
-
-The method for checking health is different for each service type, you can find a list of those methods `here `__.
-
-The method to reset a service is the same for most services, except for ``restund``, for which the procedure is different, and can be found `here `__.
- -For other (non-``restund``) services, the procedure is as follows: - -Assuming in this example you are trying to reboot a minio server, follow these steps: - -First, `check the health `__ of the services. - -Second, reboot the services: - -.. code:: sh - - ssh -t sudo reboot - -Third, wait until the service is up again by trying to connect to it via SSH : - -.. code:: sh - - ssh -o 'ConnectionAttempts 3600' exit - -(``ConnectionAttempts`` will make it so it attempts to connect until the host is actually Up and the connection is succesful) - -Fourth, `check the health `__ of the service again. - -Health checks -------------- - -This is a list of the health-checking procedures currently documented, for different service types: - -* `MinIO `__. -* `Cassandra `__. -* `elasticsearch `__. -* `Etcd `__. -* `Restund `__ (the health check is explained as part of the reboot procedure). - -To check the health of different services not listed here, see the documentation for that specific project, or ask your Wire contact. - -.. note:: - - If a service is running inside a Kubernetes pod, checking its health is easy: if the pod is running, it is healthy. A non-healthy pod will stop running, and will be shown as such. - -Draining pods from a node for maintainance ------------------------------------------- - -You might want to remove («drain») all pods from a specific node/server, so you can do maintainance work on it, without disrupting the entire cluster. - -If you want to do this, you should follow the procudure found at: https://kubernetes.io/docs/tasks/administer-cluster/safely-drain-node/ - -In short, the procedure is essentially: - -First, identify the name of the node you wish to drain. You can list all of the nodes in your cluster with - -.. code:: sh - - kubectl get nodes - -Next, tell Kubernetes to drain the node: - -.. 
code:: sh - - kubectl drain - -Once it returns (without giving an error), you can power down the node (or equivalently, if on a cloud platform, delete the virtual machine backing the node). If you leave the node in the cluster during the maintenance operation, you need to run - -.. code:: sh - - kubectl uncordon - -afterwards to tell Kubernetes that it can resume scheduling new pods onto the node. - -Understand release tags ------------------------ - -We have two major release tags that you sometimes want to map on each other: *github*, and *helm chart*. - -Github have a tag of the form `vYYYY-MM-DD`, and the release notes and (some build artefacts) can be found on github, eg., `here `__. Helm chart tags have the form `N.NNN.0`. The minor version `0` is for the development branch; non-zero values refer to unreleased intermediate states. - -On the command line -^^^^^^^^^^^^^^^^^^^ - -You can find the github tag for a helm chart tag like this: - -.. code:: sh - - git tag --points-at v2022-01-18 | sort - -... and the other way around like this: - -.. code:: sh - - git tag --points-at chart=2.122.0,image=2.122.0 | sort - -Note that the actual tag has the form `chart=,image=`. - -Unfortunately, older releases may have more helm chart tags; you need to find the largest number that has the form `N.NNN.0` from the list yourself. - -A list of all releases can be produced like this: - -.. code:: sh - - git log --decorate --first-parent origin/master - -If you want to find the - -In the github UI -^^^^^^^^^^^^^^^^ - -Consult `the changelog -`__ -to find the github tag of the release you're interested in (say, -v2022-01-18). - -Visit `the release notes of that release -`__. -Click on the commit hash: - -.. image:: operations/fig1.png - -Click on the 3 dots: - -.. image:: operations/fig2.png - -Now you can see a (possibly rather long) list of tags, some of then -have the form `chart=N.NNN.0,image=N.NNN.0`. Pick the one with the -largest number. - -.. 
image:: operations/fig3.png
diff --git a/docs/src/how-to/administrate/restund.md b/docs/src/how-to/administrate/restund.md
new file mode 100644
index 0000000000..86bdd27e6a
--- /dev/null
+++ b/docs/src/how-to/administrate/restund.md
@@ -0,0 +1,293 @@
+# Restund (TURN)
+
+```{eval-rst}
+.. include:: includes/intro.rst
+```
+
+(allocations)=
+
+## Wire-Server Configuration
+
+The wire-server can either serve a static list of TURN servers to the clients or
+it can discover them using DNS SRV records.
+
+### Static List
+
+To configure a static list of TURN servers to use, override
+`values/wire-server/values.yaml` like this:
+
+```yaml
+# (...)
+
+brig:
+# (...)
+  turnStatic:
+    v1:
+      # v1 entries can be ignored and are not in use anymore since end of 2018.
+    v2:
+      - turn:server1.example.com:3478 # server 1 UDP
+      - turn:server1.example.com:3478?transport=tcp # server 1 TCP
+      - turns:server1.example.com:5478?transport=tcp # server 1 TLS
+      - turn:server2.example.com:3478 # server 2 UDP
+      - turn:server2.example.com:3478?transport=tcp # server 2 TCP
+      - turns:server2.example.com:5478?transport=tcp # server 2 TLS
+  turn:
+    serversSource: files
+```
+
+### DNS SRV Records
+
+To configure wire-server to use DNS SRV records in order to discover TURN
+servers, override `values/wire-server/values.yaml` like this:
+
+```yaml
+# (...)
+
+brig:
+# (...)
+  turn:
+    serversSource: dns
+    baseDomain: prod.example.com
+    discoveryIntervalSeconds: 10
+```
+
+When configured like this, the wire-server will look for these 3 SRV records
+every 10 seconds:
+
+1. `_turn._udp.prod.example.com` will be used to discover the UDP hostnames and port for all the
+   turn servers.
+2. `_turn._tcp.prod.example.com` will be used to discover the TCP hostnames and port for all
+   the turn servers.
+3. `_turns._tcp.prod.example.com` will be used to discover the TLS hostnames and port for
+   all the turn servers.
+
+Entries with weight 0 will be ignored.
Example:
+
+```
+dig +retries=3 +short SRV _turn._udp.prod.example.com
+
+0 0 3478 turn36.prod.example.com
+0 10 3478 turn34.prod.example.com
+0 10 3478 turn35.prod.example.com
+```
+
+At least one of these 3 lookups must succeed for the wire-server to be able to
+respond correctly when `GET /calls/config/v2` is called. All successful
+responses are served in the result.
+
+In addition, if there are any clients using the legacy endpoint, `GET
+/calls/config` (all versions of all mobile apps since 2018 no longer use this), they will be served by the servers listed in the
+`_turn._udp.prod.example.com` SRV record. This endpoint, however, will not
+serve the domain names received inside the SRV record; instead it will serve the
+first `A` record that is associated with each domain name in the SRV record.
+
+## How to see how many people are currently connected to the restund server
+
+You can see the count of currently ongoing calls (also called "allocations"):
+
+```sh
+echo turnstats | nc -u 127.0.0.1 33000 -q1 | grep allocs_cur | cut -d' ' -f2
+```
+
+## How to restart restund (with downtime)
+
+With downtime, it's very easy:
+
+```
+systemctl restart restund
+```
+
+```{warning}
+Restarting `restund` means any user that is currently connected to it (i.e. having a call) will lose their audio/video connection. If you wish to have no downtime, check the section on restarting without downtime below.
+```
+
+(rebooting-a-restund-node)=
+
+## Rebooting a Restund node
+
+If you want to reboot a restund node, you need to make sure the other restund nodes in the cluster are running, so that services are not interrupted by the reboot.
+
+```{warning}
+This procedure as described here will cause downtime, even if a second restund server is up, and will kill any ongoing audio/video calls. The other sections on this page describe a with-downtime and a no-downtime procedure.
+```
+
+Presuming your two restund nodes are called:
+
+- `restund-1`
+- `restund-2`
+
+To prepare for a reboot of `restund-1`, log into the other restund server (`restund-2` in this example), and make sure the docker service is running.
+
+List the running containers, to ensure restund is running, by executing:
+
+```sh
+ssh -t sudo docker container ls
+```
+
+You should see the following in the results:
+
+```sh
+CONTAINER ID IMAGE COMMAND STATUS PORTS NAMES
+ quay.io/wire/restund:v0.4.16b1.0.53 22 seconds ago Up 18 seconds restund
+```
+
+Make sure you see this restund container, and that it is running ("Up").
+
+If it is not, you need to do troubleshooting work. If it is running, you can move forward and reboot `restund-1`.
+
+Now log into the restund server you wish to reboot (`restund-1` in this example), and reboot it:
+
+```sh
+ssh -t sudo reboot
+```
+
+Wait at least a minute for the machine to restart; you can use this command to automatically retry SSH access until it is successful:
+
+```sh
+ssh -o 'ConnectionAttempts 3600' exit
+```
+
+Then log into the restund server (`restund-1`, in this example), and make sure the docker service is running:
+
+```sh
+ssh -t sudo docker container ls
+```
+
+```sh
+CONTAINER ID IMAGE COMMAND STATUS PORTS NAMES
+ quay.io/wire/restund:v0.4.16b1.0.53 22 seconds ago Up 18 seconds restund
+```
+
+Here again, make sure you see a restund container, and that it is running ("Up").
+
+If it is, you have successfully rebooted the restund server, and can, if you need to, apply the same procedure to the other restund servers in your cluster.
+
+## How to restart restund without having downtime
+
+For maintenance you may need to restart a restund server.
+
+1. Remove the restund server you want to restart from the list of advertised nodes, by taking it out of the turn server list that brig advertises:
+
+Go to the place where you store kubernetes configuration for your wire-server installation.
This might be a directory on your admin laptop, or a directory on the kubernetes machine. + +If your override configuration (`values/wire-server/values.yaml`) looks like the following: + +```yaml +# (...) + +brig: +# (...) + turnStatic: + v1: + # v1 entries can be ignored and are not in use anymore since end of 2018. + v2: + - turn:server1.example.com:3478 # server 1 UDP + - turn:server1.example.com:3478?transport=tcp # server 1 TCP + - turns:server1.example.com:5478?transport=tcp # server 1 TLS + - turn:server2.example.com:3478 # server 2 UDP + - turn:server2.example.com:3478?transport=tcp # server 2 TCP + - turns:server2.example.com:5478?transport=tcp # server 2 TLS +``` + +And you want to remove server 1, then change the configuration to read + +```yaml +turnStatic: + v2: + - turn:server2.example.com:3478 # server 2 UDP + - turn:server2.example.com:3478?transport=tcp # server 2 TCP + - turns:server2.example.com:5478?transport=tcp # server 2 TLS +``` + +(or comment out lines by adding a `#` in front of the respective line) + +```yaml +turnStatic: + v2: + #- turn:server1.example.com:3478 # server 1 UDP + #- turn:server1.example.com:3478?transport=tcp # server 1 TCP + #- turns:server1.example.com:5478?transport=tcp # server 1 TLS + - turn:server2.example.com:3478 # server 2 UDP + - turn:server2.example.com:3478?transport=tcp # server 2 TCP + - turns:server2.example.com:5478?transport=tcp # server 2 TLS +``` + +Next, apply these changes to configuration with `./bin/prod-setup.sh` + +You then need to restart the `brig` pods if your code is older than September 2019 (otherwise brig will restart itself automatically): + +```bash +kubectl delete pod -l app=brig +``` + +2. Wait for traffic to drain. This can take up to 12 hours after the configuration change. Wait until current allocations (people connected to the restund server) return 0. See {ref}`allocations`. +3. It's now safe to `systemctl stop restund`, and take any necessary actions. +4. 
`systemctl start restund` and then add the restund server back to configuration of advertised nodes (see step 1, put the server back). + +## How to renew a certificate for restund + +1. Replace the certificate file on the server (under `/etc/restund/restund.pem` usually), either with ansible or manually. Ensure the new certificate file is a concatenation of your whole certificate chain *and* the private key: + +```text +-----BEGIN CERTIFICATE----- +... +-----END CERTIFICATE----- +-----BEGIN CERTIFICATE----- +... +-----END CERTIFICATE----- +-----BEGIN PRIVATE KEY----- +... +-----END PRIVATE KEY----- +``` + +2. Restart restund (see sections above) + +## How to check which restund/TURN servers will be used by clients + +The list of turn servers contacted by clients *should* match what you added to your `turnStatic` configuration. But if you'd like to double-check, here's how: + +Terminal one: + +```sh +kubectl port-forward svc/brig 9999:8080 +``` + +Terminal two: + +```sh +UUID=$(cat /proc/sys/kernel/random/uuid) +curl -s -H "Z-User:$UUID" -H "Z-Connection:anything" "http://localhost:9999/calls/config/v2" | json_pp +``` + +May return something like: + +```json +{ + "ice_servers" : [ + { + "credential" : "ASyFLXqbmg8fuK4chJG3S1Qg4L/nnhpkN0/UctdtTFbGW1AcuuAaOqUMDhm9V2w7zKHY6PPMqjhwKZ2neSE78g==", + "urls" : [ + "turn:turn1.example.com:3478" + ], + "username" : "d=1582157904.v=1.k=0.t=s.r=mbzovplogqxbasbf" + }, + { + "credential" : "ZsxEtGWbpUZ3QWxPZtbX6g53HXu6PWfhhUfGNqRBJjrsly5w9IPAsuAWLEOP7fsoSXF13mgSPROXxMYAB/fQ6g==", + "urls" : [ + "turn:turn1.example.com:3478?transport=tcp" + ], + "username" : "d=1582157904.v=1.k=0.t=s.r=jsafnwtgqhfqjvco" + }, + { + "credential" : "ZsxEtGWbpUZ3QWxPZtbX6g53HXu6PWfhhUfGNqRBJjrsly5w9IPAsuAWLEOP7fsoSXF13mgSPROXxMYAB/fQ6g==", + "urls" : [ + "turns:turn1.example.com:5349?transport=tcp" + ], + "username" : "d=1582157904.v=1.k=0.t=s.r=jsafnwtgqhfqjvco" + } + ], + "ttl" : 3600 +} +``` + +In the above case, there is a single server configured 
to use UDP on port 3478, plain TCP on port 3478, and TLS over TCP on port 5349. The ordering of the list is random and will change on every request made with curl. diff --git a/docs/src/how-to/administrate/restund.rst b/docs/src/how-to/administrate/restund.rst deleted file mode 100644 index 584066ab43..0000000000 --- a/docs/src/how-to/administrate/restund.rst +++ /dev/null @@ -1,301 +0,0 @@ -Restund (TURN) --------------- - -.. include:: includes/intro.rst - -.. _allocations: - -Wire-Server Configuration -~~~~~~~~~~~~~~~~~~~~~~~~~ - -The wire-server can either serve a static list of TURN servers to the clients or -it can discovery them using DNS SRV Records. - -Static List -+++++++++++ - -To configure a static list of TURN servers to use, override -``values/wire-server/values.yaml`` like this: - -.. code:: yaml - - # (...) - - brig: - # (...) - turnStatic: - v1: - # v1 entries can be ignored and are not in use anymore since end of 2018. - v2: - - turn:server1.example.com:3478 # server 1 UDP - - turn:server1.example.com:3478?transport=tcp # server 1 TCP - - turns:server1.example.com:5478?transport=tcp # server 1 TLS - - turn:server2.example.com:3478 # server 2 UDP - - turn:server2.example.com:3478?transport=tcp # server 2 TCP - - turns:server2.example.com:5478?transport=tcp # server 2 TLS - turn: - serversSource: files - -DNS SRV Records -+++++++++++++++ - -To configure wire-server to use DNS SRV records in order to discover TURN -servers, override ``values/wire-server/values.yaml`` like this: - -.. code:: yaml - - # (...) - - brig: - # (...) - turn: - serversSource: dns - baseDomain: prod.example.com - discoveryIntervalSeconds: 10 - -When configured like this, the wire-server would look for these 3 SRV records -every 10 seconds: - -1. ``_turn._udp.prod.example.com`` will be used to discover UDP hostnames and port for all the - turn servers. -2. ``_turn._tcp.prod.example.com`` will be used to discover the TCP hostnames and port for all - the turn servers. -3. 
``_turns._tcp.prod.example.com`` will be used to discover the TLS hostnames and port for - all the turn servers. - -Entries with weight 0 will be ignored. Example: - -.. code:: - - dig +retries=3 +short SRV _turn._udp.prod.example.com - - 0 0 3478 turn36.prod.example.com - 0 10 3478 turn34..prod.example.com - 0 10 3478 turn35.prod.example.com - -At least one of these 3 lookups must succeed for the wire-server to be able to -respond correctly when ``GET /calls/config/v2`` is called. All successful -responses are served in the result. - -In addition, if there are any clients using the legacy endpoint, ``GET -/calls/config``, (all versions of all mobile apps since 2018 no longer use this) they will be served by the servers listed in the -``_turn._udp.prod.example.com`` SRV record. This endpoint, however, will not -serve the domain names received inside the SRV record, instead it will serve the -first ``A`` record that is associated with each domain name in the SRV record. - -How to see how many people are currently connected to the restund server -~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ - -You can see the count of currently ongoing calls (also called "allocations"): - -.. code:: sh - - echo turnstats | nc -u 127.0.0.1 33000 -q1 | grep allocs_cur | cut -d' ' -f2 - -How to restart restund (with downtime) -~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ - -With downtime, it's very easy:: - - systemctl restart restund - -.. warning:: - - Restarting ``restund`` means any user that is currently connected to it (i.e. having a call) will lose its audio/video connection. If you wish to have no downtime, check the next section* - -Rebooting a Restund node -~~~~~~~~~~~~~~~~~~~~~~~~ - -If you want to reboot a restund node, you need to make sure the other restund nodes in the cluster are running, so that services are not interrupted by the reboot. - -.. 
warning:: - - This procedure as described here will cause downtime, even if a second restund server is up; and kill any ongoing audio/video calls. The sections further up describe a downtime and a no-downtime procedure. - -Presuming your two restund nodes are called: - -* ``restund-1`` -* ``restund-2`` - -To prepare for a reboot of ``restund-1``, log into the other restund server (``restund-2``, for example here), and make sure the docker service is running. - -List the running containers, to ensure restund is running, by executing: - -.. code:: sh - - ssh -t sudo docker container ls - -You should see the following in the results: - -.. code:: sh - - CONTAINER ID IMAGE COMMAND STATUS PORTS NAMES - quay.io/wire/restund:v0.4.16b1.0.53 22 seconds ago Up 18 seconds restund - -Make sure you see this restund container, and it is running ("Up"). - -If it is not, you need to do troubleshooting work, if it is running, you can move forward and reboot restund-1. - -Now log into the restund server you wish to reboot (``restund-1`` in this example), and reboot it - -.. code:: sh - - ssh -t sudo reboot - -Wait at least a minute for the machine to restart, you can use this command to automatically retry SSH access until it is succesful: - -.. code:: sh - - ssh -o 'ConnectionAttempts 3600' exit - -Then log into the restund server (``restund-1``, in this example), and make sure the docker service is running: - -.. code:: sh - - ssh -t sudo docker container ls - -.. code:: sh - - CONTAINER ID IMAGE COMMAND STATUS PORTS NAMES - quay.io/wire/restund:v0.4.16b1.0.53 22 seconds ago Up 18 seconds restund - -Here again, make sure you see a restund container, and it is running ("Up"). - -If it is, you have succesfully reboot the restund server, and can if you need to apply the same procedure to the other restund servers in your cluster. 
- -How to restart restund without having downtime -~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ - -For maintenance you may need to restart a restund server. - -1. Remove that restund server you want to restart from the list of advertised nodes, by taking it out of the turn server list that brig advertises: - -Go to the place where you store kubernetes configuration for your wire-server installation. This might be a directory on your admin laptop, or a directory on the kubernetes machine. - -If your override configuration (``values/wire-server/values.yaml``) looks like the following: - -.. code:: yaml - - # (...) - - brig: - # (...) - turnStatic: - v1: - # v1 entries can be ignored and are not in use anymore since end of 2018. - v2: - - turn:server1.example.com:3478 # server 1 UDP - - turn:server1.example.com:3478?transport=tcp # server 1 TCP - - turns:server1.example.com:5478?transport=tcp # server 1 TLS - - turn:server2.example.com:3478 # server 2 UDP - - turn:server2.example.com:3478?transport=tcp # server 2 TCP - - turns:server2.example.com:5478?transport=tcp # server 2 TLS - -And you want to remove server 1, then change the configuration to read - -.. code:: yaml - - turnStatic: - v2: - - turn:server2.example.com:3478 # server 2 UDP - - turn:server2.example.com:3478?transport=tcp # server 2 TCP - - turns:server2.example.com:5478?transport=tcp # server 2 TLS - -(or comment out lines by adding a ``#`` in front of the respective line) - -.. 
code:: yaml - - turnStatic: - v2: - #- turn:server1.example.com:3478 # server 1 UDP - #- turn:server1.example.com:3478?transport=tcp # server 1 TCP - #- turns:server1.example.com:5478?transport=tcp # server 1 TLS - - turn:server2.example.com:3478 # server 2 UDP - - turn:server2.example.com:3478?transport=tcp # server 2 TCP - - turns:server2.example.com:5478?transport=tcp # server 2 TLS - -Next, apply these changes to configuration with ``./bin/prod-setup.sh`` - -You then need to restart the ``brig`` pods if your code is older than September 2019 (otherwise brig will restart itself automatically): - -.. code:: bash - - kubectl delete pod -l app=brig - -2. Wait for traffic to drain. This can take up to 12 hours after the configuration change. Wait until current allocations (people connected to the restund server) return 0. See :ref:`allocations`. -3. It's now safe to ``systemctl stop restund``, and take any necessary actions. -4. ``systemctl start restund`` and then add the restund server back to configuration of advertised nodes (see step 1, put the server back). - -How to renew a certificate for restund -~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ - -1. Replace the certificate file on the server (under ``/etc/restund/restund.pem`` usually), either with ansible or manually. Ensure the new certificate file is a concatenation of your whole certificate chain *and* the private key: - -.. code:: text - - -----BEGIN CERTIFICATE----- - ... - -----END CERTIFICATE----- - -----BEGIN CERTIFICATE----- - ... - -----END CERTIFICATE----- - -----BEGIN PRIVATE KEY----- - ... - -----END PRIVATE KEY----- - - -2. Restart restund (see sections above) - - -How to check which restund/TURN servers will be used by clients -~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ - -The list of turn servers contacted by clients *should* match what you added to your `turnStatic` configuration. But if you'd like to double-check, here's how: - -Terminal one: - -.. 
code:: sh - - kubectl port-forward svc/brig 9999:8080 - -Terminal two: - -.. code:: sh - - UUID=$(cat /proc/sys/kernel/random/uuid) - curl -s -H "Z-User:$UUID" -H "Z-Connection:anything" "http://localhost:9999/calls/config/v2" | json_pp - - -May return something like: - -.. code:: json - - { - "ice_servers" : [ - { - "credential" : "ASyFLXqbmg8fuK4chJG3S1Qg4L/nnhpkN0/UctdtTFbGW1AcuuAaOqUMDhm9V2w7zKHY6PPMqjhwKZ2neSE78g==", - "urls" : [ - "turn:turn1.example.com:3478" - ], - "username" : "d=1582157904.v=1.k=0.t=s.r=mbzovplogqxbasbf" - }, - { - "credential" : "ZsxEtGWbpUZ3QWxPZtbX6g53HXu6PWfhhUfGNqRBJjrsly5w9IPAsuAWLEOP7fsoSXF13mgSPROXxMYAB/fQ6g==", - "urls" : [ - "turn:turn1.example.com:3478?transport=tcp" - ], - "username" : "d=1582157904.v=1.k=0.t=s.r=jsafnwtgqhfqjvco" - }, - { - "credential" : "ZsxEtGWbpUZ3QWxPZtbX6g53HXu6PWfhhUfGNqRBJjrsly5w9IPAsuAWLEOP7fsoSXF13mgSPROXxMYAB/fQ6g==", - "urls" : [ - "turns:turn1.example.com:5349?transport=tcp" - ], - "username" : "d=1582157904.v=1.k=0.t=s.r=jsafnwtgqhfqjvco" - } - ], - "ttl" : 3600 - } - -In the above case, there is a single server configured to use UDP on port 3478, plain TCP on port 3478, and TLS over TCP on port 5349. The ordering of the list is random and will change on every request made with curl. - diff --git a/docs/src/how-to/administrate/users.md b/docs/src/how-to/administrate/users.md new file mode 100644 index 0000000000..b1ec7d1c69 --- /dev/null +++ b/docs/src/how-to/administrate/users.md @@ -0,0 +1,590 @@ +(investigative-tasks)= + +# Investigative tasks (e.g. searching for users as server admin) + +This page requires that you have root access to the machines where kubernetes runs on, or have kubernetes permissions allowing you to port-forward arbitrary pods and services. + +If you have the `backoffice` pod installed, see also the [backoffice README](https://github.com/wireapp/wire-server/tree/develop/charts/backoffice). 
+ +If you don't have `backoffice`, see below for some options: + +## Manually searching for users in cassandra + +Terminal one: + +```sh +kubectl port-forward svc/brig 9999:8080 +``` + +Terminal two: Search for your user by email: + +```sh +EMAIL=user@example.com +curl -v -G localhost:9999/i/users --data-urlencode email=$EMAIL; echo +# or, for nicer formatting +curl -v -G localhost:9999/i/users --data-urlencode email=$EMAIL | json_pp +``` + +You can also search by `handle` (unique username) or by phone: + +```sh +HANDLE=user123 +curl -v -G localhost:9999/i/users --data-urlencode handles=$HANDLE; echo + +PHONE=+490000000000000 # phone numbers must have the +country prefix and no spaces +curl -v -G localhost:9999/i/users --data-urlencode phone=$PHONE; echo +``` + +Which should give you output like: + +```json +[ + { + "managed_by" : "wire", + "assets" : [ + { + "key" : "3-2-a749af8d-a17b-4445-b360-46c93fc41bc6", + "size" : "preview", + "type" : "image" + }, + { + "size" : "complete", + "type" : "image", + "key" : "3-2-6cac6b57-9972-4aba-acbb-f078bc538b54" + } + ], + "picture" : [], + "accent_id" : 0, + "status" : "active", + "name" : "somename", + "email" : "user@example.com", + "id" : "9122e5de-b4fb-40fa-99ad-1b5d7d07bae5", + "locale" : "en", + "handle" : "user123" + } +] +``` + +The interesting part is the `id` (in the example case `9122e5de-b4fb-40fa-99ad-1b5d7d07bae5`): + +(user-deletion)= + +## Deleting a user which is not a team user + +The following will completely delete a user, its conversations, assets, etc. 
The only thing remaining will be an entry in cassandra indicating that this user existed in the past (only the UUID remains; all other attributes, like the name, are purged).
+
+You can now delete that user, after double-checking that the user you wish to delete is really the correct one:
+
+```sh
+# replace the id with the id of the user you want to delete
+curl -v localhost:9999/i/users/9122e5de-b4fb-40fa-99ad-1b5d7d07bae5 -XDELETE
+```
+
+Afterwards, the previous command (to search for a user in cassandra) should return an empty list (`[]`).
+
+When done, press ctrl+c in terminal 1 to cancel the port-forwarding.
+
+## Searching and deleting users with no team
+
+If you require users to be part of a team, or for some other reason need to delete all users who are not part of a team, you first need to find all such users, and then delete them.
+
+To find users that are not part of a team, connect via SSH to the machine where cassandra is running, and then run the following command:
+
+```sh
+cqlsh 9042 -e "select team, handle, id from brig.user" | grep -E "^\s+null"
+```
+
+This will give you a list of handles and IDs with no team associated:
+
+```sh
+null | null | bc22119f-ce11-4402-aa70-307a58fb22ec
+null | tom | 8ecee3d0-47a4-43ff-977b-40a4fc350fed
+null | alice | 2a4c3468-c1e6-422f-bc4d-4aeff47941ac
+null | null | 1b5ca44a-aeb4-4a68-861b-48612438c4cc
+null | bob | 701b4eab-6df2-476d-a818-90dc93e8446e
+```
+
+You can then {ref}`delete each user with these instructions <user-deletion>`.
+
+## Manual search on elasticsearch (via brig, recommended)
+
+This should only be necessary in the case of a (suspected) data inconsistency between cassandra and elasticsearch.
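One way to check for such an inconsistency is to compare the two views of the same user field by field. A minimal, self-contained sketch (the two JSON snippets are stand-ins for real responses from brig's `/i/users` and `/search/contacts` endpoints; in practice, fetch them over the port-forward shown below):

```shell
# Stand-in responses; in practice these come from `GET /i/users` (cassandra)
# and `GET /search/contacts` (elasticsearch) via the brig port-forward.
cassandra_view='{"id":"9122e5de-b4fb-40fa-99ad-1b5d7d07bae5","name":"somename","handle":"user123"}'
es_view='{"id":"9122e5de-b4fb-40fa-99ad-1b5d7d07bae5","name":"somename","handle":"user123"}'

# Compare the fields that both stores should agree on
for field in id name handle; do
  a=$(printf '%s' "$cassandra_view" | jq -r ".$field")
  b=$(printf '%s' "$es_view" | jq -r ".$field")
  if [ "$a" = "$b" ]; then
    echo "$field: consistent"
  else
    echo "$field: MISMATCH ($a vs $b)"
  fi
done
```

If any field mismatches, the sections below show how to inspect (and, as a last resort, clean up) the elasticsearch side directly.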
+
+Terminal one:
+
+```sh
+kubectl port-forward svc/brig 9999:8080
+```
+
+Terminal two: Search for your user by name, handle, or a prefix of that name or handle:
+
+```sh
+NAMEORPREFIX=test7
+UUID=$(cat /proc/sys/kernel/random/uuid)
+curl -H "Z-User:$UUID" "http://localhost:9999/search/contacts?q=$NAMEORPREFIX"; echo
+# or, for pretty output:
+curl -H "Z-User:$UUID" "http://localhost:9999/search/contacts?q=$NAMEORPREFIX" | json_pp
+```
+
+If no match is found, expect a response like this:
+
+```json
+{"took":91,"found":0,"documents":[],"returned":0}
+```
+
+If matches are found, the result should look like this:
+
+```json
+{
+   "found" : 2,
+   "documents" : [
+      {
+         "id" : "dbdbf370-48b3-4e1e-b377-76d7d4cbb4f2",
+         "name" : "Test",
+         "handle" : "test7",
+         "accent_id" : 7
+      },
+      {
+         "name" : "Test",
+         "accent_id" : 0,
+         "handle" : "test7476",
+         "id" : "a93240b0-ba89-441e-b8ee-ff4403808f93"
+      }
+   ],
+   "returned" : 2,
+   "took" : 4
+}
+```
+
+## How to manually search for a user on elasticsearch directly (not recommended)
+
+First, ssh to an elasticsearch instance.
+
+```sh
+ssh
+```
+
+Then run the following:
+
+```sh
+PREFIX=...
+curl -s "http://localhost:9200/directory/_search?q=$PREFIX" | json_pp
+```
+
+The `id` (UUID) returned can be used when deleting (see below).
+
+## How to manually delete a user from elasticsearch only
+
+```{warning}
+This is NOT RECOMMENDED. Be sure you know what you're doing. This only deletes the user from elasticsearch, but not from cassandra. Any change of e.g. the username or displayname of that user means this user will re-appear in the elasticsearch database. Instead, either fully delete the user ({ref}`user-deletion`) or make use of the internal GET/PUT `/i/searchable` endpoint on brig to make this user prefix-unsearchable.
+```
+
+If, despite the warning, you wish to continue:
+
+First, ssh to an elasticsearch instance:
+
+```sh
+ssh
+```
+
+Next, check that the user exists:
+
+```sh
+UUID=...
+curl -s "http://localhost:9200/directory/user/$UUID" | json_pp
+```
+
+That should return `"found" : true`, like this:
+
+```json
+{
+   "_type" : "user",
+   "_version" : 1575998428262000,
+   "_id" : "b3e9e445-fb02-47f3-bac0-63f5f680d258",
+   "found" : true,
+   "_index" : "directory",
+   "_source" : {
+      "normalized" : "Mr Test",
+      "handle" : "test12345",
+      "id" : "b3e9e445-fb02-47f3-bac0-63f5f680d258",
+      "name" : "Mr Test",
+      "accent_id" : 1
+   }
+}
+```
+
+Then delete it:
+
+```sh
+UUID=...
+curl -s -XDELETE "http://localhost:9200/directory/user/$UUID" | json_pp
+```
+
+## Mass-invite users to a team
+
+If you need to invite members to a specific team, you can use the `create_team_members.sh` Bash script, located [here](https://github.com/wireapp/wire-server/blob/develop/hack/bin/create_team_members.sh).
+
+This script does not create users or cause them to join a team by itself. Instead, it sends invitations to potential users via email; when users accept the invitation, they create their account, set their password, and are added to the team as team members.
+
+Input is a [CSV file](https://en.wikipedia.org/wiki/Comma-separated_values) in the form `'Email,Suggested User Name'`.
+
+You also need to specify the inviting admin user, the team, the URI for the Brig ([API](https://docs.wire.com/understand/federation/api.html?highlight=brig)) service (host), and finally the input (CSV) file containing the users to invite.
+
+The exact format for the parameters passed to the script is [as follows](https://github.com/wireapp/wire-server/blob/develop/hack/bin/create_team_members.sh#L17):
+
+- `-a <admin-uuid>`: [User ID](https://docs.wire.com/understand/federation/api.html?highlight=user%20id#qualified-identifiers-and-names) in [UUID format](https://en.wikipedia.org/wiki/Universally_unique_identifier) of the inviting admin. For example `9122e5de-b4fb-40fa-99ad-1b5d7d07bae5`.
+- `-t <team-uuid>`: ID of the inviting team, in the same format.
+- `-h <base-uri>`: Base URI of brig's internal endpoint.
+- `-c <input-file>`: file containing info on the invitees in the format 'Email,UserName'.
+
+For example, one such execution of the script could look like:
+
+```sh
+sh create_team_members.sh -a 9122e5de-b4fb-40fa-99ad-1b5d7d07bae5 -t 123e4567-e89b-12d3-a456-426614174000 -h http://localhost:9999 -c users_to_invite.csv
+```
+
+Note: using `http://localhost:9999` as the host implies you are running the `kubectl port-forward` given at the top of this document.
+
+Once the script is run, one invitation per second is sent until every user in the file has been invited.
+
+If you have a lot of invitations to send and this is too slow, you can speed things up by commenting out [this line](https://github.com/wireapp/wire-server/blob/develop/hack/bin/create_team_members.sh#L91).
+
+## How to obtain logs from an Android client to investigate issues
+
+Wire clients communicate with Wire servers (backend).
+
+Sometimes to investigate server issues, you (or the Wire team) will need client information, in the form of client logs.
+
+In order to obtain client logs on the Android Wire client, follow this procedure:
+
+- Open the Wire app (client) on your Android device
+- Click on the round user icon in the top left of the screen, leading to your user profile
+- Click on "Settings" at the bottom of the screen
+- Click on "Advanced" in the menu
+- Check/activate "Collect usage data"
+- Now go back to using your client normally, so usage data is generated. If you have been asked to follow a specific testing regime, or log a specific problem, this is the time to do so.
+- Once enough usage data is generated, go back to the "Advanced" screen (User profile > Settings > Advanced)
+- Click on "Create debug report"
+- A menu will open allowing you to share the debug report; you can now save it or send it via email (or any other means) to the Wire team.
+
+## How to obtain logs from an iOS client to investigate issues
+
+Wire clients communicate with Wire servers (backend).
+
+Sometimes to investigate server issues, you (or the Wire team) will need client information, in the form of client logs.
+
+In order to obtain client logs on the iOS Wire client, follow this procedure:
+
+- Open the Wire app (client) on your iOS device
+- Click on the round user icon in the top left of the screen, leading to your user profile
+- Click on "Settings" at the bottom of the screen
+- Click on "Advanced" in the menu
+- Check/activate "Collect usage data"
+- Now go back to using your client normally, so usage data is generated. If you have been asked to follow a specific testing regime, or log a specific problem, this is the time to do so.
+- Once enough usage data is generated, go back to the "Advanced" screen (User profile > Settings > Advanced)
+- Click on "Send report to wire"
+- A menu will open to share the debug report via email, allowing you to send it to the Wire team.
+
+## How to retrieve metric values manually
+
+Metric values are sets of data points about services, such as status and other measures, that can be retrieved at specific endpoints, typically by a monitoring system (such as Prometheus) for monitoring, diagnosis and graphing.
+
+Sometimes you will want to manually obtain this data, which is normally collected automatically by Prometheus.
+
+Some of the pods allow you to grab metrics by accessing their `/i/metrics` endpoint, in particular:
+
+- `brig`: User management API
+- `cannon`: WebSockets API
+- `cargohold`: Assets storage API
+- `galley`: Conversations and Teams API
+- `gundeck`: Push Notifications API
+- `spar`: Single Sign-On and SCIM
+
+For more details on the various services/pods, you can check out {ref}`this link `.
+
+Before you can grab metrics from a pod, you need to find its IP address. 
You do this by running the following command: + +```sh +d kubectl get pods -owide +``` + +(this presumes you are already in your normal Wire environment, which you obtain by running `source ./bin/offline-env.sh`) + +Which will give you an output that looks something like this: + +``` +demo@Ubuntu-1804-bionic-64-minimal:~/Wire-Server$ d kubectl get pods -owide +NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES +account-pages-784f9b547c-cp444 1/1 Running 0 6d23h 10.233.113.5 kubenode3 +brig-746ddc55fd-6pltz 1/1 Running 0 6d23h 10.233.110.11 kubenode2 +brig-746ddc55fd-d59dw 1/1 Running 0 6d4h 10.233.110.23 kubenode2 +brig-746ddc55fd-zp7jl 1/1 Running 0 6d23h 10.233.113.10 kubenode3 +brig-index-migrate-data-45rm7 0/1 Completed 0 6d23h 10.233.110.9 kubenode2 +cannon-0 1/1 Running 0 3h1m 10.233.119.41 kubenode1 +cannon-1 1/1 Running 0 3h1m 10.233.113.47 kubenode3 +cannon-2 1/1 Running 0 3h1m 10.233.110.51 kubenode2 +cargohold-65bff97fc6-8b9ls 1/1 Running 0 6d4h 10.233.113.20 kubenode3 +cargohold-65bff97fc6-bkx6x 1/1 Running 0 6d23h 10.233.113.4 kubenode3 +cargohold-65bff97fc6-tz8fh 1/1 Running 0 6d23h 10.233.110.5 kubenode2 +cassandra-migrations-bjsdz 0/1 Completed 0 6d23h 10.233.110.3 kubenode2 +demo-smtp-784ddf6989-vmj7t 1/1 Running 0 6d23h 10.233.113.2 kubenode3 +elasticsearch-index-create-7r8g4 0/1 Completed 0 6d23h 10.233.110.4 kubenode2 +fake-aws-sns-6c7c4b7479-wfp82 2/2 Running 0 6d4h 10.233.110.27 kubenode2 +fake-aws-sqs-59fbfbcbd4-n4c5z 2/2 Running 0 6d23h 10.233.110.2 kubenode2 +galley-7c89c44f7b-nm2rr 1/1 Running 0 6d23h 10.233.110.8 kubenode2 +galley-7c89c44f7b-tdxz4 1/1 Running 0 6d23h 10.233.113.6 kubenode3 +galley-7c89c44f7b-tr8pm 1/1 Running 0 6d4h 10.233.110.29 kubenode2 +galley-migrate-data-g66rz 0/1 Completed 0 6d23h 10.233.110.13 kubenode2 +gundeck-7fd75c7c5f-jb8xq 1/1 Running 0 6d23h 10.233.110.6 kubenode2 +gundeck-7fd75c7c5f-lbth9 1/1 Running 0 6d23h 10.233.113.8 kubenode3 +gundeck-7fd75c7c5f-wvcw6 1/1 Running 0 6d4h 
10.233.113.23 kubenode3 +nginz-5cdd8b588b-dbn86 2/2 Running 16 6d23h 10.233.113.11 kubenode3 +nginz-5cdd8b588b-gk6rw 2/2 Running 14 6d23h 10.233.110.12 kubenode2 +nginz-5cdd8b588b-jvznt 2/2 Running 11 6d4h 10.233.113.21 kubenode3 +reaper-6957694667-s5vz5 1/1 Running 0 6d4h 10.233.110.26 kubenode2 +redis-ephemeral-master-0 1/1 Running 0 6d23h 10.233.113.3 kubenode3 +spar-56d77f85f6-bw55q 1/1 Running 0 6d23h 10.233.113.9 kubenode3 +spar-56d77f85f6-mczzd 1/1 Running 0 6d4h 10.233.110.28 kubenode2 +spar-56d77f85f6-vvvfq 1/1 Running 0 6d23h 10.233.110.7 kubenode2 +spar-migrate-data-ts4sx 0/1 Completed 0 6d23h 10.233.110.14 kubenode2 +team-settings-fbbb899c-qxx7m 1/1 Running 0 6d4h 10.233.110.24 kubenode2 +webapp-d97869795-grnft 1/1 Running 0 6d4h 10.233.110.25 kubenode2 +``` + +Here presuming we need to get metrics from `gundeck`, we can see the IP of one of the gundeck pods is `10.233.110.6`. + +We can therefore connect to node `kubenode2` on which this pod runs with `ssh kubenode2.your-domain.com`, and run the following: + +```sh +curl 10.233.110.6:8080/i/metrics +``` + +Alternatively, if you don't want to, or can't for some reason, connect to kubenode2, you can use port redirect instead: + +```sh +# Allow Gundeck to be reached via the port 7777 +kubectl --kubeconfig kubeconfig.dec -n wire port-forward service/gundeck 7777:8080 +# Reach Gundeck directly at port 7777 using curl, output resulting data to stdout/terminal +curl -v http://127.0.0.1:7777/i/metrics +``` + +Output will look something like this (truncated): + +```sh +# HELP gc_seconds_wall Wall clock time spent on last GC +# TYPE gc_seconds_wall gauge +gc_seconds_wall 5481304.0 +# HELP gc_seconds_cpu CPU time spent on last GC +# TYPE gc_seconds_cpu gauge +gc_seconds_cpu 5479828.0 +# HELP gc_bytes_used_current Number of bytes in active use as of the last GC +# TYPE gc_bytes_used_current gauge +gc_bytes_used_current 1535232.0 +# HELP gc_bytes_used_max Maximum amount of memory living on the heap after the last 
major GC
+# TYPE gc_bytes_used_max gauge
+gc_bytes_used_max 2685312.0
+# HELP gc_bytes_allocated_total Bytes allocated since the start of the server
+# TYPE gc_bytes_allocated_total gauge
+gc_bytes_allocated_total 4.949156056e9
+```
+
+This example is for Gundeck, but you can also get metrics for other services. All k8s services are listed at {ref}`this link `.
+
+This is an example adapted for Cannon:
+
+```sh
+kubectl --kubeconfig kubeconfig.dec -n wire port-forward service/cannon 7777:8080
+curl -v http://127.0.0.1:7777/i/metrics
+```
+
+In the output of this command, `net_websocket_clients` is roughly the number of connected clients.
+
+(reset-session-cookies)=
+
+## Reset session cookies
+
+Remove session cookies on your system to force users to log in again within the next 15 minutes (or whenever they come back online):
+
+```{warning}
+This will cause interruptions to ongoing calls and should be timed properly.
+```
+
+### Reset cookies of all users
+
+```sh
+ssh
+# from the ssh session
+cqlsh
+# from the cqlsh shell
+truncate brig.user_cookies;
+```
+
+### Reset cookies for a defined list of users
+
+```sh
+ssh
+# within the ssh session
+cqlsh
+# within the cqlsh shell: delete the cookies of the given users, by userId
+delete from brig.user_cookies where user in (c0d64244-8ab4-11ec-8fda-37788be3a4e2, ...);
+```
+
+(Keep reading if you want to find out which users on your system are using SSO.)
+
+(identify-sso-users)=
+
+## Identify all users using SSO
+
+Collect all teams configured with an IdP:
+
+```sh
+ssh
+# within the ssh session start cqlsh
+cqlsh
+# within the cqlsh shell export all teams with idp
+copy spar.idp (team) TO 'teams_with_idp.csv' with header=false;
+```
+
+Close the session and proceed locally:
+
+```sh
+# download csv file
+scp :teams_with_idp.csv . 
+# convert to a single-line, comma-separated list
+tr '\n' ',' < teams_with_idp.csv; echo
+```
+
+And use this list to get all team members in these teams:
+
+```sh
+ssh
+# within the ssh session start cqlsh
+cqlsh
+# within the cqlsh shell select all members of the previously identified teams
+# the team list should look like this: f2207d98-8ab3-11ec-b689-07fc1fd409c9, ...
+select user from galley.team_member where team in ();
+# alternatively, export the list of all users (for filtering locally in e.g. Excel)
+copy galley.team_member (user, team, sso_id) TO 'users_with_idp.csv' with header=true;
+```
+
+Close the session and proceed locally to generate the list of all users from teams with an IdP:
+
+```sh
+# download csv file
+scp :users_with_idp.csv .
+# convert to a single-line, comma-separated list
+tr '\n' ',' < users_with_idp.csv; echo
+```
+
+```{note}
+Don't forget to delete the created CSV files after you have downloaded/processed them.
+```
+
+## Create a team using the SCIM API
+
+If you need to create a team manually, maybe because team creation was blocked in the "teams" interface, follow this procedure:
+
+First, download or locate the bash script `wire-server/hack/bin/create_test_team_scim.sh`.
+
+Then run it the following way:
+
+```sh
+./create_test_team_scim.sh -h <brig-uri> -s <spar-uri>
+```
+
+Where:
+
+- In `-h <brig-uri>`, replace `<brig-uri>` with the base URL for your brig host (for example `https://brig-host.your-domain.com`; defaults to `http://localhost:8082`)
+- In `-s <spar-uri>`, replace `<spar-uri>` with the base URL for your spar host (for example `https://spar-host.your-domain.com`; defaults to `http://localhost:8088`)
+
+You might also need to edit the admin email and admin password at lines `48` and `49` of the script.
+
+To learn more about the different pods and how to identify them, see `this page`.
+
+You can list your pods with `kubectl get pods --namespace wire`. 
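If you script this flow yourself rather than using `create_test_team_scim.sh`, note that shell variables do not expand inside single-quoted strings, so the JSON request bodies must be assembled carefully. A sketch of one safe approach, using `jq` to build the payload (all values here are placeholders):

```shell
# Placeholder values; substitute your real admin email/password and team name
ADMIN_EMAIL='admin@example.com'
ADMIN_PASSWORD='secret'
NAME_OF_TEAM='My Team'

# `jq -n` constructs the JSON and takes care of quoting/escaping the values
payload=$(jq -n \
  --arg email "$ADMIN_EMAIL" \
  --arg pass "$ADMIN_PASSWORD" \
  --arg team "$NAME_OF_TEAM" \
  '{email: $email, password: $pass, name: $team, team: {name: $team, icon: "default"}}')

echo "$payload"
# The payload can then be POSTed to brig, e.g.:
# curl -XPOST "$BRIG_HOST/i/users" -H 'Content-type: application/json' -d "$payload"
```

(The SCIM token request further below uses the alternative approach of closing and reopening the single quotes around each variable.)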
+
+Alternatively, you can run the series of commands manually with `curl`, like this:
+
+```sh
+# note: each variable is spliced in outside the single quotes so that the shell expands it
+curl -i -s --show-error \
+  -XPOST "$BRIG_HOST/i/users" \
+  -H'Content-type: application/json' \
+  -d'{"email":"'"$ADMIN_EMAIL"'","password":"'"$ADMIN_PASSWORD"'","name":"'"$NAME_OF_TEAM"'","team":{"name":"'"$NAME_OF_TEAM"'","icon":"default"}}'
+```
+
+Where:
+
+- `$BRIG_HOST` is the base URL for your brig host
+- `$ADMIN_EMAIL` is the email for the admin account for the new team
+- `$ADMIN_PASSWORD` is the password for the admin account for the new team
+- `$NAME_OF_TEAM` is the name of the newly created team
+
+Out of the result of this command, you will be able to extract an `Admin UUID` and a `Team UUID`, which you will need later.
+
+Then run:
+
+```sh
+curl -X POST \
+  --header 'Content-Type: application/json' \
+  --header 'Accept: application/json' \
+  -d '{"email":"'"$ADMIN_EMAIL"'","password":"'"$ADMIN_PASSWORD"'"}' \
+  $BRIG_HOST/login'?persist=false' | jq -r .access_token
+```
+
+Where the values to replace are the same as in the command above.
+
+This command should output an access token; take note of it.
+
+Then run:
+
+```sh
+curl -X POST \
+  --header "Authorization: Bearer $ACCESS_TOKEN" \
+  --header 'Content-Type: application/json;charset=utf-8' \
+  --header 'Z-User: '"$ADMIN_UUID" \
+  -d '{ "description": "test '"`date`"'", "password": "'"$ADMIN_PASSWORD"'" }' \
+  $SPAR_HOST/scim/auth-tokens
+```
+
+Where the values to replace are the same as in the first command, plus `$ACCESS_TOKEN`, the access token you just took note of in the previous command.
+
+Out of the JSON output of this command, you should be able to extract:
+
+- A SCIM token (the `token` value in the JSON).
+- A SCIM token ID (the `id` value inside the `info` object).
+
+Equipped with those tokens, we move on to the next script, `wire-server/hack/bin/create_team.sh`.
+
+This script can be run the following way:
+
+```sh
+./create_team.sh -h <host> -o <owner-name> -e <owner-email> -p <owner-password> -v <email-code> -t <team-name> -c <currency>
+```
+
+Where:
+
+- `-h <host>`: Base URI of brig. 
default: `http://localhost:8080`
+- `-o <owner-name>`: user display name of the owner of the team to be created. default: "owner name n/a"
+- `-e <owner-email>`: email address of the owner of the team to be created. default: "owner email n/a"
+- `-p <owner-password>`: owner password. default: "owner pass n/a"
+- `-v <email-code>`: validation code received by email after running the previous script/commands. default: "email code n/a"
+- `-t <team-name>`: name of the team to be created. default: "team name n/a"
+- `-c <currency>`: currency of the team. default: "USD"
+
+Alternatively, you can manually run the command:
+
+```sh
+# note: each variable is spliced in outside the single quotes so that the shell expands it
+curl -i -s --show-error \
+  -XPOST "$BRIG_HOST/register" \
+  -H'Content-type: application/json' \
+  -d'{"name":"'"$OWNER_NAME"'","email":"'"$OWNER_EMAIL"'","password":"'"$OWNER_PASSWORD"'","email_code":"'"$EMAIL_CODE"'","team":{"currency":"'"$TEAM_CURRENCY"'","icon":"default","name":"'"$TEAM_NAME"'"}}'
+```
+
+Where:
+
+- `$BRIG_HOST` is the base URL for your brig service
+- `$OWNER_NAME` is the name of the owner of the team to be created
+- `$OWNER_EMAIL` is the email address of the owner of the team to be created
+- `$OWNER_PASSWORD` is the password of the owner of the team to be created
+- `$EMAIL_CODE` is the validation code received by email after running the previous script/command
+- `$TEAM_CURRENCY` is the currency of the team
+- `$TEAM_NAME` is the name of the team
diff --git a/docs/src/how-to/administrate/users.rst b/docs/src/how-to/administrate/users.rst
deleted file mode 100644
index e7d1e856dc..0000000000
--- a/docs/src/how-to/administrate/users.rst
+++ /dev/null
@@ -1,609 +0,0 @@
-.. _investigative_tasks:
-
-Investigative tasks (e.g. searching for users as server admin)
----------------------------------------------------------------
-
-This page requires that you have root access to the machines where kubernetes runs on, or have kubernetes permissions allowing you to port-forward arbitrary pods and services.
-
-If you have the `backoffice` pod installed, see also the `backoffice README <https://github.com/wireapp/wire-server/tree/develop/charts/backoffice>`__.
-
-If you don't have `backoffice`, see below for some options:
-
-Manually searching for users in cassandra
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-
-Terminal one:
-
-.. 
code:: sh - - kubectl port-forward svc/brig 9999:8080 - -Terminal two: Search for your user by email: - -.. code:: sh - - EMAIL=user@example.com - curl -v -G localhost:9999/i/users --data-urlencode email=$EMAIL; echo - # or, for nicer formatting - curl -v -G localhost:9999/i/users --data-urlencode email=$EMAIL | json_pp - -You can also search by ``handle`` (unique username) or by phone: - -.. code:: sh - - HANDLE=user123 - curl -v -G localhost:9999/i/users --data-urlencode handles=$HANDLE; echo - - PHONE=+490000000000000 # phone numbers must have the +country prefix and no spaces - curl -v -G localhost:9999/i/users --data-urlencode phone=$PHONE; echo - - -Which should give you output like: - -.. code:: json - - [ - { - "managed_by" : "wire", - "assets" : [ - { - "key" : "3-2-a749af8d-a17b-4445-b360-46c93fc41bc6", - "size" : "preview", - "type" : "image" - }, - { - "size" : "complete", - "type" : "image", - "key" : "3-2-6cac6b57-9972-4aba-acbb-f078bc538b54" - } - ], - "picture" : [], - "accent_id" : 0, - "status" : "active", - "name" : "somename", - "email" : "user@example.com", - "id" : "9122e5de-b4fb-40fa-99ad-1b5d7d07bae5", - "locale" : "en", - "handle" : "user123" - } - ] - -The interesting part is the ``id`` (in the example case ``9122e5de-b4fb-40fa-99ad-1b5d7d07bae5``): - -.. _user-deletion: - -Deleting a user which is not a team user -~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ - -The following will completely delete a user, its conversations, assets, etc. The only thing remaining will be an entry in cassandra indicating that this user existed in the past (only the UUID remains, all other attributes like name etc are purged) - -You can now delete that user by double-checking that the user you wish to delete is really the correct user: - -.. 
code:: sh - - # replace the id with the id of the user you want to delete - curl -v localhost:9999/i/users/9122e5de-b4fb-40fa-99ad-1b5d7d07bae5 -XDELETE - -Afterwards, the previous command (to search for a user in cassandra) should return an empty list (``[]``). - -When done, on terminal 1, ctrl+c to cancel the port-forwarding. - -Searching and deleting users with no team -~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ - -If you require users to be part of a team, or for some other reason you need to delete all users who are not part of a team, you need to first find all such users, and then delete them. - -To find users that are not part of a team, first you need to connect via SSH to the machine where cassandra is running, and then run the following command: - -.. code:: sh - - cqlsh 9042 -e "select team, handle, id from brig.user" | grep -E "^\s+null" - -This will give you a list of handles and IDs with no team associated: - -.. code:: sh - - null | null | bc22119f-ce11-4402-aa70-307a58fb22ec - null | tom | 8ecee3d0-47a4-43ff-977b-40a4fc350fed - null | alice | 2a4c3468-c1e6-422f-bc4d-4aeff47941ac - null | null | 1b5ca44a-aeb4-4a68-861b-48612438c4cc - null | bob | 701b4eab-6df2-476d-a818-90dc93e8446e - -You can then `delete each user with these instructions <./users.html#deleting-a-user-which-is-not-a-team-user>`__. - -Manual search on elasticsearch (via brig, recommended) -~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ - -This should only be necessary in the case of some (suspected) data inconsistency between cassandra and elasticsearch. - -Terminal one: - -.. code:: sh - - kubectl port-forward svc/brig 9999:8080 - -Terminal two: Search for your user by name or handle or a prefix of that handle or name: - -.. 
code:: sh - - NAMEORPREFIX=test7 - UUID=$(cat /proc/sys/kernel/random/uuid) - curl -H "Z-User:$UUID" "http://localhost:9999/search/contacts?q=$NAMEORPREFIX"; echo - # or, for pretty output: - curl -H "Z-User:$UUID" "http://localhost:9999/search/contacts?q=$NAMEORPREFIX" | json_pp - -If no match is found, expect a response like this: - -.. code:: json - - {"took":91,"found":0,"documents":[],"returned":0} - -If matches are found, the result should look like this: - -.. code:: json - - { - "found" : 2, - "documents" : [ - { - "id" : "dbdbf370-48b3-4e1e-b377-76d7d4cbb4f2", - "name" : "Test", - "handle" : "test7", - "accent_id" : 7 - }, - { - "name" : "Test", - "accent_id" : 0, - "handle" : "test7476", - "id" : "a93240b0-ba89-441e-b8ee-ff4403808f93" - } - ], - "returned" : 2, - "took" : 4 - } - -How to manually search for a user on elasticsearch directly (not recommended) -~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ - -First, ssh to an elasticsearch instance. - -.. code:: sh - - ssh - -Then run the following: - -.. code:: sh - - PREFIX=... - curl -s "http://localhost:9200/directory/_search?q=$PREFIX" | json_pp - -The `id` (UUID) returned can be used when deleting (see below). - -How to manually delete a user from elasticsearch only -~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ - -.. warning:: - - This is NOT RECOMMENDED. Be sure you know what you're doing. This only deletes the user from elasticsearch, but not from cassandra. Any change of e.g. the username or displayname of that user means this user will re-appear in the elasticsearch database. Instead, either fully delete a user: :ref:`user-deletion` or make use of the internal GET/PUT ``/i/searchable`` endpoint on brig to make this user prefix-unsearchable. - -If, despite the warning, you wish to continue: - -First, ssh to an elasticsearch instance: - -.. code:: sh - - ssh - -Next, check that the user exists: - -.. code:: sh - - UUID=... 
- curl -s "http://localhost:9200/directory/user/$UUID" | json_pp - -That should return a ``"found": true``, like this: - -.. code:: json - - { - "_type" : "user", - "_version" : 1575998428262000, - "_id" : "b3e9e445-fb02-47f3-bac0-63f5f680d258", - "found" : true, - "_index" : "directory", - "_source" : { - "normalized" : "Mr Test", - "handle" : "test12345", - "id" : "b3e9e445-fb02-47f3-bac0-63f5f680d258", - "name" : "Mr Test", - "accent_id" : 1 - } - } - - -Then delete it: - -.. code:: sh - - UUID=... - curl -s -XDELETE "http://localhost:9200/directory/user/$UUID" | json_pp - -Mass-invite users to a team -~~~~~~~~~~~~~~~~~~~~~~~~~~~ - -If you need to invite members to a specific team, you can use the ``create_team_members.sh`` Bash script, located `here `__. - -This script does not create users or cause them to join a team by itself; instead, it sends invites to potential users via email, and when users accept the invitation, they create their account, set their password, and are added to the team as team members. - -Input is a `CSV file `__, in comma-separated format, in the form ``'Email,Suggested User Name'``. - -You also need to specify the inviting admin user, the team, the URI for the Brig (`API `__) service (Host), and finally the input (CSV) file containing the users to invite. - -The exact format for the parameters passed to the script is `as follows `__: - -* ``-a ``: `User ID `__ in `UUID format `__ of the inviting admin. For example ``9122e5de-b4fb-40fa-99ad-1b5d7d07bae5``. -* ``-t ``: ID of the inviting team, same format. -* ``-h ``: Base URI of brig's internal endpoint. -* ``-c ``: file containing info on the invitees in format 'Email,UserName'. - -For example, one such execution of the script could look like: - -.. 
code:: sh - - sh create_team_members.sh -a 9122e5de-b4fb-40fa-99ad-1b5d7d07bae5 -t 123e4567-e89b-12d3-a456-426614174000 -h http://localhost:9999 -c users_to_invite.csv - -Note: the 'http://localhost:9999' implies you are running the 'kubectl port-forward' given at the top of this document. - -Once the script is run, invitations will be sent to each user in the file every second until all invitations have been sent. - -If you have a lot of invitations to send and this is too slow, you can speed things up by commenting out `this line `__. - - -How to obtain logs from an Android client to investigate issues -~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ - -Wire clients communicate with Wire servers (backend). - -Sometimes, to investigate server issues, you (or the Wire team) will need client information, in the form of client logs. - -In order to obtain client logs on the Android Wire client, follow this procedure: - -* Open the Wire app (client) on your Android device -* Click on the round user icon in the top left of the screen, leading to your user Profile. -* Click on "Settings" at the bottom of the screen -* Click on "Advanced" in the menu -* Check/activate "Collect usage data" -* Now go back to using your client normally, so usage data is generated. If you have been asked to follow a specific testing regime, or log a specific problem, this is the time to do so. -* Once enough usage data is generated, go back to the "Advanced" screen (User profile > Settings > Advanced) -* Click on "Create debug report" -* A menu will open allowing you to share the debug report; you can now save it or send it via email/any other means to the Wire team. - - -How to obtain logs from an iOS client to investigate issues -~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ - -Wire clients communicate with Wire servers (backend). - -Sometimes, to investigate server issues, you (or the Wire team) will need client information, in the form of client logs. 
- -In order to obtain client logs on the iOS Wire client, follow this procedure: - -* Open the Wire app (client) on your iOS device -* Click on the round user icon in the top left of the screen, leading to your user Profile. -* Click on "Settings" at the bottom of the screen -* Click on "Advanced" in the menu -* Check/activate "Collect usage data" -* Now go back to using your client normally, so usage data is generated. If you have been asked to follow a specific testing regime, or log a specific problem, this is the time to do so. -* Once enough usage data is generated, go back to the "Advanced" screen (User profile > Settings > Advanced) -* Click on "Send report to wire" -* A menu will open to share the debug report via email, allowing you to send it to the Wire team. - -How to retrieve metric values manually -~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ - -Metric values are sets of data points about services, such as status and other measures, that can be retrieved at specific endpoints, typically by a monitoring system (such as Prometheus) for monitoring, diagnosis and graphing. - -Sometimes, you will want to manually obtain the data that is normally scraped automatically by Prometheus. - -Some of the pods allow you to grab metrics by accessing their ``/i/metrics`` endpoint, in particular: - -* ``brig``: User management API -* ``cannon``: WebSockets API -* ``cargohold``: Assets storage API -* ``galley``: Conversations and Teams API -* ``gundeck``: Push Notifications API -* ``spar``: Single Sign-On (SSO) and SCIM - -For more details on the various services/pods, you can check out `this link <../../understand/overview.html?highlight=gundeck#focus-on-pods>`__. - -Before you can grab metrics from a pod, you need to find its IP address. You do this by running the following command: - -.. 
code:: sh - - d kubectl get pods -owide - -(this presumes you are already in your normal Wire environment, which you obtain by running ``source ./bin/offline-env.sh``) - -Which will give you an output that looks something like this: - -.. code:: - - demo@Ubuntu-1804-bionic-64-minimal:~/Wire-Server$ d kubectl get pods -owide - NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES - account-pages-784f9b547c-cp444 1/1 Running 0 6d23h 10.233.113.5 kubenode3 - brig-746ddc55fd-6pltz 1/1 Running 0 6d23h 10.233.110.11 kubenode2 - brig-746ddc55fd-d59dw 1/1 Running 0 6d4h 10.233.110.23 kubenode2 - brig-746ddc55fd-zp7jl 1/1 Running 0 6d23h 10.233.113.10 kubenode3 - brig-index-migrate-data-45rm7 0/1 Completed 0 6d23h 10.233.110.9 kubenode2 - cannon-0 1/1 Running 0 3h1m 10.233.119.41 kubenode1 - cannon-1 1/1 Running 0 3h1m 10.233.113.47 kubenode3 - cannon-2 1/1 Running 0 3h1m 10.233.110.51 kubenode2 - cargohold-65bff97fc6-8b9ls 1/1 Running 0 6d4h 10.233.113.20 kubenode3 - cargohold-65bff97fc6-bkx6x 1/1 Running 0 6d23h 10.233.113.4 kubenode3 - cargohold-65bff97fc6-tz8fh 1/1 Running 0 6d23h 10.233.110.5 kubenode2 - cassandra-migrations-bjsdz 0/1 Completed 0 6d23h 10.233.110.3 kubenode2 - demo-smtp-784ddf6989-vmj7t 1/1 Running 0 6d23h 10.233.113.2 kubenode3 - elasticsearch-index-create-7r8g4 0/1 Completed 0 6d23h 10.233.110.4 kubenode2 - fake-aws-sns-6c7c4b7479-wfp82 2/2 Running 0 6d4h 10.233.110.27 kubenode2 - fake-aws-sqs-59fbfbcbd4-n4c5z 2/2 Running 0 6d23h 10.233.110.2 kubenode2 - galley-7c89c44f7b-nm2rr 1/1 Running 0 6d23h 10.233.110.8 kubenode2 - galley-7c89c44f7b-tdxz4 1/1 Running 0 6d23h 10.233.113.6 kubenode3 - galley-7c89c44f7b-tr8pm 1/1 Running 0 6d4h 10.233.110.29 kubenode2 - galley-migrate-data-g66rz 0/1 Completed 0 6d23h 10.233.110.13 kubenode2 - gundeck-7fd75c7c5f-jb8xq 1/1 Running 0 6d23h 10.233.110.6 kubenode2 - gundeck-7fd75c7c5f-lbth9 1/1 Running 0 6d23h 10.233.113.8 kubenode3 - gundeck-7fd75c7c5f-wvcw6 1/1 Running 0 6d4h 10.233.113.23 kubenode3 
- nginz-5cdd8b588b-dbn86 2/2 Running 16 6d23h 10.233.113.11 kubenode3 - nginz-5cdd8b588b-gk6rw 2/2 Running 14 6d23h 10.233.110.12 kubenode2 - nginz-5cdd8b588b-jvznt 2/2 Running 11 6d4h 10.233.113.21 kubenode3 - reaper-6957694667-s5vz5 1/1 Running 0 6d4h 10.233.110.26 kubenode2 - redis-ephemeral-master-0 1/1 Running 0 6d23h 10.233.113.3 kubenode3 - spar-56d77f85f6-bw55q 1/1 Running 0 6d23h 10.233.113.9 kubenode3 - spar-56d77f85f6-mczzd 1/1 Running 0 6d4h 10.233.110.28 kubenode2 - spar-56d77f85f6-vvvfq 1/1 Running 0 6d23h 10.233.110.7 kubenode2 - spar-migrate-data-ts4sx 0/1 Completed 0 6d23h 10.233.110.14 kubenode2 - team-settings-fbbb899c-qxx7m 1/1 Running 0 6d4h 10.233.110.24 kubenode2 - webapp-d97869795-grnft 1/1 Running 0 6d4h 10.233.110.25 kubenode2 - -Here presuming we need to get metrics from ``gundeck``, we can see the IP of one of the gundeck pods is ``10.233.110.6``. - -We can therefore connect to node ``kubenode2`` on which this pod runs with ``ssh kubenode2.your-domain.com``, and run the following: - -.. code:: sh - - curl 10.233.110.6:8080/i/metrics - -Alternatively, if you don't want to, or can't for some reason, connect to kubenode2, you can use port redirect instead: - -.. code:: sh - - # Allow Gundeck to be reached via the port 7777 - kubectl --kubeconfig kubeconfig.dec -n wire port-forward service/gundeck 7777:8080 - # Reach Gundeck directly at port 7777 using curl, output resulting data to stdout/terminal - curl -v http://127.0.0.1:7777/i/metrics - -Output will look something like this (truncated): - -.. 
code:: sh - - # HELP gc_seconds_wall Wall clock time spent on last GC - # TYPE gc_seconds_wall gauge - gc_seconds_wall 5481304.0 - # HELP gc_seconds_cpu CPU time spent on last GC - # TYPE gc_seconds_cpu gauge - gc_seconds_cpu 5479828.0 - # HELP gc_bytes_used_current Number of bytes in active use as of the last GC - # TYPE gc_bytes_used_current gauge - gc_bytes_used_current 1535232.0 - # HELP gc_bytes_used_max Maximum amount of memory living on the heap after the last major GC - # TYPE gc_bytes_used_max gauge - gc_bytes_used_max 2685312.0 - # HELP gc_bytes_allocated_total Bytes allocated since the start of the server - # TYPE gc_bytes_allocated_total gauge - gc_bytes_allocated_total 4.949156056e9 - -This example is for Gundeck, but you can also get metrics for other services. All k8s services are listed at `this link <../../understand/overview.html?highlight=gundeck#focus-on-pods>`__. - -This is an example adapted for Cannon: - -.. code:: sh - - kubectl --kubeconfig kubeconfig.dec -n wire port-forward service/cannon 7777:8080 - curl -v http://127.0.0.1:7777/i/metrics - -In the output of this command, ``net_websocket_clients`` is roughly the number of connected clients. - -.. _reset session cookies: - -Reset session cookies -~~~~~~~~~~~~~~~~~~~~~ - -Remove session cookies on your system to force users to login again within the next 15 minutes (or whenever they come back online): - -.. warning:: - This will cause interruptions to ongoing calls and should be timed properly. - -Reset cookies of all users -^^^^^^^^^^^^^^^^^^^^^^^^^^ - -.. code:: sh - - ssh - # from the ssh session - cqlsh - # from the cqlsh shell - truncate brig.user_cookies; - -Reset cookies for a defined list of users -^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ - -.. 
code:: sh - - ssh - # within the ssh session - cqlsh - # within the cqlsh shell: delete all users by userId - delete from brig.user_cookies where user in (c0d64244-8ab4-11ec-8fda-37788be3a4e2, ...); - -(Keep reading if you want to find out which users on your system are using SSO.) - -.. _identify sso users: - -Identify all users using SSO -~~~~~~~~~~~~~~~~~~~~~~~~~~~~ - -Collect all teams configured with an IdP: - -.. code:: sh - - ssh - # within the ssh session start cqlsh - cqlsh - # within the cqlsh shell export all teams with idp - copy spar.idp (team) TO 'teams_with_idp.csv' with header=false; - -Close the session and proceed locally: - -.. code:: sh - - # download csv file - scp :teams_with_idp.csv . - # convert to a single line, comma separated list - tr '\n' ',' < teams_with_idp.csv; echo - -And use this list to get all team members in these teams: - -.. code:: sh - - ssh - # within the ssh session start cqlsh - cqlsh - # within the cqlsh shell select all members of previously identified teams - # should look like this: f2207d98-8ab3-11ec-b689-07fc1fd409c9, ... - select user from galley.team_member where team in (); - # alternatively, export the list of all users (for filtering locally in e.g. Excel) - copy galley.team_member (user, team, sso_id) TO 'users_with_idp.csv' with header=true; - -Close the session and proceed locally to generate the list of all users from teams with IdP: - -.. code:: sh - - # download csv file - scp :users_with_idp.csv . - # convert to a single line, comma separated list - tr '\n' ',' < users_with_idp.csv; echo - - -.. note:: - Don't forget to delete the created csv files after you have downloaded/processed them. 
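The local filtering step above can also be done without a spreadsheet. The following is a hypothetical helper (the sample rows are illustrative; only the header matches the ``copy`` command above) that reduces ``users_with_idp.csv`` to the comma-separated user list expected by, e.g., the cookie-reset query:

```shell
# Hypothetical helper: reduce users_with_idp.csv to a comma-separated list of
# user ids, e.g. for "delete from brig.user_cookies where user in (...);".
# The sample rows below are made up for illustration.
cat > users_with_idp.csv <<'EOF'
user,team,sso_id
f2207d98-8ab3-11ec-b689-07fc1fd409c9,0c14bb10-8ab4-11ec-9e21-4f42a3f08a21,idp-a
701b4eab-6df2-476d-a818-90dc93e8446e,0c14bb10-8ab4-11ec-9e21-4f42a3f08a21,idp-b
EOF
# drop the header row, keep the first column, join the lines with commas
tail -n +2 users_with_idp.csv | cut -d',' -f1 | paste -sd',' -
```

The same pipeline works for ``teams_with_idp.csv`` (without the ``tail``, since that export has no header).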
- -Create a team using the SCIM API -~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ - -If you need to create a team manually, maybe because team creation was blocked in the "teams" interface, follow this procedure: - -First download or locate this bash script: `wire-server/hack/bin/create_test_team_scim.sh ` - -Then, run it the following way: - -.. code:: sh - - ./create_test_team_scim.sh -h -s - -Where: - -* In `-h `, replace `` with the base URL for your brig host (for example: `https://brig-host.your-domain.com`, defaults to `http://localhost:8082`) -* In `-s `, replace `` with the base URL for your spar host (for example: `https://spar-host.your-domain.com`, defaults to `http://localhost:8088`) - -You might also need to edit the admin email and admin passwords at lines `48` and `49` of the script. - -To learn more about the different pods and how to identify them, see `this page`. - -You can list your pods with `kubectl get pods --namespace wire`. - -Alternatively, you can run the series of commands manually with `curl`, like this: - -.. code:: sh - - curl -i -s --show-error \ - -XPOST "$BRIG_HOST/i/users" \ - -H'Content-type: application/json' \ - -d"{\"email\":\"$ADMIN_EMAIL\",\"password\":\"$ADMIN_PASSWORD\",\"name\":\"$NAME_OF_TEAM\",\"team\":{\"name\":\"$NAME_OF_TEAM\",\"icon\":\"default\"}}" - -Where: - -* `$BRIG_HOST` is the base URL for your brig host -* `$ADMIN_EMAIL` is the email for the admin account for the new team -* `$ADMIN_PASSWORD` is the password for the admin account for the new team -* `$NAME_OF_TEAM` is the name of the team newly created - -Out of the result of this command, you will be able to extract an `Admin UUID` and a `Team UUID`, which you will need later. - -Then run: - -.. 
code:: sh - - curl -X POST \ - --header 'Content-Type: application/json' \ - --header 'Accept: application/json' \ - -d "{\"email\":\"$ADMIN_EMAIL\",\"password\":\"$ADMIN_PASSWORD\"}" \ - $BRIG_HOST/login'?persist=false' | jq -r .access_token - -Where the values to replace are the same as the command above. - -This command should output an access token; take note of it. - -Then run: - -.. code:: sh - - curl -X POST \ - --header "Authorization: Bearer $ACCESS_TOKEN" \ - --header 'Content-Type: application/json;charset=utf-8' \ - --header 'Z-User: '"$ADMIN_UUID" \ - -d '{ "description": "test '"`date`"'", "password": "'"$ADMIN_PASSWORD"'" }' \ - $SPAR_HOST/scim/auth-tokens - -Where the values to replace are the same as the first command, plus `$ACCESS_TOKEN` is the access token you just took note of in the previous command. - -Out of the JSON output of this command, you should be able to extract: - -* A SCIM token (`token` value in the JSON). -* A SCIM token ID (`id` value in the `info` value in the JSON) - -Equipped with those tokens, we move on to the next script, `wire-server/hack/bin/create_team.sh ` - -This script can be run the following way: - -.. code:: sh - - ./create_team.sh -h -o -e -p -v -t -c - -Where: - -* -h : Base URI of brig. default: `http://localhost:8080` -* -o : user display name of the owner of the team to be created. default: "owner name n/a" -* -e : email address of the owner of the team to be created. default: "owner email n/a" -* -p : owner password. default: "owner pass n/a" -* -v : validation code received by email after running the previous script/commands. default: "email code n/a" -* -t : name of the team to be created. default: "team name n/a" -* -c : currency of the team. default: "USD" - -Alternatively, you can manually run the command: - -.. 
code:: sh - - curl -i -s --show-error \ - -XPOST "$BRIG_HOST/register" \ - -H'Content-type: application/json' \ - -d"{\"name\":\"$OWNER_NAME\",\"email\":\"$OWNER_EMAIL\",\"password\":\"$OWNER_PASSWORD\",\"email_code\":\"$EMAIL_CODE\",\"team\":{\"currency\":\"$TEAM_CURRENCY\",\"icon\":\"default\",\"name\":\"$TEAM_NAME\"}}" - -Where: - -* `$BRIG_HOST` is the base URL for your brig service -* `$OWNER_NAME` is the name of the owner of the team to be created -* `$OWNER_EMAIL` is the email address of the owner of the team to be created -* `$OWNER_PASSWORD` is the password of the owner of the team to be created -* `$EMAIL_CODE` is the validation code received by email after running the previous script/command -* `$TEAM_CURRENCY` is the currency of the team -* `$TEAM_NAME` is the name of the team diff --git a/docs/src/how-to/associate/custom-backend-for-desktop-client.md b/docs/src/how-to/associate/custom-backend-for-desktop-client.md new file mode 100644 index 0000000000..ad66ca975e --- /dev/null +++ b/docs/src/how-to/associate/custom-backend-for-desktop-client.md @@ -0,0 +1,79 @@ +# How to connect the desktop application to a custom backend + +## Introduction + +This page explains how to connect the Wire desktop client to a custom backend, which can be done either via a start-up parameter or via an initialization file. + +## Prerequisites + +Install Wire either from the App Store, or download it from our website at () + +Have a running Wire backend in your infrastructure/cloud. + +Note down the full URL of the webapp served by that backend (e.g. 
) + +## Using start-up parameters + +### Windows + +- Create a shortcut to the Wire application +- Edit the shortcut ( Right click > Properties ) +- Add the following command line parameters to the shortcut: `--env {URL}`, where `{URL}` is the URL of your webapp as noted down above + +### MacOS + +To create the application: + +- Open Automator +- Click New application +- Add the "Run shell script" phase +- Type in the script panel the following command: `open -b com.wearezeta.zclient.mac --args --env {URL}`, where `{URL}` is the URL of your webapp as noted down above +- Save the application from Automator (e.g. on your desktop or in Applications) +- To run the application: just open the application you created in the first step + +### Linux + +- Open a Terminal +- Start the application with the command line arguments: `--env {URL}`, where `{URL}` is the URL of your webapp as noted down above + +## Using an initialization file + +By providing an initialization file, the instance connection parameters and/or proxy settings for the Wire desktop application can be pre-configured. This requires Wire version >= 3.27. + +Create a file named `init.json` and set `customWebAppURL` and optionally `proxyServerURL` e.g. as follows: + +```json +{ + "customWebAppURL": "https://app.custom-wire.com", + "env": "CUSTOM", + "proxyServerURL": "http://127.0.0.1:3128" +} +``` + +The `env` setting must be set to `CUSTOM` for this to work. + +```{note} +Consult your site admin to learn what goes into these settings. The value of `customWebAppURL` can be found [here](https://github.com/wireapp/wire-server/blob/e6aa50913cdcfde1200114787baabf7896394a2f/charts/webapp/templates/deployment.yaml#L40-L41) or [resp. here](https://github.com/wireapp/wire-server/blob/e6aa50913cdcfde1200114787baabf7896394a2f/charts/webapp/values.yaml#L26). The value of `proxyServerURL` is your browser proxy. It depends on the configuration of the network your client is running in. 
+``` + +### Windows + +Move the `init.json` file to `%APPDATA%\Wire\config\init.json` if it does not already exist. Otherwise update it accordingly. + +### MacOS + +Move the `init.json` file to + +``` +~/Library/Containers/com.wearezeta.zclient.mac/Data/Library/Application\ Support/Wire/config/init.json +``` + +if it does not already exist. Otherwise, update it accordingly. + +### Linux + +On Linux the `init.json` file should be located in the following directory: + +``` +$HOME/.config/Wire/config/init.json +``` diff --git a/docs/src/how-to/associate/custom-backend-for-desktop-client.rst b/docs/src/how-to/associate/custom-backend-for-desktop-client.rst deleted file mode 100644 index 6f4f768345..0000000000 --- a/docs/src/how-to/associate/custom-backend-for-desktop-client.rst +++ /dev/null @@ -1,90 +0,0 @@ -How to connect the desktop application to a custom backend -========================================================== - -Introduction ------------- - -This page explains how to connect the Wire desktop client to a custom Backend, which can be done either via a start-up parameter or via an initialization file. - -Prerequisites --------------- - -Install Wire either from the App Store, or download it from our website at (https://wire.com/en/download/) - -Have a running Wire backend in your infrastructure/cloud. - -Note down the full URL of the webapp served by that backend (e.g. 
https://app.custom-wire.com ) - -Using start-up parameters -------------------------- - -Windows -~~~~~~~ - -- Create a shortcut to the Wire application -- Edit the shortcut ( Right click > Properties ) -- Add the following command line parameters to the shortcut: `--env {URL}`, where `{URL}` is the URL of your webapp as noted down above - -MacOS -~~~~~ - -To create the application - -- Open Automator -- Click New application -- Add the "Run shell script" phase -- Type in the script panel the following command: `open -b com.wearezeta.zclient.mac --args --env {URL}`, where `{URL}` is the URL of your webapp as noted down above -- Save the application from Automator (e.g. on your desktop or in Application) -- To run the application: Just open the application you created in the first step - -Linux -~~~~~ - -- Open a Terminal -- Start the application with the command line arguments: `--env {URL}`, where `{URL}` is the URL of your webapp as noted down above - -Using an initialization file ----------------------------- - -By providing an initialization file the instance connection parameters and/or proxy settings for the Wire desktop application can be pre-configured. This requires Wire version >= 3.27. - -Create a file named ``init.json`` and set ``customWebAppURL`` and optionally ``proxyServerURL`` e.g. as follows: - -.. code-block:: json - - { - "customWebAppURL": "https://app.custom-wire.com", - "env": "CUSTOM", - "proxyServerURL": "http://127.0.0.1:3128", - } - -The ``env`` setting must be set to ``CUSTOM`` for this to work. - -.. note:: - - Consult your site admin to learn what goes into these settings. The value of ``customWebAppURL`` can be found `here `_ or `resp. here `_. The value of ``proxyServerURL`` is your browser proxy. It depends on the configuration of the network your client is running in. - -Windows -~~~~~~~ - -Move the ``init.json`` file to ``%APPDATA%\Wire\config\init.json`` if it does not already exist. Otherwise update it accordingly. 
- -MacOS -~~~~~ - -Move the ``init.json`` file to - -:: - - ~/Library/Containers/com.wearezeta.zclient.mac/Data/Library/Application\ Support/Wire/config/init.json - -if it does not already exist. Otherwise, update it accordingly. - -Linux -~~~~~ - -On Linux the ``init.json`` file should be located in the following directory: - -:: - - $HOME/.config/Wire/config/init.json diff --git a/docs/src/how-to/associate/custom-certificates.rst b/docs/src/how-to/associate/custom-certificates.md similarity index 80% rename from docs/src/how-to/associate/custom-certificates.rst rename to docs/src/how-to/associate/custom-certificates.md index 3a5c15b852..2c52391570 100644 --- a/docs/src/how-to/associate/custom-certificates.rst +++ b/docs/src/how-to/associate/custom-certificates.md @@ -1,10 +1,9 @@ -Custom root certificates -------------------------- +# Custom root certificates In case you have installed wire-server using certificates signed using a custom root CA (certificate authority) which is not trusted by default by browsers and systems, then you need to ensure Wire-clients (on Android, Desktop, iOS, and the Web) trust this root certificate. The following details the procedure for Desktop and Web on Linux/Windows: -https://thomas-leister.de/en/how-to-import-ca-root-certificate/ + For Android and iOS, if you know how to trust custom certificates, please let us know so we can update this documentation. diff --git a/docs/src/how-to/associate/deeplink.md b/docs/src/how-to/associate/deeplink.md new file mode 100644 index 0000000000..d3bfc77e27 --- /dev/null +++ b/docs/src/how-to/associate/deeplink.md @@ -0,0 +1,174 @@ +# Using a Deep Link to connect an App to a Custom Backend + +## Introduction + +Once you have your own wire-server set up and configured, you may want to use a client other than the web interface (webapp). 
There are a few ways to accomplish this: + +- **Using a Deep Link** (which this page is all about) +- Registering your backend instance with the hosted SaaS backend for re-direction, for which you might need to talk to the folks @ Wire (the company). + +Assumptions: + +- You have wire-server installed and working +- You have familiarity with JSON files +- You can place a JSON file on an HTTPS supporting web server somewhere your users can reach. + +Supported client apps: + +- iOS +- Android + +```{note} +Wire deeplinks can be used to redirect a mobile (Android, iOS) Wire app to a specific backend URL. Deeplinks have no further ability implemented at this stage. +``` + +## Connecting to a custom backend utilizing a Deep Link + +A deep link is a special link a user can click on after installing Wire, but before setting it up. This link instructs their Wire client to connect to your wire-server, rather than wire.com. + +### With Added Proxy + +In addition to connecting to a custom backend, a user can specify a SOCKS proxy, adding another network layer and routing the API calls through the proxy. + +## From a user's perspective: + +1. First, a user installs the app from the store +2. The user clicks on a deep link, which is formatted similarly to: `wire://access/?config=https://eu-north2.mycustomdomain.de/configs/backend1.json` (notice the protocol prefix: `wire://`) +3. The app will ask the user to confirm that they want to connect to a custom backend. If the user cancels, the app exits. +4. Assuming the user did not cancel, the app will download the file `eu-north2.mycustomdomain.de/configs/backend1.json` via HTTPS. If it can't download the file or the file doesn't match the expected structure, the Wire client will display an error message (*'Invalid link'*). +5. The app will memorize the various hosts (REST, websocket, team settings, website, support) specified in the JSON and use those when talking to your backend. +6. 
On the welcome page of the app, a "pill" (header) is shown at the top, to remind the user that they are now on a custom backend. A button "Show more" shows the URL of where the configuration was fetched from. + +### With Added Proxy + +In addition to the previous points: + +7. The app will remember the proxy settings (proxy host, proxy port, and whether the proxy needs authentication) +8. On the login page, the user will see a new section to add the proxy credentials if the proxy needs authentication + +## From the administrator's (your) perspective: + +You need to host two static files, then let your users know how to connect. There are three options listed (in order of recommendation) for hosting the static files. + +Note on the meaning of the URLs used below: + +`backendURL` + +: Use the backend API entrypoint URL, by convention `https://nginz-https.` + +`backendWSURL` + +: Use the backend Websocket API entrypoint URL, by convention `https://nginz-ssl.` + +`teamsURL` + +: Use the URL to the team settings part of the webapp, by convention `https://teams.` + +`accountsURL` + +: Use the URL to the account pages part of the webapp, by convention `https://account.` + +`blackListURL` + +: is used to disable old versions of Wire clients (mobile apps). It's a prefix URL to which e.g. `/ios` or `/android` is appended. Example URL for the wire.com production servers: `https://clientblacklist.wire.com/prod` and example json files: [android](https://clientblacklist.wire.com/prod/android) and [iPhone](https://clientblacklist.wire.com/prod/ios) . + +`websiteURL` + +: Is used as a basis for a few links within the app pointing to FAQs and troubleshooting pages for end users. You can leave this as `https://wire.com` or host your own alternative pages and point this to your own website with the equivalent pages references from within the app. + +`title` + +: Arbitrary string that may show up in a few places in the app. Should be used as an identifier of the backend servers in question. 
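Following the naming conventions above, the endpoint values can be derived mechanically from the backend's base domain. A small sketch (`example.com` is a placeholder for your own domain):

```shell
# Sketch: derive the conventional deeplink endpoint URLs from a base domain.
# "example.com" is a placeholder; substitute your own backend domain.
DOMAIN="example.com"
BACKEND_URL="https://nginz-https.$DOMAIN"
BACKEND_WS_URL="https://nginz-ssl.$DOMAIN"
TEAMS_URL="https://teams.$DOMAIN"
ACCOUNTS_URL="https://account.$DOMAIN"
printf '%s\n%s\n%s\n%s\n' "$BACKEND_URL" "$BACKEND_WS_URL" "$TEAMS_URL" "$ACCOUNTS_URL"
```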
+ +### With Added Proxy + +`apiProxy:host (optional)` + +: Is used to specify a proxy to be added to the network engine, so that API calls go through it, adding another security layer. + +`apiProxy:port (optional)` + +: Is used to specify the port number for the proxy when the proxy object is created in the network layer. + +`apiProxy:needsAuthentication (optional)` + +: Is used to specify whether the proxy needs authentication, so that the login screen can show a section for entering the proxy credentials. + +#### Host a deeplink together with your Wire installation + +As of release `2.117.0` from `2021-10-29` (see `release notes`), you can configure your deeplink endpoints to match your installation and DNS records (see explanations above): + +```yaml +# override values for wire-server +# (e.g. under ./helm_vars/wire-server/values.yaml) +nginz: + nginx_conf: + deeplink: + endpoints: + backendURL: "https://nginz-https.example.com" + backendWSURL: "https://nginz-ssl.example.com" + teamsURL: "https://teams.example.com" + accountsURL: "https://account.example.com" + blackListURL: "https://clientblacklist.wire.com/prod" + websiteURL: "https://wire.com" + apiProxy: # (optional) + host: "socks5.proxy.com" + port: 1080 + needsAuthentication: true + title: "My Custom Wire Backend" +``` + +(As with any configuration changes, you need to apply them following your usual way of updating configuration (e.g. 'helm upgrade...')) + +Now both static files should become accessible at the backend domain under `/deeplink.json` and `/deeplink.html`: + +- `https://nginz-https./deeplink.json` +- `https://nginz-https./deeplink.html` + +#### Host a deeplink using minio (deprecated) + +*If possible, prefer the option in the subsection above or below. 
This subsection is kept for backwards compatibility.*

**If you're using minio** installed using the ansible code from [wire-server-deploy](https://github.com/wireapp/wire-server-deploy/blob/master/ansible/), then the [minio ansible playbook](https://github.com/wireapp/wire-server-deploy/blob/master/ansible/minio.yml#L75-L88) (make sure to override these variables) creates a JSON and an HTML file in the right format, and makes them accessible at `https://assets.<domain>/public/deeplink.json` and `https://assets.<domain>/public/deeplink.html`.

#### Host a deeplink file using your own web server

Otherwise, you need to create a `.json` file and host it somewhere your users can reach. This `.json` file needs to specify the URLs of your backend. For the production Wire server that we host, the JSON would look like this:

```json
{
  "endpoints" : {
    "backendURL" : "https://prod-nginz-https.wire.com",
    "backendWSURL" : "https://prod-nginz-ssl.wire.com",
    "blackListURL" : "https://clientblacklist.wire.com/prod",
    "teamsURL" : "https://teams.wire.com",
    "accountsURL" : "https://accounts.wire.com",
    "websiteURL" : "https://wire.com"
  },
  "apiProxy" : {
    "host" : "socks5.proxy.com",
    "port" : 1080,
    "needsAuthentication" : true
  },
  "title" : "Production"
}
```

**IMPORTANT NOTE:** Clients require **ALL** keys to be present in the JSON file; if some of these keys are irrelevant to your installation (e.g., you don't have a `websiteURL`), you can leave their values as indicated in the example above.

There is no requirement for these hosts to be consistent: e.g., the REST endpoint could be `wireapp.pineapple.com` and the team settings `teams.banana.com`. If you have been following this documentation closely, these hosts will likely be consistent in naming regardless.

You now need to get a link referring to that `.json` file to your users, prepended with `wire://access/?config=`. 
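Since clients reject a config file with missing keys, it can help to sanity-check the file before publishing it. A rough sketch (the required-key list mirrors the `endpoints` object in the example JSON; `check_deeplink_config` is an illustrative helper, not a Wire tool):

```python
import json

# Endpoint keys present in the example config above; clients require all of them.
REQUIRED_ENDPOINT_KEYS = {
    "backendURL", "backendWSURL", "blackListURL",
    "teamsURL", "accountsURL", "websiteURL",
}

def check_deeplink_config(text: str) -> list:
    """Return the endpoint keys missing from a deeplink JSON document."""
    config = json.loads(text)
    endpoints = config.get("endpoints", {})
    return sorted(REQUIRED_ENDPOINT_KEYS - endpoints.keys())

sample = '{"endpoints": {"backendURL": "https://nginz-https.example.com"}, "title": "Test"}'
print(check_deeplink_config(sample))
# → ['accountsURL', 'backendWSURL', 'blackListURL', 'teamsURL', 'websiteURL']
```

An empty result means the `endpoints` object is complete; `apiProxy` is optional and deliberately not checked here.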
For example, you can save the above `.json` file as `https://example.com/wire.json`, and save the following HTML content as `https://example.com/wire.html`:

```html
<html>
  <head></head>
  <body>
    <a href="wire://access/?config=https://example.com/wire.json">link</a>
  </body>
</html>
```

## Next steps

Now, you can e.g. email or otherwise provide a link to the deeplink HTML page to your users on their mobile devices, and they can follow the above procedure, by clicking on `link`. 
diff --git a/docs/src/how-to/associate/deeplink.rst b/docs/src/how-to/associate/deeplink.rst
deleted file mode 100644
index d46d6750a3..0000000000
--- a/docs/src/how-to/associate/deeplink.rst
+++ /dev/null
@@ -1,174 +0,0 @@
-Using a Deep Link to connect an App to a Custom Backend
-=======================================================
-
-Introduction
-------------
-
-Once you have your own wire-server set up and configured, you may want to use a client other than the web interface (webapp). There are a few ways to accomplish this:
-
-- **Using a Deep Link** (which this page is all about)
-- Registering your backend instance with the hosted SaaS backend for re-direction. For which you might need to talk to the folks @ Wire (the company).
-
-Assumptions:
-
-- You have wire-server installed and working
-- You have a familiarity with JSON files
-- You can place a JSON file on an HTTPS supporting web server somewhere your users can reach.
-
-Supported client apps:
-
-- iOS
-- Android
-
-.. note::
-   Wire deeplinks can be used to redirect a mobile (Android, iOS) Wire app to a specific backend URL. Deeplinks have no further ability implemented at this stage.
-
-Connecting to a custom backend utilizing a Deep Link
-----------------------------------------------------
-
-A deep link is a special link a user can click on after installing wire, but before setting it up. This link instructs their wire client to connect to your wire-server, rather than wire.com.
- -With Added Proxy -~~~~~~~~~~~~~~~~ -In addition to connect to a custom backend a user can specify a socks proxy to add another layer to the network and make the api calls go through the proxy. - -From a user's perspective: --------------------------- - -1. First, a user installs the app from the store -2. The user clicks on a deep link, which is formatted similar to: ``wire://access/?config=https://eu-north2.mycustomdomain.de/configs/backend1.json`` (notice the protocol prefix: ``wire://``) -3. The app will ask the user to confirm that they want to connect to a custom backend. If the user cancels, the app exits. -4. Assuming the user did not cancel, the app will download the file ``eu-north2.mycustomdomain.de/configs/backend1.json`` via HTTPS. If it can't download the file or the file doesn't match the expected structure, the wire client will display an error message (*'sInvalid link'*). -5. The app will memorize the various hosts (REST, websocket, team settings, website, support) specified in the JSON and use those when talking to your backend. -6. In the welcome page of the app, a "pill" (header) is shown at the top, to remind the user that they are now on a custom backend. A button "Show more" shows the URL of where the configuration was fetched from. - -With Added Proxy -~~~~~~~~~~~~~~~~ -In addition to the previous points - -7. The app will remember the (proxy host, proxy port, if the proxy need authentication) -8. In the login page the user will see new section to add the proxy credentials if the proxy need authentication - - -From the administrator's (your) perspective: --------------------------------------------- - -You need to host two static files, then let your users know how to connect. There are three options listed (in order of recommendation) for hosting the static files. 
- -Note on the meaning of the URLs used below: - -``backendURL`` - Use the backend API entrypoint URL, by convention ``https://nginz-https.`` - -``backendWSURL`` - Use the backend Websocket API entrypoint URL, by convention ``https://nginz-ssl.`` - -``teamsURL`` - Use the URL to the team settings part of the webapp, by convention ``https://teams.`` - -``accountsURL`` - Use the URL to the account pages part of the webapp, by convention ``https://account.`` - -``blackListURL`` - is used to disable old versions of Wire clients (mobile apps). It's a prefix URL to which e.g. `/ios` or `/android` is appended. Example URL for the wire.com production servers: ``https://clientblacklist.wire.com/prod`` and example json files: `android `_ and `iPhone `_ . - -``websiteURL`` - Is used as a basis for a few links within the app pointing to FAQs and troubleshooting pages for end users. You can leave this as ``https://wire.com`` or host your own alternative pages and point this to your own website with the equivalent pages references from within the app. - -``title`` - Arbitrary string that may show up in a few places in the app. Should be used as an identifier of the backend servers in question. - -With Added Proxy -~~~~~~~~~~~~~~~~ - -``apiProxy:host (optional)`` - Is used to specify a proxy to be added to the network engine, so the API calls will go through it to add more security layer. - -``apiProxy:port (optional)`` - Is used to specify the port number for the proxy when we create the proxy object in the network layer. - -``apiProxy:needsAuthentication (optional)`` - Is used to specify if the proxy need an authentication, so we can show the section during the login to enter the proxy credentials. 
- -Host a deeplink together with your Wire installation -^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ - -As of release ``2.117.0`` from ``2021-10-29`` (see `release notes`), you can configure your deeplink endpoints to match your installation and DNS records (see explanations above) - -.. code:: yaml - - # override values for wire-server - # (e.g. under ./helm_vars/wire-server/values.yaml) - nginz: - nginx_conf: - deeplink: - endpoints: - backendURL: "https://nginz-https.example.com" - backendWSURL: "https://nginz-ssl.example.com" - teamsURL: "https://teams.example.com" - accountsURL: "https://account.example.com" - blackListURL: "https://clientblacklist.wire.com/prod" - websiteURL: "https://wire.com" - apiProxy: (optional) - host: "https://socks5.proxy.com" - port: 1080 - needsAuthentication: true - title: "My Custom Wire Backend" - -(As with any configuration changes, you need to apply them following your usual way of updating configuration (e.g. 'helm upgrade...')) - -Now both static files should become accessible at the backend domain under ``/deeplink.json`` and ``deeplink.html``: - -* ``https://nginz-https./deeplink.json`` -* ``https://nginz-https./deeplink.html`` - -Host a deeplink using minio (deprecated) -^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ - -*If possible, prefer the option in the subsection above or below. This subsection is kept for backwards compatibility.* - -**If you're using minio** installed using the ansible code from `wire-server-deploy `__, then the `minio ansible playbook `__ (make sure to override these variables) creates a json and a html file in the right format, and makes it accessible at ``https://assets./public/deeplink.json`` and at ``https://assets./public/deeplink.html`` - -Host a deeplink file using your own web server -^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ - -Otherwise you need to create a ``.json`` file, and host it somewhere users can get to. This ``.json`` file needs to specify the URLs of your backend. 
For the production wire server that we host, the JSON would look like: - -.. code:: json - - { - "endpoints" : { - "backendURL" : "https://prod-nginz-https.wire.com", - "backendWSURL" : "https://prod-nginz-ssl.wire.com", - "blackListURL" : "https://clientblacklist.wire.com/prod", - "teamsURL" : "https://teams.wire.com", - "accountsURL" : "https://accounts.wire.com", - "websiteURL" : "https://wire.com" - }, - "apiProxy" : { - "host" : "https://socks5.proxy.com", - "port" : 1080, - "needsAuthentication" : true - }, - "title" : "Production" - } - -**IMPORTANT NOTE:** Clients require **ALL** keys to be present in the JSON file; if some of these keys are irrelevant to your installation (e.g., you don't have a websiteURL) you can leave these values as indicated in the above example. - -There is no requirement for these hosts to be consistent, e.g. the REST endpoint could be `wireapp.pineapple.com` and the team setting `teams.banana.com`. If you have been following this documentation closely, these hosts will likely be consistent in naming, regardless. - -You now need to get a link referring to that ``.json`` file to your users, prepended with ``wire://access/?config=``. For example, you can save the above ``.json`` file as ``https://example.com/wire.json``, and save the following HTML content as ``https://example.com/wire.html``: - -.. code:: html - - - - - link - - - -Next steps ----------- - -Now, you can e.g. email or otherwise provide a link to the deeplink HTML page to your users on their mobile devices, and they can follow the above procedure, by clicking on ``link``. 
diff --git a/docs/src/how-to/associate/index.md b/docs/src/how-to/associate/index.md new file mode 100644 index 0000000000..3dba99c8f2 --- /dev/null +++ b/docs/src/how-to/associate/index.md @@ -0,0 +1,10 @@ +# Connecting Wire Clients + +```{toctree} +:glob: true +:maxdepth: 2 + + How to associate a wire client to a custom backend using a deep link + How to use custom root certificates with wire clients + How to use a custom backend with the desktop client +``` diff --git a/docs/src/how-to/associate/index.rst b/docs/src/how-to/associate/index.rst deleted file mode 100644 index 95c7d790f5..0000000000 --- a/docs/src/how-to/associate/index.rst +++ /dev/null @@ -1,10 +0,0 @@ -Connecting Wire Clients -======================= - -.. toctree:: - :maxdepth: 2 - :glob: - - How to associate a wire client to a custom backend using a deep link - How to use custom root certificates with wire clients - How to use a custom backend with the desktop client diff --git a/docs/src/how-to/index.rst b/docs/src/how-to/index.rst deleted file mode 100644 index 1ba77e0302..0000000000 --- a/docs/src/how-to/index.rst +++ /dev/null @@ -1,21 +0,0 @@ -Administrator's Guide -===================== - -Documentation on the installation, deployment and administration of Wire -server components. - -.. warning:: - - If you already installed Wire by using ``poetry``, please refer to the - `old version `__ of - the installation guide. - - -.. 
toctree:: - :maxdepth: 2 - :glob: - - How to install wire-server - How to verify your wire-server installation - How to administrate servers after successful installation - How to connect the public wire clients to your wire-server installation diff --git a/docs/src/how-to/install/ansible-VMs.md b/docs/src/how-to/install/ansible-VMs.md new file mode 100644 index 0000000000..2627eea5d9 --- /dev/null +++ b/docs/src/how-to/install/ansible-VMs.md @@ -0,0 +1,275 @@ +(ansible-vms)= + +# Installing kubernetes and databases on VMs with ansible + +## Introduction + +In a production environment, some parts of the wire-server +infrastructure (such as e.g. cassandra databases) are best configured +outside kubernetes. Additionally, kubernetes can be rapidly set up with +kubespray, via ansible. This section covers installing VMs with ansible. + +## Assumptions + +- A bare-metal setup (no cloud provider) +- All machines run ubuntu 18.04 +- All machines have static IP addresses +- Time on all machines is being kept in sync +- You have the following virtual machines: + +```{eval-rst} +.. include:: includes/vm-table.rst +``` + +(It's up to you how you create these machines - kvm on a bare metal +machine, VM on a cloud provider, real physical machines, etc.) + +## Preparing to run ansible + +(adding-ips-to-hostsini)= + +% TODO: section header unifications/change + +### Adding IPs to hosts.ini + +Go to your checked-out wire-server-deploy/ansible folder: + +``` +cd wire-server-deploy/ansible +``` + +Copy the example hosts file: + +``` +cp hosts.example.ini hosts.ini +``` + +- Edit the hosts.ini, setting the permanent IPs of the hosts you are + setting up wire on. +- On each of the lines declaring a database service node ( + lines in the `[all]` section beginning with cassandra, elasticsearch, + or minio) replace the `ansible_host` values (`X.X.X.X`) with the + IPs of the nodes that you can connect to via SSH. 
These are the 'internal' addresses of the machines, not what a client will be connecting to.
- On each of the lines declaring a kubernetes node (lines in the `[all]` section starting with 'kubenode'), replace the `ip` values (`Y.Y.Y.Y`) with the IPs on which you wish kubernetes to provide services to clients, and replace the `ansible_host` values (`X.X.X.X`) with the IPs of the nodes that you can connect to via SSH. If the IP you want to provide services on is the same IP that you use to connect, remove the `ip=Y.Y.Y.Y` part completely.
- On each of the lines declaring an `etcd` node (lines in the `[all]` section starting with etcd), use the same values as you used on the corresponding kubenode lines in the prior step.
- If you are deploying Restund for voice/video services, then on each of the lines declaring a `restund` node (lines in the `[all]` section beginning with restund), replace the `ansible_host` values (`X.X.X.X`) with the IPs of the nodes that you can connect to via SSH.
- Edit the minio variables in `[minio:vars]` (`prefix`, `domain` and `deeplink_title`) by replacing `example.com` with your own domain.

There are more settings in this file that we will set in later steps.

% TODO: remove this warning, and remove the hostname run from the cassandra playbook, or find another way to deal with it.

```{warning}
Some of these playbooks mess with the hostnames of their targets. You
MUST pick different hosts for playbooks that rename the host. If you
e.g. attempt to run Cassandra and k8s on the same 3 machines, the
hostnames will be overwritten by the second installation playbook,
breaking the first.

At the least, we know that the cassandra, kubernetes and restund playbooks are
guilty of hostname manipulation.
```

### Authentication

```{eval-rst}
..
include:: includes/ansible-authentication-blob.rst +``` + +## Running ansible to install software on your machines + +You can install kubernetes, cassandra, restund, etc in any order. + +```{note} +In case you only have a single network interface with public IPs but wish to protect inter-database communication, you may use the `tinc.yml` playbook to create a private network interface. In this case, ensure tinc is setup BEFORE running any other playbook. See {ref}`tinc` +``` + +### Installing kubernetes + +Kubernetes is installed via ansible. + +To install kubernetes: + +From `wire-server-deploy/ansible`: + +``` +ansible-playbook -i hosts.ini kubernetes.yml -vv +``` + +When the playbook finishes correctly (which can take up to 20 minutes), you should have a folder `artifacts` containing a file `admin.conf`. Copy this file: + +``` +mkdir -p ~/.kube +cp artifacts/admin.conf ~/.kube/config +``` + +Make sure you can reach the server: + +``` +kubectl version +``` + +should give output similar to this: + +``` +Client Version: version.Info{Major:"1", Minor:"19", GitVersion:"v1.19.7", GitCommit:"1dd5338295409edcfff11505e7bb246f0d325d15", GitTreeState:"clean", BuildDate:"2021-01-13T13:23:52Z", GoVersion:"go1.15.5", Compiler:"gc", Platform:"linux/amd64"} +Server Version: version.Info{Major:"1", Minor:"19", GitVersion:"v1.19.7", GitCommit:"1dd5338295409edcfff11505e7bb246f0d325d15", GitTreeState:"clean", BuildDate:"2021-01-13T13:15:20Z", GoVersion:"go1.15.5", Compiler:"gc", Platform:"linux/amd64"} +``` + +### Cassandra + +- If you would like to change the name of the cluster, in your + 'hosts.ini' file, in the `[cassandra:vars]` section, uncomment + the line that changes 'cassandra_clustername', and change default + to be the name you want the cluster to have. 

- If you want cassandra nodes to talk to each other on a specific
  network interface, rather than the one you use to connect via SSH,
  uncomment 'cassandra_network_interface' in the `[all:vars]` section
  of your 'hosts.ini' file and set it to the name of the ethernet
  interface cassandra nodes should use to talk to each other.
  For example:

```ini
[cassandra:vars]
# cassandra_clustername: default

[all:vars]
## set to True if using AWS
is_aws_environment = False
## Set the network interface name for cassandra to bind to if you have more than one network interface
cassandra_network_interface = eth0
```

(see
[defaults/main.yml](https://github.com/wireapp/ansible-cassandra/blob/master/defaults/main.yml)
for a full list of variables to change if necessary)

- Use ansible to deploy Cassandra:

```
ansible-playbook -i hosts.ini cassandra.yml -vv
```

### ElasticSearch

- In your 'hosts.ini' file, in the `[all:vars]` section, uncomment
  and set 'elasticsearch_network_interface' to the name of the
  interface you want elasticsearch nodes to talk to each other on.
- If you are performing an offline install, or for some other reason
  are using an APT mirror other than the default to retrieve
  elasticsearch-oss packages from, you need to specify that mirror
  by setting 'es_apt_key' and 'es_apt_url' in the `[all:vars]`
  section of your hosts.ini file.

```ini
[all:vars]
# default first interface on ubuntu on kvm:
elasticsearch_network_interface=ens3

## Set these in order to use an APT mirror other than the default.
# es_apt_key = "https:///linux/ubuntu/gpg"
# es_apt_url = "deb [trusted=yes] https:///apt bionic stable"
```

- Use ansible to deploy ElasticSearch:

```
ansible-playbook -i hosts.ini elasticsearch.yml -vv
```

### Minio

Minio is used for asset storage when you are not
running on AWS infrastructure, or feel uncomfortable storing assets
in S3 in encrypted form. 
If you are using S3 instead of Minio, skip
this step.

- In your 'hosts.ini' file, in the `[all:vars]` section, make sure
  you set 'minio_network_interface' to the name of the interface
  you want minio nodes to talk to each other on. The default from the
  playbook is not going to be correct for your machine.
- In your 'hosts.ini' file, in the `[minio:vars]` section, ensure you
  set minio_access_key and minio_secret_key.
- If you intend to use a `deep link` to configure your clients to
  talk to the backend, you need to specify your domain (and optionally
  your prefix), so that links to your deep link JSON file are generated
  correctly. By configuring these values, you fill in the blanks of
  `https://{{ prefix }}assets.{{ domain }}`. For example:

```ini
[minio:vars]
minio_access_key = "REPLACE_THIS_WITH_THE_DESIRED_SECRET_KEY"
minio_secret_key = "REPLACE_THIS_WITH_THE_DESIRED_SECRET_KEY"
# if you want to use deep links for client configuration:
#minio_deeplink_prefix = ""
#minio_deeplink_domain = "example.com"

[all:vars]
# Default first interface on ubuntu on kvm:
minio_network_interface=ens3
```

- Use ansible to deploy Minio:

```
ansible-playbook -i hosts.ini minio.yml -vv
```

### Restund

For instructions on how to install Restund, see {ref}`this page <restund>`.

### IMPORTANT checks

> After running the above playbooks, it is important to ensure that everything is set up correctly. Please have a look at the post-install checks in the section {ref}`checks`

```
ansible-playbook -i hosts.ini cassandra-verify-ntp.yml -vv
```

### Installing helm charts - prerequisites

The `helm_external.yml` playbook is used to write or update the IPs of the
database servers in the `values/<service>-external/values.yaml` files, and
thus make them available for helm and the `<service>-external` charts (e.g.
`cassandra-external`, `elasticsearch-external`, etc). 
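The values files written by `helm_external.yml` rely on the per-service `*_network_interface` settings in `hosts.ini`, so it is worth confirming they are all present before running it. A rough sketch of such a check (plain text scan; assumes the simple `key = value` layout shown in this guide, and the helper name is illustrative):

```python
# Sketch: find services with no <service>_network_interface key in [all:vars].

def missing_interface_vars(hosts_ini_text, services=("minio", "cassandra", "elasticsearch")):
    """Return the services lacking a <service>_network_interface in [all:vars]."""
    in_all_vars = False
    seen = set()
    for raw in hosts_ini_text.splitlines():
        line = raw.strip()
        if line.startswith("["):
            # Track whether we are inside the [all:vars] section.
            in_all_vars = (line == "[all:vars]")
        elif in_all_vars and not line.startswith("#") and "=" in line:
            key = line.split("=", 1)[0].strip()
            for svc in services:
                if key == f"{svc}_network_interface":
                    seen.add(svc)
    return [svc for svc in services if svc not in seen]

sample = """
[all:vars]
minio_network_interface = ens3
cassandra_network_interface = ens3
"""
print(missing_interface_vars(sample))
# → ['elasticsearch']
```

If you use `redis-external`, add `redis` to the `services` tuple as well.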

Due to limitations in the playbook, make sure that you have defined the
network interfaces for each of the database services in your hosts.ini,
even if they are running on the same interface that you connect to via SSH.
In your hosts.ini under `[all:vars]`:

```ini
[all:vars]
minio_network_interface = ...
cassandra_network_interface = ...
elasticsearch_network_interface = ...
# if you're using redis external...
redis_network_interface = ...
```

Now run the helm_external.yml playbook, to populate network values for helm:

```
ansible-playbook -i hosts.ini -vv --diff helm_external.yml
```

You can now install the helm charts.

#### Next steps for a highly available production installation

Your next step will be {ref}`helm-prod` 
diff --git a/docs/src/how-to/install/ansible-VMs.rst b/docs/src/how-to/install/ansible-VMs.rst
deleted file mode 100644
index 46c818f211..0000000000
--- a/docs/src/how-to/install/ansible-VMs.rst
+++ /dev/null
@@ -1,277 +0,0 @@
-.. _ansible_vms:
-
-Installing kubernetes and databases on VMs with ansible
-=======================================================
-
-Introduction
-------------
-
-In a production environment, some parts of the wire-server
-infrastructure (such as e.g. cassandra databases) are best configured
-outside kubernetes. Additionally, kubernetes can be rapidly set up with
-kubespray, via ansible. This section covers installing VMs with ansible.
-
-Assumptions
------------
-
-- A bare-metal setup (no cloud provider)
-- All machines run ubuntu 18.04
-- All machines have static IP addresses
-- Time on all machines is being kept in sync
-- You have the following virtual machines:
-
-.. include:: includes/vm-table.rst
-
-(It's up to you how you create these machines - kvm on a bare metal
-machine, VM on a cloud provider, real physical machines, etc.)
-
-Preparing to run ansible
-------------------------
-
-.. _adding-ips-to-hostsini:
-
-..
TODO: section header unifications/change - -Adding IPs to hosts.ini -^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ - -Go to your checked-out wire-server-deploy/ansible folder:: - - cd wire-server-deploy/ansible - -Copy the example hosts file:: - - cp hosts.example.ini hosts.ini - -- Edit the hosts.ini, setting the permanent IPs of the hosts you are - setting up wire on. -- On each of the lines declaring a database service node ( - lines in the ``[all]`` section beginning with cassandra, elasticsearch, - or minio) replace the ``ansible_host`` values (``X.X.X.X``) with the - IPs of the nodes that you can connect to via SSH. these are the - 'internal' addresses of the machines, not what a client will be - connecting to. -- On each of the lines declaring a kubernetes node (lines in the ``[all]`` - section starting with 'kubenode') replace the ``ip`` values - (``Y.Y.Y.Y``) with the IPs which you wish kubernetes to provide - services to clients on, and replace the ``ansible_host`` values - (``X.X.X.X``) with the IPs of the nodes that you can connect to via - SSH. If the IP you want to provide services on is the same IP that - you use to connect, remove the ``ip=Y.Y.Y.Y`` completely. -- On each of the lines declaring an ``etcd`` node (lines in the ``[all]`` - section starting with etcd), use the same values as you used on the - coresponding kubenode lines in the prior step. -- If you are deploying Restund for voice/video services then on each of the - lines declaring a ``restund`` node (lines in the ``[all]`` section - beginning with restund), replace the ``ansible_host`` values (``X.X.X.X``) - with the IPs of the nodes that you can connect to via SSH. -- Edit the minio variables in ``[minio:vars]`` (``prefix``, ``domain`` and ``deeplink_title``) - by replacing ``example.com`` with your own domain. - -There are more settings in this file that we will set in later steps. - -.. 
TODO: remove this warning, and remove the hostname run from the cassandra playbook, or find another way to deal with it. - -.. warning:: - - Some of these playbooks mess with the hostnames of their targets. You - MUST pick different hosts for playbooks that rename the host. If you - e.g. attempt to run Cassandra and k8s on the same 3 machines, the - hostnames will be overwritten by the second installation playbook, - breaking the first. - - At the least, we know that the cassandra, kubernetes and restund playbooks are - guilty of hostname manipulation. - -Authentication -^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ - -.. include:: includes/ansible-authentication-blob.rst - -Running ansible to install software on your machines ------------------------------------------------------ - -You can install kubernetes, cassandra, restund, etc in any order. - -.. note:: - - In case you only have a single network interface with public IPs but wish to protect inter-database communication, you may use the ``tinc.yml`` playbook to create a private network interface. In this case, ensure tinc is setup BEFORE running any other playbook. See :ref:`tinc` - -Installing kubernetes -^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ - -Kubernetes is installed via ansible. - -To install kubernetes: - -From ``wire-server-deploy/ansible``:: - - ansible-playbook -i hosts.ini kubernetes.yml -vv - -When the playbook finishes correctly (which can take up to 20 minutes), you should have a folder ``artifacts`` containing a file ``admin.conf``. 
Copy this file:: - - mkdir -p ~/.kube - cp artifacts/admin.conf ~/.kube/config - -Make sure you can reach the server:: - - kubectl version - -should give output similar to this:: - - Client Version: version.Info{Major:"1", Minor:"19", GitVersion:"v1.19.7", GitCommit:"1dd5338295409edcfff11505e7bb246f0d325d15", GitTreeState:"clean", BuildDate:"2021-01-13T13:23:52Z", GoVersion:"go1.15.5", Compiler:"gc", Platform:"linux/amd64"} - Server Version: version.Info{Major:"1", Minor:"19", GitVersion:"v1.19.7", GitCommit:"1dd5338295409edcfff11505e7bb246f0d325d15", GitTreeState:"clean", BuildDate:"2021-01-13T13:15:20Z", GoVersion:"go1.15.5", Compiler:"gc", Platform:"linux/amd64"} - -Cassandra -^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ - -- If you would like to change the name of the cluster, in your - 'hosts.ini' file, in the ``[cassandra:vars]`` section, uncomment - the line that changes 'cassandra_clustername', and change default - to be the name you want the cluster to have. -- If you want cassandra nodes to talk to each other on a specific - network interface, rather than the one you use to connect via SSH, - In your 'hosts.ini' file, in the ``[all:vars]`` section, - uncomment, and set 'cassandra_network_interface' to the name of - the ethernet interface you want cassandra nodes to talk to each - other on. For example: - -.. 
code:: ini - - [cassandra:vars] - # cassandra_clustername: default - - [all:vars] - ## set to True if using AWS - is_aws_environment = False - ## Set the network interface name for cassandra to bind to if you have more than one network interface - cassandra_network_interface = eth0 - -(see -`defaults/main.yml `__ -for a full list of variables to change if necessary) - -- Use ansible to deploy Cassandra: - -:: - - ansible-playbook -i hosts.ini cassandra.yml -vv - -ElasticSearch -^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ - -- In your 'hosts.ini' file, in the ``[all:vars]`` section, uncomment - and set 'elasticsearch_network_interface' to the name of the - interface you want elasticsearch nodes to talk to each other on. -- If you are performing an offline install, or for some other reason - are using an APT mirror other than the default to retrieve - elasticsearch-oss packages from, you need to specify that mirror - by setting 'es_apt_key' and 'es_apt_url' in the ``[all:vars]`` - section of your hosts.ini file. - -.. code:: ini - - [all:vars] - # default first interface on ubuntu on kvm: - elasticsearch_network_interface=ens3 - - ## Set these in order to use an APT mirror other than the default. - # es_apt_key = "https:///linux/ubuntu/gpg" - # es_apt_url = "deb [trusted=yes] https:///apt bionic stable" - -- Use ansible and deploy ElasticSearch: - -:: - - ansible-playbook -i hosts.ini elasticsearch.yml -vv - -Minio -^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ - -Minio is used for asset storage, in the case that you are not -running on AWS infrastructure, or feel uncomfortable storing assets -in S3 in encrypted form. If you are using S3 instead of Minio, skip -this step. - - -- In your 'hosts.ini' file, in the ``[all:vars]`` section, make sure - you set the 'minio_network_interface' to the name of the interface - you want minio nodes to talk to each other on. 
The default from the - playbook is not going to be correct for your machine. For example: -- In your 'hosts.ini' file, in the ``[minio:vars]`` section, ensure you - set minio_access_key and minio_secret key. -- If you intend to use a ``deep link`` to configure your clients to - talk to the backend, you need to specify your domain (and optionally - your prefix), so that links to your deep link json file are generated - correctly. By configuring these values, you fill in the blanks of - ``https://{{ prefix }}assets.{{ domain }}``. - -.. code:: ini - - [minio:vars] - minio_access_key = "REPLACE_THIS_WITH_THE_DESIRED_SECRET_KEY" - minio_secret_key = "REPLACE_THIS_WITH_THE_DESIRED_SECRET_KEY" - # if you want to use deep links for client configuration: - #minio_deeplink_prefix = "" - #minio_deeplink_domain = "example.com" - - [all:vars] - # Default first interface on ubuntu on kvm: - minio_network_interface=ens3 - -- Use ansible, and deploy Minio: - -:: - - ansible-playbook -i hosts.ini minio.yml -vv - -Restund -^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ - -For instructions on how to install Restund, see :ref:`this page `. - - -IMPORTANT checks -^^^^^^^^^^^^^^^^ - - After running the above playbooks, it is important to ensure that everything is setup correctly. Please have a look at the post install checks in the section :ref:`checks` - -:: - - ansible-playbook -i hosts.ini cassandra-verify-ntp.yml -vv - -Installing helm charts - prerequisites -^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ - -The ``helm_external.yml`` playbook is used to write or update the IPs of the -databases servers in the ``values/-external/values.yaml`` files, and -thus make them available for helm and the ``-external`` charts (e.g. -``cassandra-external``, ``elasticsearch-external``, etc). 
- -Due to limitations in the playbook, make sure that you have defined the -network interfaces for each of the database services in your hosts.ini, -even if they are running on the same interface that you connect to via SSH. -In your hosts.ini under ``[all:vars]``: - -.. code:: ini - - [all:vars] - minio_network_interface = ... - cassandra_network_interface = ... - elasticsearch_network_interface = ... - # if you're using redis external... - redis_network_interface = ... - - -Now run the helm_external.yml playbook, to populate network values for helm: - -:: - - ansible-playbook -i hosts.ini -vv --diff helm_external.yml - -You can now can install the helm charts. - -Next steps for high-available production installation -~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ - -Your next step will be :ref:`helm_prod` diff --git a/docs/src/how-to/install/ansible-authentication.md b/docs/src/how-to/install/ansible-authentication.md new file mode 100644 index 0000000000..9943d7514b --- /dev/null +++ b/docs/src/how-to/install/ansible-authentication.md @@ -0,0 +1,63 @@ +(ansible-authentication)= + +# Manage ansible authentication settings + +Ansible works best if + +- you use ssh keys, not passwords +- the user you use to ssh is either `root` or can become `root` (can run `sudo su -`) without entering a password + +However, other options are possible, see below: + +## How to use password authentication when you ssh to a machine with ansible + +If, instead of using ssh keys to ssh to a remote machine, you want to use passwords: + +``` +sudo apt install sshpass +``` + +- in hosts.ini, uncomment the 'ansible_user = ...' line, and change '...' to the user you want to login as. +- in hosts.ini, uncomment the 'ansible_ssh_pass = ...' line, and change '...' to the password for the user you are logging in as. +- in hosts.ini, uncomment the 'ansible_become_pass = ...' line, and change the ... to the password you'd enter to sudo. 
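After uncommenting, the three settings from the bullets above end up in `hosts.ini` looking roughly like this; the user name and passwords below are placeholders. The sketch just writes the fragment to a scratch file so the shape is visible:

```shell
# Sketch of the three hosts.ini authentication settings (values are placeholders).
tmp=$(mktemp)
cat > "$tmp" <<'EOF'
[all:vars]
ansible_user = admin
ansible_ssh_pass = PLACEHOLDER_SSH_PASSWORD
ansible_become_pass = PLACEHOLDER_SUDO_PASSWORD
EOF
# Two placeholder secrets were written: the ssh password and the sudo password.
grep -c 'PLACEHOLDER' "$tmp"
```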
+ +## Configuring SSH keys + +(from https://linoxide.com/how-tos/ssh-login-with-public-key/) If you +want a bit higher security, you can copy SSH keys between the machine +you are administering with, and the machines you are managing with +ansible. + +- Create an SSH key. + +``` +ssh-keygen -t rsa +``` + +- Install your SSH key on each of the machines you are managing with + ansible, so that you can SSH into them without a password: + +``` +ssh-copy-id -i ~/.ssh/id_rsa.pub $USERNAME@$IP +``` + +Replace `$USERNAME` with the username of the account you set up when +you installed the machine. + +## Sudo without password + +Ansible can be configured to use a password for switching from the +unprivileged \$USERNAME to the root user. This involves keeping the +password lying around, so it has security problems. If you do not want +ansible to be prompted for any administrative command (a different security +problem!): + +- As root on each of the nodes, add the following line at the end of + the /etc/sudoers file: + +``` +$USERNAME ALL=(ALL) NOPASSWD:ALL +``` + +Replace `$USERNAME` with the username of the account +you set up when you installed the machine. diff --git a/docs/src/how-to/install/ansible-authentication.rst b/docs/src/how-to/install/ansible-authentication.rst deleted file mode 100644 index 8e549fb64c..0000000000 --- a/docs/src/how-to/install/ansible-authentication.rst +++ /dev/null @@ -1,66 +0,0 @@ -..
_ansible-authentication: - -Manage ansible authentication settings -^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ - -Ansible works best if - -* you use ssh keys, not passwords -* the user you use to ssh is either ``root`` or can become ``root`` (can run ``sudo su -``) without entering a password - -However, other options are possible, see below: - - -How to use password authentication when you ssh to a machine with ansible -'''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''' - -If, instead of using ssh keys to ssh to a remote machine, you want to use passwords:: - - sudo apt install sshpass - -* in hosts.ini, uncomment the 'ansible_user = ...' line, and change '...' to the user you want to login as. -* in hosts.ini, uncomment the 'ansible_ssh_pass = ...' line, and change '...' to the password for the user you are logging in as. -* in hosts.ini, uncomment the 'ansible_become_pass = ...' line, and change the ... to the password you'd enter to sudo. - -Configuring SSH keys -'''''''''''''''''''' - -(from https://linoxide.com/how-tos/ssh-login-with-public-key/) If you -want a bit higher security, you can copy SSH keys between the machine -you are administrating with, and the machines you are managing with -ansible. - -- Create an SSH key. - -:: - - ssh-keygen -t rsa - -- Install your SSH key on each of the machines you are managing with - ansible, so that you can SSH into them without a password: - -:: - - ssh-copy-id -i ~/.ssh/id_rsa.pub $USERNAME@$IP - -Replace ``$USERNAME`` with the username of the account you set up when -you installed the machine. - -Sudo without password -''''''''''''''''''''' - -Ansible can be configured to use a password for switching from the -unpriviledged $USERNAME to the root user. This involves having the -password lying about, so has security problems. 
If you want ansible to -not be prompted for any administrative command (a different security -problem!): - -- As root on each of the nodes, add the following line at the end of - the /etc/sudoers file: - -:: - - ALL=(ALL) NOPASSWD:ALL - -Replace ```` with the username of the account -you set up when you installed the machine. diff --git a/docs/src/how-to/install/ansible-tinc.md b/docs/src/how-to/install/ansible-tinc.md new file mode 100644 index 0000000000..294c1faa99 --- /dev/null +++ b/docs/src/how-to/install/ansible-tinc.md @@ -0,0 +1,54 @@ +(tinc)= + +# tinc + +Installing [tinc mesh vpn](http://tinc-vpn.org/) is *optional and +experimental*. It allows having a private network interface `vpn0` on +the target VMs. + +```{warning} +We currently only use tinc for test clusters and have not made sure if the default settings it comes with provide adequate security to protect your data. If using tinc and the following tinc.yml playbook, make your own checks first! +``` + +```{note} +Ensure to run the tinc.yml playbook first if you use tinc, before +other playbooks. +``` + +From `wire-server-deploy/ansible`, where you created a `hosts.ini` file. + +- Add a `vpn_ip=Z.Z.Z.Z` item to each entry in the hosts file with a + (fresh) IP range if you wish to use tinc. +- Add a group `vpn`: + +```ini +# this is a minimal example +[all] +server1 ansible_host=X.X.X.X vpn_ip=10.10.1.XXX +server2 ansible_host=X.X.X.X vpn_ip=10.10.1.YYY + +[cassandra] +server1 +server2 + +[vpn:children] +cassandra +# add other server groups here as necessary +``` + +Also ensure subsequent playbooks make use of the newly-created interface by setting: + +```ini +[all:vars] +minio_network_interface = vpn0 +cassandra_network_interface = vpn0 +elasticsearch_network_interface = vpn0 +redis_network_interface = vpn0 +``` + +Configure the physical network interface inside tinc.yml if it is not +`eth0`. 
Then: + +``` +ansible-playbook -i hosts.ini tinc.yml -vv +``` diff --git a/docs/src/how-to/install/ansible-tinc.rst b/docs/src/how-to/install/ansible-tinc.rst deleted file mode 100644 index ca5698b7ab..0000000000 --- a/docs/src/how-to/install/ansible-tinc.rst +++ /dev/null @@ -1,54 +0,0 @@ -.. _tinc: - -tinc ----- - -Installing `tinc mesh vpn `__ is *optional and -experimental*. It allows having a private network interface ``vpn0`` on -the target VMs. - -.. warning:: - We currently only use tinc for test clusters and have not made sure if the default settings it comes with provide adequate security to protect your data. If using tinc and the following tinc.yml playbook, make your own checks first! - -.. note:: - - Ensure to run the tinc.yml playbook first if you use tinc, before - other playbooks. - -From ``wire-server-deploy/ansible``, where you created a `hosts.ini` file. - -- Add a ``vpn_ip=Z.Z.Z.Z`` item to each entry in the hosts file with a - (fresh) IP range if you wish to use tinc. -- Add a group ``vpn``: - -.. code:: ini - - # this is a minimal example - [all] - server1 ansible_host=X.X.X.X vpn_ip=10.10.1.XXX - server2 ansible_host=X.X.X.X vpn_ip=10.10.1.YYY - - [cassandra] - server1 - server2 - - [vpn:children] - cassandra - # add other server groups here as necessary - -Also ensure subsequent playbooks make use of the newly-created interface by setting: - -.. code:: ini - - [all:vars] - minio_network_interface = vpn0 - cassandra_network_interface = vpn0 - elasticsearch_network_interface = vpn0 - redis_network_interface = vpn0 - -Configure the physical network interface inside tinc.yml if it is not -``eth0``. 
Then: - -:: - - ansible-playbook -i hosts.ini tinc.yml -vv diff --git a/docs/src/how-to/install/aws-prod.md b/docs/src/how-to/install/aws-prod.md new file mode 100644 index 0000000000..0359d98d71 --- /dev/null +++ b/docs/src/how-to/install/aws-prod.md @@ -0,0 +1,36 @@ +(aws-prod)= + +# Configuring AWS and wire-server (production) components + +## Introduction + +The following procedures are for configuring wire-server on top of AWS. They are not required to use wire-server in AWS, but they may be a good idea, depending on the AWS features you are comfortable using. + +## Using real AWS services for SNS + +AWS SNS is required to send notification events to clients via [FCM](https://firebase.google.com/docs/cloud-messaging/)/[APNS](https://developer.apple.com/notifications/). These notification channels are usable only for clients that are connected from the public internet. Using these vendor-provided communication channels allows client devices (phones) running a wire client to save a considerable amount of battery life, compared to the websockets approach. + +For details on how to set up SNS in cooperation with us (We - Wire - will proxy push notifications through Amazon for you), see {ref}`push-sns`. + +## Using real AWS services for SES / SQS + +AWS SES and SQS are used for delivering emails to clients, and for receiving notifications of bounced emails. SQS is also used internally, in order to facilitate batch user deletion. + +FIXME: detail this step. + +## Using real AWS services for S3 + +S3-style services are used by cargohold to store encrypted files that users are sharing amongst each other, profile pics, etc. + +Defining S3 services: +Create an S3 bucket in the region you are hosting your wire servers in. For example terraform code, see <https://github.com/wireapp/wire-server-deploy/tree/develop/terraform/modules/aws-cargohold-asset-storage>. + +The S3 bucket you create should have its contents downloadable from the internet, as clients get the content directly from S3, rather than having to talk through the wire backend.
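One way to make bucket contents publicly downloadable is a bucket policy granting anonymous `s3:GetObject`. The following is a sketch only: the bucket name is a placeholder, and you should review such a policy against your own security requirements before applying it.

```shell
# Generate a public-read bucket policy (bucket name is a placeholder).
bucket="my-wire-assets"
policy=$(mktemp)
cat > "$policy" <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::${bucket}/*"
    }
  ]
}
EOF
# Applying it would be:
#   aws s3api put-bucket-policy --bucket "$bucket" --policy "file://$policy"
grep -c 's3:GetObject' "$policy"
```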
+ +Using S3 services: + +There are three values in the `cargohold.config.aws` section of your 'values.yaml' that you need to provide while deploying wire-server: + +- s3Bucket: the name of the S3 bucket you have created. +- s3Endpoint: the S3 service endpoint cargohold should talk to, to place files in the S3 bucket. On AWS, this takes the form of: `https://.s3-.amazonaws.com`. +- s3DownloadEndpoint: The URL base that clients should use to get contents from the S3 bucket. On AWS, this takes the form of: `https://s3..amazonaws.com`. diff --git a/docs/src/how-to/install/aws-prod.rst b/docs/src/how-to/install/aws-prod.rst deleted file mode 100644 index 0cf147bc20..0000000000 --- a/docs/src/how-to/install/aws-prod.rst +++ /dev/null @@ -1,39 +0,0 @@ -.. _aws_prod: - -Configuring AWS and wire-server (production) components -======================================================= - -Introduction ------------- - -The following procedures are for configuring wire-server on top of AWS. They are not required to use wire-server in AWS, but they may be a good idea, depending on the AWS features you are comfortable using. - -Using real AWS services for SNS --------------------------------------------------------- -AWS SNS is required to send notification events to clients via `FCM `__/`APNS `__ . These notification channels are useable only for clients that are connected from the public internet. Using these vendor provided communication channels allows client devices (phones) running a wire client to save a considerable amount of battery life, compared to the websockets approach. - -For details on how to set up SNS in cooperation with us (We - Wire - will proxy push notifications through Amazon for you), see :ref:`pushsns`. - -Using real AWS services for SES / SQS ---------------------------------------------- -AWS SES and SQS are used for delivering emails to clients, and for receiving notifications of bounced emails. 
SQS is also used internally, in order to facilitate batch user deletion. - -FIXME: detail this step. - -Using real AWS services for S3 ------------------------------- -S3-style services are used by cargohold to store encrypted files that users are sharing amongst each other, profile pics, etc. - -Defining S3 services: -Create an S3 bucket in the region you are hosting your wire servers in. For example terraform code, see: https://github.com/wireapp/wire-server-deploy/tree/develop/terraform/modules/aws-cargohold-asset-storage - -The S3 bucket you create should have it's contents downloadable from the internet, as clients get the content directly from S3, rather than having to talk through the wire backend. - -Using S3 services: - -There are three values in the ``cargohold.config.aws`` section of your 'values.yaml' that you need to provide while deploying wire-server: - -* s3Bucket: the name of the S3 bucket you have created. -* s3Endpoint: the S3 service endpoint cargohold should talk to, to place files in the S3 bucket. On AWS, this takes the form of: ``https://.s3-.amazonaws.com``. -* s3DownloadEndpoint: The URL base that clients should use to get contents from the S3 bucket. On AWS, this takes the form of: ``https://s3..amazonaws.com``. - diff --git a/docs/src/how-to/install/configuration-options.rst b/docs/src/how-to/install/configuration-options.rst deleted file mode 100644 index 94a6136cc6..0000000000 --- a/docs/src/how-to/install/configuration-options.rst +++ /dev/null @@ -1,1060 +0,0 @@ -.. _configuration_options: - -Part 3 - configuration options in a production setup -==================================================================== - -This contains instructions to configure specific aspects of your production setup depending on your needs. - -Depending on your use-case and requirements, you may need to -configure none, or only a subset of the following sections. 
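The `noProxyList` option configured in the next section behaves much like the conventional `no_proxy` environment variable: hosts on the list are contacted directly instead of via the proxy. A minimal illustration of that matching logic (an analogy only; wire-server's own matching may differ in detail):

```shell
# no_proxy-style exclusion: internal service names bypass the proxy.
no_proxy="localhost,127.0.0.1,cassandra-external,elasticsearch-external"
host="cassandra-external"
case ",$no_proxy," in
  *",$host,"*) echo "direct" ;;    # host is on the exclusion list
  *)           echo "via proxy" ;;
esac
```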
- -Redirect some traffic through a http(s) proxy ---------------------------------------------- - -In case you wish to use http(s) proxies, you can add a configuration like this to the wire-server services in question: - -Assuming your proxy can be reached from within Kubernetes at ``http://proxy:8080``, add the following for each affected service (e.g. ``gundeck``) to your Helm overrides in ``values/wire-server/values.yaml`` : - -.. code:: yaml - - gundeck: - # ... - config: - # ... - proxy: - httpProxy: "http://proxy:8080" - httpsProxy: "http://proxy:8080" - noProxyList: - - "localhost" - - "127.0.0.1" - - "10.0.0.0/8" - - "elasticsearch-external" - - "cassandra-external" - - "redis-ephemeral" - - "fake-aws-sqs" - - "fake-aws-dynamodb" - - "fake-aws-sns" - - "brig" - - "cargohold" - - "galley" - - "gundeck" - - "proxy" - - "spar" - - "federator" - - "cannon" - - "cannon-0.cannon.default" - - "cannon-1.cannon.default" - - "cannon-2.cannon.default" - -Depending on your setup, you may need to repeat this for the other services like ``brig`` as well. - -.. _pushsns: - -Enable push notifications using the public appstore / playstore mobile Wire clients ------------------------------------------------------------------------------------ - -1. You need to get in touch with us. Please talk to sales or customer support - see https://wire.com -2. If a contract agreement has been reached, we can set up a separate AWS account for you containing the necessary AWS SQS/SNS setup to route push notifications through to the mobile apps. We will then forward some configuration / access credentials that looks like: - -.. code:: yaml - - gundeck: - config: - aws: - account: "" - arnEnv: "" - queueName: "-gundeck-events" - region: "" - snsEndpoint: "https://sns..amazonaws.com" - sqsEndpoint: "https://sqs..amazonaws.com" - secrets: - awsKeyId: "" - awsSecretKey: "" - -To make use of those, first test the credentials are correct, e.g. 
using the ``aws`` command-line tool (for more information on how to configure credentials, please refer to the `official docs `__): - -.. code:: - - AWS_REGION= - AWS_ACCESS_KEY_ID=<...> - AWS_SECRET_ACCESS_KEY=<...> - ENV= #e.g staging - - aws sqs get-queue-url --queue-name "$ENV-gundeck-events" - -You should get a result like this: - -.. code:: - - { - "QueueUrl": "https://.queue.amazonaws.com//-gundeck-events" - } - -Then add them to your gundeck configuration overrides. - -Keys below ``gundeck.config`` belong into ``values/wire-server/values.yaml``: - -.. code:: yaml - - gundeck: - # ... - config: - aws: - queueName: # e.g. staging-gundeck-events - account: # , e.g. 123456789 - region: # e.g. eu-central-1 - snsEndpoint: # e.g. https://sns.eu-central-1.amazonaws.com - sqsEndpoint: # e.g. https://sqs.eu-central-1.amazonaws.com - arnEnv: # e.g. staging - this must match the environment name (first part of queueName) - -Keys below ``gundeck.secrets`` belong into ``values/wire-server/secrets.yaml``: - -.. code:: yaml - - gundeck: - # ... - secrets: - awsKeyId: CHANGE-ME - awsSecretKey: CHANGE-ME - - -After making this change and applying it to gundeck (ensure gundeck pods have restarted to make use of the updated configuration - that should happen automatically), make sure to reset the push token on any mobile devices that you may have in use. - -Next, if you want, you can stop using the `fake-aws-sns` pods in case you ran them before: - -.. code:: yaml - - # inside override values/fake-aws/values.yaml - fake-aws-sns: - enabled: false - -Controlling the speed of websocket draining during cannon pod replacement -------------------------------------------------------------------------- - -The 'cannon' component is responsible for persistent websocket connections. 
-Normally the default options would slowly and gracefully drain active websocket -connections over a maximum of ``(amount of cannon replicas * 30 seconds)`` during -the deployment of a new wire-server version. This will lead to a very brief -interruption for Wire clients when their client has to re-connect on the -websocket. - -You're not expected to need to change these settings. - -The following options are only relevant during the restart of cannon itself. -During a restart of nginz or ingress-controller, all websockets will get -severed. If this is to be avoided, see section :ref:`separate-websocket-traffic` - -``drainOpts``: Drain websockets in a controlled fashion when cannon receives a -SIGTERM or SIGINT (this happens when a pod is terminated e.g. during rollout -of a new version). Instead of waiting for connections to close on their own, -the websockets are now severed at a controlled pace. This allows for quicker -rollouts of new versions. - -There is no way to entirely disable this behaviour, two extreme examples below - -* the quickest way to kill cannon is to set ``gracePeriodSeconds: 1`` and - ``minBatchSize: 100000`` which would sever all connections immediately; but it's - not recommended as you could DDoS yourself by forcing all active clients to - reconnect at the same time. With this, cannon pod replacement takes only 1 - second per pod. -* the slowest way to roll out a new version of cannon without severing websocket - connections for a long time is to set ``minBatchSize: 1``, - ``millisecondsBetweenBatches: 86400000`` and ``gracePeriodSeconds: 86400`` - which would lead to one single websocket connection being closed immediately, - and all others only after 1 day. With this, cannon pod replacement takes a - full day per pod. - -.. 
code:: yaml - - # overrides for wire-server/values.yaml - cannon: - drainOpts: - # The following defaults drain a minimum of 400 connections/second - # for a total of 10000 over 25 seconds - # (if cannon holds more connections, draining will happen at a faster pace) - gracePeriodSeconds: 25 - millisecondsBetweenBatches: 50 - minBatchSize: 20 - - -Control nginz upstreams (routes) into the Kubernetes cluster ------------------------------------------------------------- - -Open unterminated upstreams (routes) into the Kubernetes cluster are a potential -security issue. To prevent this, there are fine-grained settings in the nginz -configuration defining which upstreams should exist. - -Default upstreams -^^^^^^^^^^^^^^^^^ - -Upstreams for services that exist in (almost) every Wire installation are -enabled by default. These are: - -- ``brig`` -- ``cannon`` -- ``cargohold`` -- ``galley`` -- ``gundeck`` -- ``spar`` - -For special setups (as e.g. described in separate-websocket-traffic_) the -upstreams of these services can be ignored (disabled) with the setting -``nginz.nginx_conf.ignored_upstreams``. - -The most common example is to disable the upstream of ``cannon``: - -.. code:: yaml - - nginz: - nginx_conf: - ignored_upstreams: ["cannon"] - - -Optional upstreams -^^^^^^^^^^^^^^^^^^ - -There are some services that are usually not deployed on most Wire installations -or are specific to the Wire cloud: - -- ``ibis`` -- ``galeb`` -- ``calling-test`` -- ``proxy`` - -The upstreams for those are disabled by default and can be enabled by the -setting ``nginz.nginx_conf.enabled_extra_upstreams``. - -The most common example is to enable the (extra) upstream of ``proxy``: - -.. code:: yaml - - nginz: - nginx_conf: - enabled_extra_upstreams: ["proxy"] - - -Combining default and extra upstream configurations -^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ - -Default and extra upstream configurations are independent of each other. I.e. 
-``nginz.nginx_conf.ignored_upstreams`` and -``nginz.nginx_conf.enabled_extra_upstreams`` can be combined in the same -configuration: - -.. code:: yaml - - nginz: - nginx_conf: - ignored_upstreams: ["cannon"] - enabled_extra_upstreams: ["proxy"] - - -.. _separate-websocket-traffic: - -Separate incoming websocket network traffic from the rest of the https traffic -------------------------------------------------------------------------------- - -By default, incoming network traffic for websockets comes through these network -hops: - -Internet -> LoadBalancer -> kube-proxy -> nginx-ingress-controller -> nginz -> cannon - -In order to have graceful draining of websockets when something gets restarted, as it is not easily -possible to implement the graceful draining on nginx-ingress-controller or nginz by itself, there is -a configuration option to get the following network hops: - -Internet -> separate LoadBalancer for cannon only -> kube-proxy -> [nginz->cannon (2 containers in the same pod)] - -.. code:: yaml - - # example on AWS when using cert-manager for TLS certificates and external-dns for DNS records - # (see wire-server/charts/cannon/values.yaml for more possible options) - - # in your wire-server/values.yaml overrides: - cannon: - service: - nginz: - enabled: true - hostname: "nginz-ssl.example.com" - externalDNS: - enabled: true - certManager: - enabled: true - annotations: - service.beta.kubernetes.io/aws-load-balancer-type: "nlb" - service.beta.kubernetes.io/aws-load-balancer-scheme: "internet-facing" - nginz: - nginx_conf: - ignored_upstreams: ["cannon"] - -.. code:: yaml - - # in your wire-server/secrets.yaml overrides: - cannon: - secrets: - nginz: - zAuth: - publicKeys: ... # same values as in nginz.secrets.zAuth.publicKeys - -.. 
code:: yaml - - # in your nginx-ingress-services/values.yaml overrides: - websockets: - enabled: false - - -Blocking creation of personal users, new teams ----------------------------------------------- - -In Brig -~~~~~~~ - -There are some unauthenticated end-points that allow arbitrary users on the open internet to do things like create a new team. This is desired in the cloud, but if you run an on-prem setup that is open to the world, you may want to block this. - -Brig has a server option for this: - -.. code:: yaml - - optSettings: - setRestrictUserCreation: true - -If `setRestrictUserCreation` is `true`, creating new personal users or new teams on your instance from outside your backend installation is impossible. (If you want to be more technical: requests to `/register` that create a new personal account or a new team are answered with `403 forbidden`.) - -On instances with restricted user creation, the site operator with access to the internal REST API can still circumvent the restriction: just log into a brig service pod via ssh and follow the steps in `hack/bin/create_test_team_admins.sh.` - -.. note:: - Once the creation of new users and teams has been disabled, it will still be possible to use the `team creation process `__ (enter the new team name, email, password, etc), but it will fail/refuse creation late in the creation process (after the «Create team» button is clicked). - -In the WebApp -~~~~~~~~~~~~~ - -Another way of disabling user registration is by this webapp setting, in `values.yaml`, changing this value from `true` to `false`: - -.. code:: yaml - - FEATURE_ENABLE_ACCOUNT_REGISTRATION: "false" - -.. note:: - If you only disable the creation of users in the webapp, but do not do so in Brig/the backend, a malicious user would be able to use the API to create users, so make sure to disable both. 
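Putting the two toggles from this section together, the overrides might look as follows. The exact nesting of `setRestrictUserCreation` under the brig chart is an assumption here, so verify it against the chart's `values.yaml` before using:

```shell
# Combined sketch: backend and webapp registration toggles in one overrides file
# (nesting under brig.config is assumed, not confirmed by this document).
overrides=$(mktemp)
cat > "$overrides" <<'EOF'
brig:
  config:
    optSettings:
      setRestrictUserCreation: true
webapp:
  envVars:
    FEATURE_ENABLE_ACCOUNT_REGISTRATION: "false"
EOF
# Both toggles must be present; disabling only one leaves a gap.
grep -Ec 'setRestrictUserCreation|FEATURE_ENABLE_ACCOUNT_REGISTRATION' "$overrides"
```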
- -You may want ------------- - -- more server resources to ensure - `high-availability <#persistence-and-high-availability>`__ -- an email/SMTP server to send out registration emails -- depending on your required functionality, you may or may not need an - `AWS account `__. See details about - limitations without an AWS account in the following sections. -- one or more people able to maintain the installation -- official support by Wire (`contact us `__) - -.. warning:: - - As of 2020-08-10, the documentation sections below are partially out of date and need to be updated. - -Metrics/logging ---------------- - -* :ref:`monitoring` -* :ref:`logging` - -SMTP server ------------ - -**Assumptions**: none - -**Provides**: - -- full control over email sending - -**You need**: - -- SMTP credentials (to allow for email sending; prerequisite for - registering users and running the smoketest) - -**How to configure**: - -- *if using a gmail account, ensure to enable* `'less secure - apps' `__ -- Add user, SMTP server, connection type to ``values/wire-server``'s - values file under ``brig.config.smtp`` -- Add password in ``secrets/wire-server``'s secrets file under - ``brig.secrets.smtpPassword`` - -Load balancer on bare metal servers ------------------------------------ - -**Assumptions**: - -- You installed kubernetes on bare metal servers or virtual machines - that can bind to a public IP address. 
-- **If you are using AWS or another cloud provider, see**\ `Creating a - cloudprovider-based load - balancer <#load-balancer-on-cloud-provider>`__\ **instead** - -**Provides**: - -- Allows using a provided Load balancer for incoming traffic -- SSL termination is done on the ingress controller -- You can access your wire-server backend with given DNS names, over - SSL and from anywhere in the internet - -**You need**: - -- A kubernetes node with a *public* IP address (or internal, if you do - not plan to expose the Wire backend over the Internet but we will - assume you are using a public IP address) -- DNS records for the different exposed addresses (the ingress depends - on the usage of virtual hosts), namely: - - - ``nginz-https.`` - - ``nginz-ssl.`` - - ``assets.`` - - ``webapp.`` - - ``account.`` - - ``teams.`` - -- A wildcard certificate for the different hosts (``*.``) - we - assume you want to do SSL termination on the ingress controller - -**Caveats**: - -- Note that there can be only a *single* load balancer, otherwise your - cluster might become - `unstable `__ - -**How to configure**: - -:: - - cp values/metallb/demo-values.example.yaml values/metallb/demo-values.yaml - cp values/nginx-ingress-services/demo-values.example.yaml values/nginx-ingress-services/demo-values.yaml - cp values/nginx-ingress-services/demo-secrets.example.yaml values/nginx-ingress-services/demo-secrets.yaml - -- Adapt ``values/metallb/demo-values.yaml`` to provide a list of public - IP address CIDRs that your kubernetes nodes can bind to. -- Adapt ``values/nginx-ingress-services/demo-values.yaml`` with correct URLs -- Put your TLS cert and key into - ``values/nginx-ingress-services/demo-secrets.yaml``. - -Install ``metallb`` (for more information see the -`docs `__): - -.. 
code:: sh - - helm upgrade --install --namespace metallb-system metallb wire/metallb \ - -f values/metallb/demo-values.yaml \ - --wait --timeout 1800 - -Install ``nginx-ingress-[controller,services]``: - -:: - helm upgrade --install --namespace demo demo-nginx-ingress-controller wire/nginx-ingress-controller \ - --wait - - helm upgrade --install --namespace demo demo-nginx-ingress-services wire/nginx-ingress-services \ - -f values/nginx-ingress-services/demo-values.yaml \ - -f values/nginx-ingress-services/demo-secrets.yaml \ - --wait - -Now, create DNS records for the URLs configured above. - - -Load Balancer on cloud-provider -------------------------------- - -AWS -~~~ - -`Upload the required -certificates `__. -Create and configure ``values/aws-ingress/demo-values.yaml`` from the -examples. - -:: - - helm upgrade --install --namespace demo demo-aws-ingress wire/aws-ingress \ - -f values/aws-ingress/demo-values.yaml \ - --wait - -To give your load balancers public DNS names, create and edit -``values/external-dns/demo-values.yaml``, then run -`external-dns `__: - -:: - - helm repo update - helm upgrade --install --namespace demo demo-external-dns stable/external-dns \ - --version 1.7.3 \ - -f values/external-dns/demo-values.yaml \ - --wait - -Things to note about external-dns: - -- There can only be a single external-dns chart installed (one per - kubernetes cluster, not one per namespace). So if you already have - one running for another namespace you probably don't need to do - anything. -- You have to add the appropriate IAM permissions to your cluster (see - the - `README `__). -- Alternatively, use the AWS route53 console. - -Other cloud providers -~~~~~~~~~~~~~~~~~~~~~ - -This information is not yet available. If you'd like to contribute by -adding this information for your cloud provider, feel free to read the -`contributing guidelines `__ and open a PR. 
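For the external-dns step above, a minimal `values/external-dns/demo-values.yaml` could look like the sketch below. The domain and owner id are placeholders, and the key names follow common external-dns chart conventions; verify them against the chart version you actually install.

```shell
# Hypothetical external-dns values file (keys assumed from common chart versions).
vals=$(mktemp)
cat > "$vals" <<'EOF'
provider: aws
domainFilters:
  - example.com
policy: sync
txtOwnerId: demo-wire
EOF
# domainFilters restricts which zones external-dns is allowed to manage.
grep -c 'example\.com' "$vals"
```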
- -Real AWS services ------------------ - -**Assumptions**: - -- You installed kubernetes and wire-server on AWS - -**Provides**: - -- Better availability guarantees and possibly better functionality of - AWS services such as SQS and dynamoDB. -- You can use ELBs in front of nginz for higher availability. -- instead of using a smtp server and connect with SMTP, you may use - SES. See configuration of brig and the ``useSES`` toggle. - -**You need**: - -- An AWS account - -**How to configure**: - -- Instead of using fake-aws charts, you need to set up the respective - services in your account, create queues, tables etc. Have a look at - the fake-aws-\* charts; you'll need to replicate a similar setup. - - - Once real AWS resources are created, adapt the configuration in - the values and secrets files for wire-server to use real endpoints - and real AWS keys. Look for comments including - ``if using real AWS``. - -- Creating AWS resources in a way that is easy to create and delete - could be done using either `terraform `__ - or `pulumi `__. If you'd like to contribute by - creating such automation, feel free to read the `contributing - guidelines `__ and open a PR. - -Persistence and high-availability ---------------------------------- - -Currently, due to the way kubernetes and cassandra -`interact `__, -cassandra cannot reliably be installed on kubernetes. Some people have -tried, e.g. `this -project `__ though at -the time of writing (Nov 2018), this does not yet work as advertised. We -recommend therefore to install cassandra, (possibly also elasticsearch -and redis) separately, i.e. outside of kubernetes (using 3 nodes each). 
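The "3 nodes each" recommendation for Cassandra follows from quorum arithmetic: with a replication factor of 3, a QUORUM read or write needs two replicas, so a single node can fail without losing availability. A quick check of that arithmetic:

```shell
# floor(rf/2) + 1 replicas are needed for a QUORUM read/write.
rf=3
quorum=$(( rf / 2 + 1 ))
tolerated=$(( rf - quorum ))
echo "quorum=$quorum tolerated_failures=$tolerated"
```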
-
-For further higher-availability:
-
-- scale your kubernetes cluster to have separate etcd and master nodes
-  (3 nodes each)
-- use 3 instead of 1 replica of each wire-server chart
-
-Security
---------
-
-For a production deployment, you should, as a minimum:
-
-- Ensure traffic between kubernetes nodes, etcd and databases is
-  confined to a private network
-- Ensure the kubernetes API is unreachable from the public internet (e.g.
-  put behind a VPN/bastion host or restrict the IP range) to prevent
-  `kubernetes
-  vulnerabilities `__
-  from affecting you
-- Ensure your operating systems get security updates automatically
-- Restrict ssh access / harden the sshd configuration
-- Ensure no pods with public access other than the main ingress are
-  deployed on your cluster, since, in the current setup, pods have
-  access to etcd values (and thus any secrets stored there, including
-  secrets from other pods)
-- Ensure developers encrypt any secrets.yaml files
-
-Additionally, you may wish to build, sign, and host your own docker
-images to have increased confidence in those images. We have "signed
-container images" on our roadmap.
-
-Sign up with a phone number (Sending SMS)
------------------------------------------
-
-**Provides**:
-
-- Registering accounts with a phone number
-
-**You need**:
-
-- a `Nexmo `__ account
-- a `Twilio `__ account
-
-**How to configure**:
-
-See the ``brig`` chart for configuration.
-
-.. _3rd-party-proxying:
-
-3rd-party proxying
-------------------
-
-You need Giphy/Google/Spotify/Soundcloud API keys if you want to
-support previews by proxying these services.
-
-See the ``proxy`` chart for configuration.
-
-Routing traffic to other namespaces via nginz
----------------------------------------------
-
-You may have some components running in namespaces other than the one
-nginz runs in. For instance, the billing service (``ibis``) could be
-deployed to a separate namespace, say ``integrations``, while still
-needing to receive traffic via
-``nginz``.
When this is needed, the helm config can be adjusted like this:
-
-.. code:: yaml
-
-    # in your wire-server/values.yaml overrides:
-    nginz:
-      nginx_conf:
-        upstream_namespace:
-          ibis: integrations
-
-Marking an installation as self-hosted
---------------------------------------
-
-If your Wire installation is self-hosted (on-premise or demo installs),
-it needs to be made aware of that through a configuration option. As of
-release chart 4.15.0, `"true"` is the default behavior, and nothing
-needs to be done.
-
-If that option is not set, team-settings will prompt users about "wire
-for free" and associated functions.
-
-With that option set, all payment related functionality is disabled.
-
-The option is `IS_SELF_HOSTED`, and you set it in your `values.yaml`
-file (originally a copy of `prod-values.example.yaml` found in
-`wire-server-deploy/values/wire-server/`).
-
-In case of a demo install, replace `prod` with `demo`.
-
-First set the option under the `team-settings` section, `envVars`
-sub-section:
-
-.. code:: yaml
-
-    # NOTE: Only relevant if you want team-settings
-    team-settings:
-      envVars:
-        IS_SELF_HOSTED: "true"
-
-Second, also set the option under the `account-pages` section:
-
-.. code:: yaml
-
-    # NOTE: Only relevant if you want account-pages
-    account-pages:
-      envVars:
-        IS_SELF_HOSTED: "true"
-
-.. _auth-cookie-config:
-
-Configuring authentication cookie throttling
---------------------------------------------
-
-Authentication cookies and the related throttling mechanism are
-described in the *Client API documentation*:
-:ref:`login-cookies`
-
-The maximum number of cookies per account and type is defined by the
-brig option ``setUserCookieLimit``. Its default is ``32``.
-
-Throttling is configured by the brig option ``setUserCookieThrottle``.
-It is an object that contains two fields:
-
-``stdDev``
-    The minimal standard deviation of cookie creation timestamps in
-    seconds.
(Default: ``3000``,
-    `Wikipedia: Standard deviation `_)
-
-``retryAfter``
-    Wait time in seconds when ``stdDev`` is violated. (Default: ``86400``)
-
-The default values are fine for most use cases. (Generally, you don't
-have to configure them for your installation.)
-
-Condensed example:
-
-
-.. code:: yaml
-
-    brig:
-      optSettings:
-        setUserCookieLimit: 32
-        setUserCookieThrottle:
-          stdDev: 3000
-          retryAfter: 86400
-
-
-Configuring searchability
--------------------------
-
-You can configure whether and how search is limited based on user
-membership in a given team.
-
-There are two types of searches, based on the direction of search:
-
-* **Inbound** searches mean that somebody is searching for you.
-  Configuring the inbound search visibility means that you (or some
-  admin) can configure whether others can find you or not.
-* **Outbound** searches mean that you are searching for somebody.
-  Configuring the outbound search visibility means that some admin can
-  configure whether you can find other users or not.
-
-There are different types of matches:
-
-* **Exact handle** search means that the user is found only if the
-  search query is exactly the user's handle (e.g. searching for `mc`
-  will find `@mc` but not `@mccaine`). This search returns zero or one
-  result.
-* **Full text** search means that the user is found if the search query
-  matches part of the user's display name or handle. (e.g.
the query `mar` will find `Marco C`, `Omar`, `@amaro`)
-
-Searching users on the same backend
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-
-Search visibility is controlled by three parameters on the backend:
-
-* A team outbound configuration flag, `TeamSearchVisibility`, with
-  possible values `SearchVisibilityStandard` and
-  `SearchVisibilityNoNameOutsideTeam`
-
-  * `SearchVisibilityStandard` means that the user can find other
-    people outside of the team, if the searched-for person's inbound
-    search settings allow it
-  * `SearchVisibilityNoNameOutsideTeam` means that the user cannot
-    find any user outside the team by full text search (but exact
-    handle search still works)
-
-* A team inbound configuration flag, `SearchVisibilityInbound`, with
-  possible values `SearchableByOwnTeam` and `SearchableByAllTeams`
-
-  * `SearchableByOwnTeam` means that the user can be found only by
-    users in their own team.
-  * `SearchableByAllTeams` means that the user can be found by users
-    in any/all teams.
-
-* A server configuration flag, `searchSameTeamOnly`, with possible
-  values true and false.
-
-  * ``Note``: For the same backend, this affects inbound and outbound
-    searches (simply because all teams will be subject to this
-    behavior)
-  * Setting this to `true` means that all teams on that backend can
-    only find users that belong to their own team
-
-These flags are set on the backend; the clients do not need to be
-aware of them.
-
-The flags influence the behavior of the search API endpoint; clients
-only need to parse the results, which are already filtered for them by
-the backend.
-
-Table of possible outcomes
-..........................
-
-+------------------------------------+---------------------------------+------------------------------------+------------------------------------------+-------------------------------------------+----------------------------------+--------------------------------------+
-| Is search-er (`uA`) in team (tA)?  | Is search-ed (`uB`) in a team?
| Backend flag `searchSameTeamOnly` | Team `tA`'s flag `TeamSearchVisibility` | Team tB's flag `SearchVisibilityInbound` | Result of exact search for `uB` | Result of full-text search for `uB` | -+====================================+=================================+====================================+==========================================+===========================================+==================================+======================================+ -| **Search within the same team** | -+------------------------------------+---------------------------------+------------------------------------+------------------------------------------+-------------------------------------------+----------------------------------+--------------------------------------+ -| Yes, `tA` | Yes, the same team `tA` | Irrelevant | Irrelevant | Irrelevant | Found | Found | -+------------------------------------+---------------------------------+------------------------------------+------------------------------------------+-------------------------------------------+----------------------------------+--------------------------------------+ -| **Outbound search unrestricted** | -+------------------------------------+---------------------------------+------------------------------------+------------------------------------------+-------------------------------------------+----------------------------------+--------------------------------------+ -| Yes, `tA` | Yes, another team tB | false | `SearchVisibilityStandard` | `SearchableByAllTeams` | Found | Found | -+------------------------------------+---------------------------------+------------------------------------+------------------------------------------+-------------------------------------------+----------------------------------+--------------------------------------+ -| Yes, `tA` | Yes, another team tB | false | `SearchVisibilityStandard` | `SearchableByOwnTeam` | Found | Not found | 
-+------------------------------------+---------------------------------+------------------------------------+------------------------------------------+-------------------------------------------+----------------------------------+--------------------------------------+ -| **Outbound search restricted** | -+------------------------------------+---------------------------------+------------------------------------+------------------------------------------+-------------------------------------------+----------------------------------+--------------------------------------+ -| Yes, `tA` | Yes, another team tB | true | Irrelevant | Irrelevant | Not found | Not found | -+------------------------------------+---------------------------------+------------------------------------+------------------------------------------+-------------------------------------------+----------------------------------+--------------------------------------+ -| Yes, `tA` | Yes, another team tB | false | `SearchVisibilityNoNameOutsideTeam` | Irrelevant | Found | Not found | -+------------------------------------+---------------------------------+------------------------------------+------------------------------------------+-------------------------------------------+----------------------------------+--------------------------------------+ -| Yes, `tA` | No | false | `SearchVisibilityNoNameOutsideTeam` | There’s no team B | Found | Not found | -+------------------------------------+---------------------------------+------------------------------------+------------------------------------------+-------------------------------------------+----------------------------------+--------------------------------------+ - - -Changing the configuration on the server -........................................ - -To change the `searchSameTeamOnly` setting on the backend, edit the `values.yaml.gotmpl` file for the wire-server chart at this nested level of the configuration: - -.. 
code:: yaml
-
-    brig:
-      # ...
-      config:
-        # ...
-        optSettings:
-          # ...
-          setSearchSameTeamOnly: true
-
-If `setSearchSameTeamOnly` is set to `true`, then `TeamSearchVisibility`
-is forced to be in the `SearchVisibilityNoNameOutsideTeam` setting for
-all teams.
-
-Changing the default configuration for all teams
-................................................
-
-If `setSearchSameTeamOnly` is set to `false` (or missing from the
-configuration), then the default value of `TeamSearchVisibility` can be
-configured at this level of the configuration of the
-`values.yaml.gotmpl` file of the wire-server chart:
-
-
-.. code:: yaml
-
-    galley:
-      #...
-      config:
-        #...
-        settings:
-          #...
-          featureFlags:
-            #...
-            teamSearchVisibility: enabled-by-default
-
-This default value applies to all teams for which no explicit
-configuration of `TeamSearchVisibility` has been set.
-
-
-Searching users on another (federated) backend
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-
-For federated search, the table above does not apply; see the
-following table.
-
-.. note::
-
-    Incoming federated searches (i.e. searches from one backend to
-    another) are always considered to be performed by a team user,
-    even if they are performed by a personal user.
-
-    This is because the incoming search request does not carry
-    information about whether the user performing the search was in a
-    team or not.
-
-    So we have to make an assumption, and we assume that they were in
-    a team.
-
-Allowing search is done at the backend configuration level by the
-sysadmin:
-
-* Outbound search restrictions (`searchSameTeamOnly`,
-  `TeamSearchVisibility`) do not apply to federated searches
-* A configuration setting `FederatedUserSearchPolicy` per federating
-  domain, with these possible values:
-
-  * `no_search` The federating backend is not allowed to search any
-    users (either by exact handle or full-text).
-  * `exact_handle_search` The federating backend may only search by
-    exact handle
-  * `full_search` The federating backend may search users by full
-    text search on display name and handle. The search results are
-    additionally affected by the `SearchVisibilityInbound` setting of
-    each team on the backend.
-* The `SearchVisibilityInbound` setting applies. Since the default
-  value for teams is `SearchableByOwnTeam`, this means that for a team
-  to be full-text searchable by users on a federating backend, both of
-  the following are required:
-
-  * `FederatedUserSearchPolicy` needs to be set to `full_search` for
-    the federating backend
-  * Any team that wants to be full-text searchable needs to be set to
-    `SearchableByAllTeams`
-
-The configuration value `FederatedUserSearchPolicy` is per federated
-domain, e.g. in the values of the wire-server chart:
-
-.. code:: yaml
-
-    brig:
-      config:
-        optSettings:
-          setFederationDomainConfigs:
-            - domain: a.example.com
-              search_policy: no_search
-            - domain: b.example.com
-              search_policy: full_search
-
-Table of possible outcomes
-..........................
-
-In the following table, user `uA` on backend A is searching for user
-`uB` on team `tB` on backend B.
-
-Any of the flags set for searching users on the same backend are
-ignored.
-
-It's worth noting that if two users are on two separate backends, they
-are also guaranteed to be on two separate teams, as teams cannot span
-backends.
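The per-domain policy described above can be sketched as a small decision function; this is an illustrative model of the documented behaviour, not the actual brig implementation:

```python
# Illustrative model only: which federated searches succeed, given
# backend B's FederatedUserSearchPolicy for the caller's domain and
# team tB's SearchVisibilityInbound setting.

def federated_search_result(policy: str, inbound: str) -> dict:
    """Return which kinds of search succeed for a remote searcher."""
    exact = policy in ("exact_handle_search", "full_search")
    full_text = policy == "full_search" and inbound == "SearchableByAllTeams"
    return {"exact_handle": exact, "full_text": full_text}

# The outcomes match the table below:
assert federated_search_result("no_search", "SearchableByAllTeams") == \
    {"exact_handle": False, "full_text": False}
assert federated_search_result("exact_handle_search", "SearchableByAllTeams") == \
    {"exact_handle": True, "full_text": False}
assert federated_search_result("full_search", "SearchableByOwnTeam") == \
    {"exact_handle": True, "full_text": False}
assert federated_search_result("full_search", "SearchableByAllTeams") == \
    {"exact_handle": True, "full_text": True}
```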
- -+-------------------------+---------------------------------------------+---------------------------------------------+----------------------------------+--------------------------------------+ -| Who is searching | Backend B flag `FederatedUserSearchPolicy` | Team `tB`'s flag `SearchVisibilityInbound` | Result of exact search for `uB` | Result of full-text search for `uB` | -+=========================+=============================================+=============================================+==================================+======================================+ -| user `uA` on backend A | `no_search` | Irrelevant | Not found | Not found | -+-------------------------+---------------------------------------------+---------------------------------------------+----------------------------------+--------------------------------------+ -| user `uA` on backend A | `exact_handle_search` | Irrelevant | Found | Not found | -+-------------------------+---------------------------------------------+---------------------------------------------+----------------------------------+--------------------------------------+ -| user `uA` on backend A | `full_search` | SearchableByOwnTeam | Found | Not found | -+-------------------------+---------------------------------------------+---------------------------------------------+----------------------------------+--------------------------------------+ -| user `uA` on backend A | `full_search` | SearchableByAllTeams | Found | Found | -+-------------------------+---------------------------------------------+---------------------------------------------+----------------------------------+--------------------------------------+ - -Changing the settings for a given team -~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ - -If you need to change searchabilility for a specific team (rather than the entire backend, as above), you need to make specific calls to the API. - -Team searchVisibility -..................... 
-
-The team flag `searchVisibility` affects outbound user searches.
-
-If it is set to `no-name-outside-team` for a team, then all users of
-that team will no longer be able to find users that are not part of
-their team when searching.
-
-This also includes finding other users by providing their exact handle.
-By default it is set to `standard`, which doesn't put any additional
-restrictions on outbound searches.
-
-The setting can be changed via the endpoint (for more details on how
-to make the API calls with `curl`, read further):
-
-.. code::
-
-    GET /teams/{tid}/search-visibility
-    -- Shows the current TeamSearchVisibility value for the given team
-
-    PUT /teams/{tid}/search-visibility
-    -- Set specific search visibility for the team
-
-    pull-down-menu "body":
-    "standard"
-    "no-name-outside-team"
-
-The team feature flag `teamSearchVisibility` determines whether the
-`searchVisibility` setting may be changed at all.
-
-The default is `disabled-by-default`.
-
-.. note::
-
-    Whenever this feature setting is disabled, the `searchVisibility`
-    value is reset to `standard`.
-
-The default setting that applies to all teams on the instance can be
-defined in the configuration:
-
-.. code:: yaml
-
-    settings:
-      featureFlags:
-        teamSearchVisibility: disabled-by-default # or enabled-by-default
-
-TeamFeature searchVisibilityInbound
-...................................
-
-The team feature flag `searchVisibilityInbound` affects whether the
-team's users are searchable by users from other teams.
-
-The default setting is `searchable-by-own-team`, which hides users
-from the search results of users from other teams.
-
-If it is set to `searchable-by-all-teams`, then users of this team may
-be included in the results of search queries by other users.
-
-.. note::
-
-    The configuration of this flag does not affect search results when
-    the search query matches the handle exactly.
-
-    If the exact handle is provided, any user on the instance can be
-    found.
-
-This team feature flag can only be toggled by site-administrators with
-direct access to the galley instance (for more details on how to make
-the API calls with `curl`, read further):
-
-.. code::
-
-    PUT /i/teams/{tid}/features/search-visibility-inbound
-
-With JSON body:
-
-.. code:: json
-
-    {"status": "enabled"}
-
-or
-
-.. code:: json
-
-    {"status": "disabled"}
-
-Where `enabled` is equivalent to `searchable-by-all-teams` and
-`disabled` is equivalent to `searchable-by-own-team`.
-
-The default setting that applies to all teams on the instance can be
-defined in the configuration:
-
-.. code:: yaml
-
-    searchVisibilityInbound:
-      defaults:
-        status: enabled # OR disabled
-
-Individual teams can overwrite the default setting with API calls as
-per above.
-
-Making the API calls
-....................
-
-To make API calls to set an explicit configuration for
-`TeamSearchVisibilityInbound` per team, you first need to know the
-Team ID, which can be found in the team settings app.
-
-It is a `UUID` with a format like this:
-`dcbedf9a-af2a-4f43-9fd5-525953a919e1`.
-
-In the following, we will use this Team ID as an example; please
-replace it with your own team ID.
-
-Next, find the name of a `galley` pod by looking at the output of
-running this command:
-
-.. code:: sh
-
-    kubectl -n wire get pods
-
-The output will look something like this:
-
-.. code::
-
-    ...
-    galley-5f4787fdc7-9l64n ...
-    galley-migrate-data-lzz5j ...
-    ...
-
-Select any of the galley pods; for example, we will use
-`galley-5f4787fdc7-9l64n`.
-
-Next, set up port-forwarding from your local machine's port `9000` to
-the galley pod's port `8080` by running:
-
-.. code:: sh
-
-    kubectl port-forward -n wire galley-5f4787fdc7-9l64n 9000:8080
-
-Keep this command running until the end of these instructions.
-
-Please run the following commands in a separate terminal, while
-keeping the terminal which establishes the port-forwarding open.
-
-To see the team's current setting, run:
-
-..
code:: sh
-
-    curl -XGET http://localhost:9000/i/teams/dcbedf9a-af2a-4f43-9fd5-525953a919e1/features/searchVisibilityInbound
-
-    # {"lockStatus":"unlocked","status":"disabled"}
-
-Where `disabled` corresponds to `SearchableByOwnTeam` and `enabled`
-corresponds to `SearchableByAllTeams`.
-
-To change the `TeamSearchVisibilityInbound` to `SearchableByAllTeams`
-for the team, run:
-
-.. code:: sh
-
-    curl -XPUT -H 'Content-Type: application/json' -d "{\"status\": \"enabled\"}" http://localhost:9000/i/teams/dcbedf9a-af2a-4f43-9fd5-525953a919e1/features/searchVisibilityInbound
-
-To change the `TeamSearchVisibilityInbound` to `SearchableByOwnTeam`
-for the team, run:
-
-.. code:: sh
-
-    curl -XPUT -H 'Content-Type: application/json' -d "{\"status\": \"disabled\"}" http://localhost:9000/i/teams/dcbedf9a-af2a-4f43-9fd5-525953a919e1/features/searchVisibilityInbound
-
-
-
-Configuring classified domains
-------------------------------
-
-As a backend administrator, if you want to control which other
-backends (identified by their domain) are "classified", change the
-following `galley` configuration in the `values.yaml.gotmpl` file of
-the wire-server chart:
-
-.. code:: yaml
-
-    galley:
-      replicaCount: 1
-      config:
-        ...
-        featureFlags:
-          ...
-          classifiedDomains:
-            status: enabled
-            config:
-              domains: ["domain-that-is-classified.link"]
-          ...
-
-This is not only a `backend` configuration, but also a `team`
-configuration/feature.
-
-This means that different combinations of configurations will have
-different results.
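The way the backend-level and team-level settings combine (detailed in the table that follows) can be sketched as a small resolution function; this is an illustrative model only, not actual galley or client code:

```python
# Rough model of how backend-level and team-level classifiedDomains
# configuration combine. Illustrative only -- not actual wire-server code.

def classified_domains_view(backend_status, backend_domains,
                            team_status=None, team_domains=None):
    """Return the effective (status, domains) a user sees, or "undefined"."""
    if backend_status == "disabled":
        # Backend disabled: the domain list is ignored; a team-level
        # "enabled" then conflicts with the backend config.
        if team_status == "enabled":
            return "undefined"
        return ("disabled", [])
    # Backend enabled: a backend-level domain list must be present.
    if team_status is None:            # team not configured: backend wins
        return ("enabled", backend_domains)
    if team_status == "disabled":      # conflicting enablement
        return "undefined"
    if not team_domains or team_domains == backend_domains:
        return ("enabled", backend_domains)
    return "undefined"                 # conflicting domain lists

assert classified_domains_view("enabled", ["domain1.example.com"]) == \
    ("enabled", ["domain1.example.com"])
assert classified_domains_view("disabled", []) == ("disabled", [])
assert classified_domains_view("enabled", ["domain1.example.com"],
                               "enabled", ["domain2.example.com"]) == "undefined"
```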
- -Here is a table to navigate the possible configurations: - -+----------------------------------+---------------------------------------------+-------------------------------+------------------------+---------------------------------+ -| Backend Config enabled/disabled | Backend Config Domains | Team Config enabled/disabled | Team Config Domains | User's view | -+==================================+=============================================+===============================+========================+=================================+ -| Enabled | [domain1.example.com] | Not configured | Not configured | Enabled, [domain1.example.com] | -+----------------------------------+---------------------------------------------+-------------------------------+------------------------+---------------------------------+ -| Enabled | [domain1.example.com][domain1.example.com] | Enabled | Not configured | Enabled, [domain1.example.com] | -+----------------------------------+---------------------------------------------+-------------------------------+------------------------+---------------------------------+ -| Enabled | [domain1.example.com] | Enabled | [domain2.example.com] | Enabled, Undefined | -+----------------------------------+---------------------------------------------+-------------------------------+------------------------+---------------------------------+ -| Enabled | [domain1.example.com] | Disabled | Anything | Undefined | -+----------------------------------+---------------------------------------------+-------------------------------+------------------------+---------------------------------+ -| Disabled | Anything | Not configured | Not configured | Disabled, no domains | -+----------------------------------+---------------------------------------------+-------------------------------+------------------------+---------------------------------+ -| Disabled | Anything | Enabled | [domain2.example.com] | Undefined | 
-+----------------------------------+---------------------------------------------+-------------------------------+------------------------+---------------------------------+
-
-The table assumes the following:
-
-* When the backend level config says that this feature is enabled, it
-  is illegal not to specify domains at the backend level.
-* When the backend level config says that this feature is disabled,
-  the list of domains is ignored.
-* When the team level feature is disabled, the accompanying domains
-  are ignored.
-
diff --git a/docs/src/how-to/install/configure-federation.md b/docs/src/how-to/install/configure-federation.md
new file mode 100644
index 0000000000..69396c92b5
--- /dev/null
+++ b/docs/src/how-to/install/configure-federation.md
@@ -0,0 +1,532 @@
+(configure-federation)=
+# Configure Wire-Server for Federation
+
+See also {ref}`federation-understand`, which explains the architecture and concepts.
+
+```{note}
+Federation development is a work in progress.
+```
+
+## Summary of necessary steps to configure federation
+
+The steps needed to configure federation are as follows; they are
+detailed in the sections below:
+
+- Choose a backend domain name
+
+- DNS setup for federation (including an `SRV` record)
+
+- Generate and configure TLS certificates:
+
+  - server certificates
+  - client certificates
+  - a selection of CA certificates you trust when interacting with
+    other backends
+
+- Configure helm charts: the federator, ingress and webapp subcharts
+
+- Test that your configuration works as expected.
+
+(choose-backend-domain)=
+## Choose a Backend Domain
+
+As of the release [helm chart 0.129.0, Wire docker version 2.94.0] from
+2020-12-15, `federationDomain` is a mandatory configuration setting, which
+defines the {ref}`backend domain ` of your
+installation. Regardless of whether you want to enable federation for a backend
+or not, you must decide what its domain is going to be.
This helps in keeping
+things simpler across all components of Wire and also makes it possible
+to turn on federation in the future if required.
+
+It is highly recommended that this domain be configured as something
+controlled by the administrator/operator(s). The actual servers
+do not need to be available on this domain, but you MUST be able to set
+an SRV record for `_wire-server-federator._tcp.` that
+informs other wire-server backends where to find your actual servers.
+
+**IMPORTANT**: Once this option is set, it cannot be changed without
+breaking the experience for all users already using the
+backend.
+
+(consequences-backend-domain)=
+## Consequences of the choice of a backend domain
+
+- You need control over a specific subdomain of this backend domain
+  (to set an SRV DNS record as explained in the next section). Without
+  this control, you cannot federate with anyone.
+
+- This backend domain becomes part of the underlying identity of all
+  users on your servers.
+
+  Example: Let's say you choose `example.com` as your backend
+  domain. Your user known to you as Alice, and known on your
+  server with ID `ac41a202-2555-11ec-9341-00163e5e6c00`, will
+  become known to other servers you federate with as
+
+  ``` json
+  {
+    "user": {
+      "id": "ac41a202-2555-11ec-9341-00163e5e6c00",
+      "domain": "example.com"
+    }
+  }
+  ```
+
+- This domain is shown in the User Interface
+  alongside user information.
+
+  Example: Continuing the example above, Alice would be displayed
+  to users on other backends with the human-readable username
+  `@alice@example.com`.
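The qualified identity shown above is just a pair of user ID and backend domain; the human-readable federated handle is derived from the user's handle plus that domain. A tiny illustrative sketch (not actual client code):

```python
# Illustrative only: derive the human-readable federated handle shown
# in the UI from a user's handle and their backend domain.

def qualified_handle(handle: str, backend_domain: str) -> str:
    return f"@{handle}@{backend_domain}"

assert qualified_handle("alice", "example.com") == "@alice@example.com"
```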
+ +```{warning} +*Changing* the backend domain after existing user +activity with a client version (versions later than May/June 2021) +will lead to undefined behaviour (untested, not accounted for during +development) on some or all client platforms (Web, Android, iOS) for +those users: It is possible your clients could crash, or lose part of +their data about themselves or other users and conversations, or +otherwise exhibit unexpected behaviour. If at all possible, do not +change this backend domain. We do not intend to provide support if you +change the backend domain. +``` + + +(dns-configure-federation)= +## DNS setup for federation + +### SRV record + +One prerequisite to enable federation is an [SRV +record](https://en.wikipedia.org/wiki/SRV_record) as defined in [RFC +2782](https://datatracker.ietf.org/doc/html/rfc2782) that needs to be +set up to allow the wire-server to be discovered by other Wire backends. +See the documentation on +{ref}`discovery in federation` for +more information on the role of discovery in federation. + +The fields of the SRV record need to be populated as follows + +- `service`: `wire-server-federator` +- `proto`: `tcp` +- `name`: \ +- `TTL`: e.g. 600 (10 minutes) in an initial phase. This can be set to + a higher value (e.g. 86400) if your systems are stable and DNS + records don\'t change a lot. +- `priority`: anything. A good default value would be 0 +- `weight`: \>0 for your server to be reachable. A good default value + could be 10 +- `port`: `443` +- `target`: the infrastructure domain + +To give an example, assuming + +- your federation + {ref}`Backend Domain ` is `example.com` +- your domains for other services already set up follow the convention + `.wire.example.org` + +then your federation +{ref}`Infrastructure Domain ` +would be `federator.wire.example.org`. + +The SRV record would look as follows: + +``` bash +# _service._proto.name. ttl IN SRV priority weight port target. +_wire-server-federator._tcp.example.com. 
600 IN SRV 0 10 443 federator.wire.example.org.
+```
+
+### DNS A record for the federator
+
+Background: `federator` is the server component responsible for incoming
+and outgoing requests to other backends; incoming requests are proxied
+by the ingress component on kubernetes, as shown in
+{ref}`Federation Architecture`.
+
+As mentioned in {ref}`DNS setup for Helm`, you also need a `federator.` record, which,
+alongside your other DNS records that point to the ingress component,
+also needs to point to the IP of your ingress, i.e. the IP you want to
+provide services on.
+
+(federation-certificate-setup)=
+## Generate and configure TLS server and client certificates
+
+Are your servers on the public internet? Then you have the option of
+using TLS certificates from [Let's Encrypt](https://letsencrypt.org/).
+In that case, go to subsection (A). If your servers are not on the
+public internet, or you would like to use your own CA, go to subsection
+(B).
+
+```{admonition} Note
+
+As of January 2023, we're using the
+[hs-tls](https://hackage.haskell.org/package/tls) library for outgoing TLS
+connections to other backends, which only supports P256 for ECDSA keys.
+Therefore, we have specified a [key size of 256
+bits](https://github.com/wireapp/wire-server/blob/096c48c1f9b6b01572c737bd296dddd7cb5ddabb/charts/nginx-ingress-services/templates/certificate_federator.yaml)
+with the use of Let's Encrypt (section A below; you don't need to do
+anything further).
The key size will be visible when inspecting your +certificate as a block looking similar to the following: + + Subject Public Key Info: + Public Key Algorithm: id-ecPublicKey + Public-Key: (256 bit) + ASN1 OID: prime256v1 + NIST CURVE: P-256 + +or: + + Subject Public Key Info: + Public Key Algorithm: rsaEncryption + RSA Public-Key: (2048 bit) + +If you create your own certificates, and use ECDSA as the algorithm, +please ensure you configure a key size of 256 for the time being (There +are no restrictions to key sizes if you\'re using RSA keys, but key +sizes larger than 3000 bit are recommended). + +For details on cipher configuration, see {ref}`tls`. + +Improvements to the TLS setup are planned (TLS 1.3 support; no +restrictions on key sizes anymore), those are tracked internally under +FS-33 and FS-49 (tickets only visible to Wire employees). + +``` + + +### (A) Let\'s encrypt TLS server and client certificate generation and renewal + +The following will make use of [Let\'s +encrypt](https://letsencrypt.org/) for both server certificates (used +when someone sends a request to your `federator.`) and +client certificates (used for making outgoing requests to other +backends). + +For that, you need to have +[jetstack/cert-manager](https://github.com/jetstack/cert-manager) +installed. You can follow the helm chart installation +[here](https://cert-manager.io/docs/installation/helm/). + +Once you have cert-manager, adjust the email address below, then set the +following in the nginx-ingress-services overrides: + +``` yaml +# override values for nginx-ingress-services +# (e.g. under ./helm_vars/nginx-ingress-services/values.yaml) +tls: + useCertManager: true + +certManager: + inTestMode: false + certmasterEmail: "certificates@example.com" +``` + +``` yaml +# override values for wire-server +# (e.g. 
under ./helm_vars/wire-server/values.yaml)
+federator:
+  tls:
+    useSharedFederatorSecret: true
+```
+
+You can now skip section (B) and go to "Configure CA certificates you
+trust when interacting with other backends" below.
+
+### (B) Manual server and client certificates
+
+Use your usual method of obtaining X.509 certificates for your {ref}`federation
+infrastructure domain ` (alongside the other domains needed for a
+wire-server installation).
+
+You can use one single certificate and key for both server and client
+certificate use.
+
+```{note}
+Due to a limitation of the TLS library in use
+for federation ([hs-tls](https://github.com/vincenthz/hs-tls)), only
+some ciphers are supported. Moving to an openssl-based library is
+planned, which will provide support for a wider range of ciphers.
+```
+
+Your certificates need to have the "Server" and "Client" key usages
+listed among the X509 extensions:
+
+``` bash
+# inspect your certificate:
+openssl x509 -inform pem -noout -text < your-certificate.pem
+```
+
+``` bash
+X509v3 extensions:
+    X509v3 Key Usage: critical
+        Digital Signature, Key Encipherment
+    X509v3 Extended Key Usage:
+        TLS Web Server Authentication, TLS Web Client Authentication
+```
+
+And your {ref}`federation infrastructure domain ` (e.g.
+`federator.wire.example.com` from the running example) needs to either appear
+explicitly in the list of your SAN (Subject Alternative Name) entries:
+
+``` bash
+X509v3 Subject Alternative Name:
+    DNS:federator.wire.example.com, DNS:nginz-https.wire.example.com, ...
+```
+
+Or you need to have a wildcard certificate that includes it:
+
+``` bash
+X509v3 Subject Alternative Name: critical
+    DNS:*.wire.example.com
+```
+
+Configure the *client certificate* and *private key* inside
+wire-server/federator:
+
+``` yaml
+# override values for wire-server
+# (e.g. under ./helm_vars/wire-server/values.yaml or helm_vars/wire-server/secrets.yaml)
+federator:
+  clientCertificateContents: |
+    -----BEGIN CERTIFICATE-----
+    ..... 
+    -----END CERTIFICATE-----
+  clientPrivateKeyContents: |
+    -----BEGIN RSA PRIVATE KEY-----
+    .....
+    -----END RSA PRIVATE KEY-----
+```
+
+The *server certificate* and *private key* need to be configured in
+`nginx-ingress-services`. Those are used for all of the services, not
+just the federator component. If you have installed wire-server before
+without federation, server certificates may already be configured
+*(though you probably need to create new certificates to include the
+federation infrastructure domain if you're not making use of wildcard
+certificates)*. Server certificates go here:
+
+``` yaml
+# override values for nginx-ingress-services
+# (e.g. under ./helm_vars/nginx-ingress-services/secrets.yaml)
+secrets:
+  tlsWildcardCert: |
+    -----BEGIN CERTIFICATE-----
+    ...
+    -----END CERTIFICATE-----
+
+  tlsWildcardKey: |
+    -----BEGIN RSA PRIVATE KEY-----
+    ...
+    -----END RSA PRIVATE KEY-----
+```
+
+### Configure CA certificates you trust when interacting with other backends
+
+If you want to federate with servers at `othercompany.example.com`, then
+you need to trust the CA (Certificate Authority) certificate that
+`othercompany.example.com` has used to sign its client certificates.
+
+The trusted CA certificates need to be set for both the
+nginx-ingress-services and the wire-server charts.
+
+``` yaml
+# override values for nginx-ingress-services
+# (e.g. under ./helm_vars/nginx-ingress-services/values.yaml)
+secrets:
+  tlsClientCA: |
+    -----BEGIN CERTIFICATE-----
+    ...
+    -----END CERTIFICATE-----
+    -----BEGIN CERTIFICATE-----
+    ...
+    -----END CERTIFICATE-----
+```
+
+``` yaml
+# override values for wire-server
+# (e.g. under ./helm_vars/wire-server/values.yaml)
+federator:
+  remoteCAContents: |
+    -----BEGIN CERTIFICATE-----
+    ...
+    -----END CERTIFICATE-----
+    -----BEGIN CERTIFICATE-----
+    ... 
+    -----END CERTIFICATE-----
+```
+
+### Tell parties you intend to federate with about your certificates
+
+The backends you want to federate with should add your (or Let's
+Encrypt's) CA to their store, so you should give them your CA
+certificate, or tell them to use the appropriate Let's Encrypt root
+certificate.
+
+## Configure helm charts: federator and ingress subcharts
+
+### Set your chosen backend domain
+
+Read {ref}`choose-backend-domain` again, then
+set the backend domain three times to the same value in the subcharts
+cargohold, galley and brig. You also need to set `enableFederator` to
+`true`.
+
+``` yaml
+# override values for wire-server
+# (e.g. under ./helm_vars/wire-server/values.yaml)
+galley:
+  config:
+    enableFederator: true
+    settings:
+      federationDomain: example.com # your chosen "backend domain"
+
+brig:
+  config:
+    enableFederator: true
+    optSettings:
+      setFederationDomain: example.com # your chosen "backend domain"
+
+cargohold:
+  config:
+    enableFederator: true
+    settings:
+      federationDomain: example.com # your chosen "backend domain"
+```
+
+### Configure federator process to run and allow incoming traffic
+
+For federation to work, the `federator` subchart of wire-server has to
+be enabled:
+
+``` yaml
+# override values for wire-server
+# (e.g. under ./helm_vars/wire-server/values.yaml)
+tags:
+  federator: true
+```
+
+You also need to enable ingress->federator proxying and configure the
+charts to use the DNS you configured as a target in
+{ref}`dns-configure-federation` above:
+
+``` yaml
+# override values for nginx-ingress-services
+# (e.g. 
under ./helm_vars/nginx-ingress-services/values.yaml) +federator: + enabled: true + +config: + dns: + federator: federator.wire.example.org # set this to your "infra" domain +``` + +### Configure the validation depth when handling client certificates + +By default, `verify_depth` is `1`, meaning that in order to validate an +incoming request from another backend, this backend needs to have a +client certificate that is directly (without any intermediate +certificates) signed by a CA certificate from the trust store. + +Example: If you trust a CA `root` which signs an intermediate +`intermediate-1` which in turn signs `intermediate-2` which finally +signs `leaf`, and `leaf` is used during mutual TLS when validating +incoming requests, then `verify_depth` would need to be set to `3`. + +``` yaml +# nginx-ingress-services/values.yaml +tls: + # the validation depth between a federator client certificate and tlsClientCA + verify_depth: 3 # default: 1 +``` + +(configure-federation-allow-list)= +### Configure the allow list + +By default, federation is turned off (allow list set to the empty list): + +``` yaml +# override values for wire-server +# (e.g. under ./helm_vars/wire-server/values.yaml) +federator: + config: + optSettings: + federationStrategy: + allowedDomains: [] +``` + +You can choose to federate with a specific list of allowed backends: + +``` yaml +# override values for wire-server +# (e.g. under ./helm_vars/wire-server/values.yaml) +federator: + config: + optSettings: + federationStrategy: + allowedDomains: + - example.com + - example.org +``` + +Alternatively, you can federate with everyone: + +``` yaml +# override values for wire-server +# (e.g. 
under ./helm_vars/wire-server/values.yaml)
+federator:
+  config:
+    optSettings:
+      federationStrategy:
+        allowAll: true
+```
+
+## Applying all configuration changes
+
+Depending on your installation method and on when you initially
+installed your first version of wire-server, the commands to run to
+apply all of the above configurations may vary. You want to ensure that
+you upgrade the `nginx-ingress-services` and `wire-server` helm charts
+at a minimum.
+
+## Manually test that your configurations work as expected
+
+### Manually test DNS
+
+If you use `dig` to check for SRV records, use e.g.:
+
+    dig +short SRV _wire-server-federator._tcp.wire.example.com
+
+Should yield something like:
+
+    0 10 443 federator.wire.example.com.
+
+The actual target:
+
+    dig +short federator.wire.example.com
+
+should also point to an IP address:
+
+    1.2.3.4 # of course you should get a valid IP here
+
+Ensure that the IP matches where your backend ingress runs.
+
+### Manually test certificates
+
+Refer to {ref}`how-to-see-tls-certs` and set
+DOMAIN to your
+{ref}`federation infrastructure domain `. The certificates should
+include your domain as part of the SAN (Subject Alternative Name) list
+and must not have expired.
+
+### Manually test that federation works
+
+Prerequisites:
+
+- You need two backends with federation configured and enabled.
+- They both need to have each other in the allow list.
+- They both need to trust each other's CA certificate.
+
+Create user accounts on both backends.
+
+With one user, search for the other user using the
+`@username-1@example.com` syntax in the UI search field of the webapp.
diff --git a/docs/src/how-to/install/configure-federation.rst b/docs/src/how-to/install/configure-federation.rst
deleted file mode 100644
index d9aa10d8c8..0000000000
--- a/docs/src/how-to/install/configure-federation.rst
+++ /dev/null
@@ -1,471 +0,0 @@
-.. 
_configure-federation: - -Configure Wire-Server for federation -===================================== - -Background ------------ - -Please first understand the current scope and aim of wire-server federation by reading :ref:`Understanding federation `. - -.. warning:: As of October 2021, federation implementation is still work in progress. Many features are not implemented yet, - and it should be considered "alpha": stability, and upgrade compatibility are not guaranteed. - -Summary of necessary steps to configure federation --------------------------------------------------- - -The steps needed to configure federation are as follows and they will be detailed in the sections below: - -* Choose a backend domain name -* DNS setup for federation (including an ``SRV`` record) -* Generate and configure TLS certificates: - - * server certificates - * client certificates - * a selection of CA certificates you trust when interacting with other backends - -* Configure helm charts : federator and ingress and webapp subcharts -* Test that your configurations work as expected. - -.. _choose-backend-domain: - -Choose a :ref:`Backend Domain Name` ------------------------------------------------------------- - -As of the release [helm chart 0.129.0, Wire docker version 2.94.0] from -2020-12-15, a Backend Domain (set as ``federationDomain`` in configuration) is a -mandatory configuration setting. Regardless of whether you want to enable -federation for a backend or not, you must decide what its domain is going to be. -This helps in keeping things simpler across all components of Wire and also -enables to turn on federation in the future if required. - -It is highly recommended that this domain is configured as -something that is controlled by the administrator/operator(s). 
The actual -servers do not need to be available on this domain, but you MUST be able to set -an SRV record for ``_wire-server-federator._tcp.`` that -informs other wire-server backends where to find your actual servers. - -**IMPORTANT**: Once this option is set, it cannot be changed without breaking -experience for all the users which are already using the backend. - -.. _consequences-backend-domain: - -Consequences of the choice of Backend Domain --------------------------------------------- - -* You need control over a specific subdomain of this Backend Domain (to set an - SRV DNS record as explained in the next section). Without this control, you cannot federate with anyone. - -* This Backend Domain becomes part of the underlying identify of all users on - your servers. - - * Example: Let's say you choose ``example.com`` as your Backend Domain. - Your user known to you as Alice, and known on your server with ID - ``ac41a202-2555-11ec-9341-00163e5e6c00`` will become known for other - servers you federate with as - - .. code:: json - - { - "user": { - "id": "ac41a202-2555-11ec-9341-00163e5e6c00", - "domain": "example.com" - } - } - -* As of October 2021, this domain is used in the User Interface alongside user information. - (This may or may not change in the future) - - * Example: Using the same example as above, for backends you federate with, Alice - would be displayed with the human-readable username ``@alice@example.com`` - for users on other backends. - -.. warning :: - - As of October 2021, *changing* this Backend Domain after existing user activity - with a recent version (versions later than ~May/June 2021) will lead to undefined - behaviour (untested, not accounted for during development) on some or all - client platforms (Web, Android, iOS) for those users: It is possible your - clients could crash, or lose part of their data about themselves or other - users and conversations, or otherwise exhibit unexpected behaviour. 
If at - all possible, do not change this backend domain. We do not intend to - provide support if you change the backend domain. - - -.. _dns-configure-federation: - -.. include:: ./includes/dns-federation.rst - -Generate and configure TLS server and client certificates ---------------------------------------------------------- - -Are your servers on the public internet? Then you have the option of using TLS certificates from `Let's encrypt -`__. In such a case go to subsection (A). If your servers are not on the public internet -or you would like to use your own CA, go to subsection (B). - -.. note:: - - As of Jan 2022, we're using the `hs-tls ` library for outgoing TLS connections to other backends, which only supports P256 for ECDSA keys. - Therefore, we have specified a `key size of 256 bits `__ with the use of let's encrypt (section A below, you don't need to do anything further). The key size will be visible when inspecting your certificate as a block looking similar to the following:: - - Subject Public Key Info: - Public Key Algorithm: id-ecPublicKey - Public-Key: (256 bit) - ASN1 OID: prime256v1 - NIST CURVE: P-256 - - or:: - - Subject Public Key Info: - Public Key Algorithm: rsaEncryption - RSA Public-Key: (2048 bit) - - If you create your own certificates, and use ECDSA as the algorithm, please ensure you configure a key size of 256 for the time being (There are no restrictions to key sizes if you're using RSA keys, but key sizes larger than 3000 bit are recommended). - - - For details on cipher configuration, see :ref:`tls`. - - Improvements to the TLS setup are planned (TLS 1.3 support; no restrictions on key sizes anymore), those are tracked internally under FS-33 and FS-49 (tickets only visible to Wire employees). 
- - -(A) Let's encrypt TLS server and client certificate generation and renewal -^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ - -The following will make use of `Let's encrypt `__ for both server certificates (used when -someone sends a request to your ``federator.``) and client certificates (used for making outgoing requests -to other backends). - -For that, you need to have `jetstack/cert-manager `__ installed. You can -follow the helm chart installation `here `__. - -Once you have cert-manager, adjust the email address below, then set the following in the nginx-ingress-services overrides: - -.. code:: yaml - - # override values for nginx-ingress-services - # (e.g. under ./helm_vars/nginx-ingress-services/values.yaml) - tls: - useCertManager: true - - certManager: - inTestMode: false - certmasterEmail: "certificates@example.com" - -.. code:: yaml - - # override values for wire-server - # (e.g. under ./helm_vars/wire-server/values.yaml) - federator: - tls: - useSharedFederatorSecret: true - -You can now skip section (B) and go to Configure CA certificates you trust when interacting with other backends. - -(B) Manual server and client certificates -^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ - -Use your usual method of obtaining X.509 certificates for your :ref:`federation infra domain -` (alongside the other domains needed for a wire-server installation). - -You can use one single certificate and key for both server and client certificate use. - -.. note:: - - Currently (October 2021), due to a limitation of the TLS library in use for federation (`hs-tls - `__), only some ciphers are supported. Moving to an - openssl-based library is planned, which will provide support for a wider range of ciphers. - -.. - TODO: provide a list of supported ciphers and signature algorithms. - -Your certificates need to have the "Server" and "Client" key usage listed among the X509 extensions: - -.. 
code:: bash - - # inspect your certificate: - openssl x509 -inform pem -noout -text < your-certificate.pem - -.. code:: bash - - X509v3 extensions: - X509v3 Key Usage: critical - Digital Signature, Key Encipherment - X509v3 Extended Key Usage: - TLS Web Server Authentication, TLS Web Client Authentication - -And your :ref:`federation infra domain ` (e.g. ``federator.wire.example.com`` -from the running example) needs to either figure explictly in the list of your SAN (Subject -Alternative Name): - -.. code:: bash - - X509v3 Subject Alternative Name: - DNS:federator.wire.example.com, DNS:nginz-https.wire.example.com, ... - -Or you need to have a wildcard certificate that includes it: - -.. code:: bash - - X509v3 Subject Alternative Name: critical - DNS:*.wire.example.com - -Configure the *client certificate* and *private key* inside wire-server/federator: - -.. code:: yaml - - # override values for wire-server - # (e.g. under ./helm_vars/wire-server/values.yaml or helm_vars/wire-server/secrets.yaml) - federator: - clientCertificateContents: | - -----BEGIN CERTIFICATE----- - ..... - -----END CERTIFICATE----- - clientPrivateKeyContents: | - -----BEGIN RSA PRIVATE KEY----- - ..... - -----END RSA PRIVATE KEY----- - -The *server certificate* and *private key* need to be configured in ``nginx-ingress-services``. Those are used for all -of the services, not just the federator component. If you have installed -wire-server before without federation, server certificates may already be configured *(though you probably need to create -new certificates to include the federation infra domain if you're not making use of wildcard certificates)*. Server -certificates go here: - -.. code:: yaml - - # override values for nginx-ingress-services - # (e.g. under ./helm_vars/nginx-ingress-services/secrets.yaml) - secrets: - tlsWildcardCert: | - -----BEGIN CERTIFICATE----- - ... - -----END CERTIFICATE----- - - tlsWildcardKey: | - -----BEGIN RSA PRIVATE KEY ----- - ... 
- -----END RSA PRIVATE KEY----- - - -Configure CA certificates you trust when interacting with other backends -^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ - -If you want to federate with servers at ``othercompany.example.com``, then you need to trust the CA (Certificate Authority) -certificate that ``othercompany.example.com`` has used to sign its client certificates. - -They need to be set both for the nginx-ingress-services and the wire-server chart. - -.. code:: yaml - - # override values for nginx-ingress-services - # (e.g. under ./helm_vars/nginx-ingress-services/values.yaml) - secrets: - tlsClientCA: | - -----BEGIN CERTIFICATE----- - ... - -----END CERTIFICATE----- - -----BEGIN CERTIFICATE----- - ... - -----END CERTIFICATE----- - -.. code:: yaml - - # override values for wire-server - # (e.g. under ./helm_vars/wire-server/values.yaml) - federator: - remoteCAContents: | - -----BEGIN CERTIFICATE----- - ... - -----END CERTIFICATE----- - -----BEGIN CERTIFICATE----- - ... - -----END CERTIFICATE----- - -Tell parties you intend to federate with about your certificates -^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ - -The backends you want to federate with should add your (or Let's Encrypt's) CA -to their store, so you should give them your CA certificate, or tell them to use -the appropriate Let's Encrypt root certificate. - -Configure helm charts: federator and ingress and webapp subcharts ------------------------------------------------------------------ - -Set your chosen backend domain -^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ - -Read :ref:`choose-backend-domain` again, then set the backend domain three times -to the same value in the subcharts cargohold, galley and brig. You also need to -set ``enableFederator`` to ``true``. - -.. code:: yaml - - # override values for wire-server - # (e.g. 
under ./helm_vars/wire-server/values.yaml) - galley: - config: - enableFederator: true - settings: - federationDomain: example.com # your chosen "backend domain" - - brig: - config: - enableFederator: true - optSettings: - setFederationDomain: example.com # your chosen "backend domain" - - cargohold: - config: - enableFederator: true - settings: - federationDomain: example.com # your chosen "backend domain" - -Configure the webapp to enable federation and set your chosen backend domain one more time -^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ - -.. code:: yaml - - # override values for wire-server - # (e.g. under ./helm_vars/wire-server/values.yaml) - webapp: - envVars: - FEATURE_FEDERATION_DOMAIN: "example.com" # your chosen "backend domain" - FEATURE_ENABLE_FEDERATION: "true" - -Configure federator process to run and allow incoming traffic -^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ - -For federation to work, the ``federator`` subchart of wire-server has to be enabled: - -.. code:: yaml - - # override values for wire-server - # (e.g. under ./helm_vars/wire-server/values.yaml) - tags: - federator: true - -You also need to enable ingress->federator proxying and configure the charts to use the DNS you configured as a target -in :ref:`dns-configure-federation` above - -.. code:: yaml - - # override values for nginx-ingress-services - # (e.g. 
under ./helm_vars/nginx-ingress-services/values.yaml) - federator: - enabled: true - - config: - dns: - federator: federator.wire.example.org # set this to your "infra" domain - -Configure the validation depth when handling client certificates -^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ - -By default, ``verify_depth`` is ``1``, meaning that in order to validate an incoming request from another backend, this backend needs to have a client certificate that is directly (without any intermediate certificates) signed by a CA certificate from the trust store. - -Example: If you trust a CA ``root`` which signs an intermediate ``intermediate-1`` which in turn signs ``intermediate-2`` which finally signs ``leaf``, and ``leaf`` is used during mutual TLS when validating incoming requests, then ``verify_depth`` would need to be set to ``3``. - -.. code:: yaml - - # nginx-ingress-services/values.yaml - tls: - # the validation depth between a federator client certificate and tlsClientCA - verify_depth: 3 # default: 1 - -Configure the allow list -^^^^^^^^^^^^^^^^^^^^^^^^ - -By default, federation is turned off (allow list set to the empty list): - -.. code:: yaml - - # override values for wire-server - # (e.g. under ./helm_vars/wire-server/values.yaml) - federator: - config: - optSettings: - federationStrategy: - allowedDomains: [] - -You can choose to federate with a specific list of allowed backends: - -.. code:: yaml - - # override values for wire-server - # (e.g. under ./helm_vars/wire-server/values.yaml) - federator: - config: - optSettings: - federationStrategy: - allowedDomains: - - example.com - - example.org - -Alternatively, you can federate with everyone: - -.. code:: yaml - - # override values for wire-server - # (e.g. 
under ./helm_vars/wire-server/values.yaml) - federator: - config: - optSettings: - federationStrategy: - allowAll: true - - -Applying all configuration changes ----------------------------------- - -Depending on your installation method and time you initially installed your first version of wire-server, commands to -run to apply all of the above configrations may vary. You want to ensure that you upgrade the ``nginx-ingress-services`` -and ``wire-server`` helm charts at a minimum. - -Manually test that your configurations work as expected -------------------------------------------------------- - -Manually test DNS -^^^^^^^^^^^^^^^^^ - -If you use ``dig`` to check for SRV records, use e.g.:: - - dig +short SRV _wire-server-federator._tcp.wire.example.com - -Should yield something like:: - - 0 10 443 federator.wire.example.com. - -The actual target:: - - dig +short federator.wire.example.com - -should also point to an IP address:: - - 1.2.3.4 # of course you should get a valid IP here - -Ensure that the IP matches where your backend ingress runs. - -Manually test certificates -^^^^^^^^^^^^^^^^^^^^^^^^^^ - -Refer to :ref:`how-to-see-tls-certs` and set DOMAIN to your :ref:`federation infra domain `. They -should include your domain as part of the SAN (Subject Alternative Names) and not have expired. - -Manually test that federation "works" -^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ - -Prerequisites: - -* You need two backends with federation configured and enabled. -* They both need to have each other in the allow list. -* They both need to trust each other's CA certificate. - -Create user accounts on both backends. - -With one user, search for the other user using the ``@username-1@example.com`` syntax in the UI search field of the -webapp. - -.. - FUTUREWORK - * A way to validate overall helm configuration to be consistent - * A way to test client certificates. 
diff --git a/docs/src/how-to/install/dependencies.md b/docs/src/how-to/install/dependencies.md
new file mode 100644
index 0000000000..43ad7f7d90
--- /dev/null
+++ b/docs/src/how-to/install/dependencies.md
@@ -0,0 +1,69 @@
+(dependencies)=
+
+# Dependencies on operator's machine
+
+In order to operate a wire-server installation, you'll need a bunch of software
+like Ansible, `kubectl` and Helm.
+
+Together with a matching checkout of the wire-server-deploy repository,
+containing the Ansible Roles and Playbooks, you should be good to go.
+
+Check out the repository, including its submodules:
+
+```
+git clone --branch master https://github.com/wireapp/wire-server-deploy.git
+cd wire-server-deploy
+git submodule update --init --recursive
+```
+
+We provide a container containing all needed tools for setting up and
+interacting with a wire-server cluster.
+
+Ensure you have Docker >= 20.10.14 installed, as the glibc version used is
+incompatible with older container runtimes.
+
+Your distro might ship an older version, so it is best to follow the official
+instructions on [how to install docker](https://docker.com).
+
+To bring the tools in scope, we run the container and mount the local
+`wire-server-deploy` checkout into it.
+
+Replace the container image tag with the commit id your `wire-server-deploy`
+checkout is pointing to.
+
+```
+WSD_COMMIT_ID=cdc1c84c1a10a4f5f1b77b51ee5655d0da7f9518 # set me
+WSD_CONTAINER=quay.io/wire/wire-server-deploy:$WSD_COMMIT_ID
+
+sudo docker run -it --network=host \
+  -v ${SSH_AUTH_SOCK:-nonexistent}:/ssh-agent \
+  -v $HOME/.ssh:/root/.ssh \
+  -v $PWD:/wire-server-deploy \
+  -e SSH_AUTH_SOCK=/ssh-agent \
+  $WSD_CONTAINER bash
+
+# Inside the container
+bash-4.4# ansible --version
+ansible 2.9.12
+```
+
+Once you're in there, you can move on to {ref}`installing kubernetes `. 
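Incidentally, the commit id used for the image tag above can be read off the
checkout itself. One possible way, assuming your current working directory is
the `wire-server-deploy` checkout (a convenience sketch, not part of the
official instructions):

```
# Print the full commit hash the checkout points to; use it as the image tag.
WSD_COMMIT_ID=$(git rev-parse HEAD)
echo "$WSD_COMMIT_ID"
```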
+ +## (Alternative) Installing dependencies using Direnv and Nix + +```{warning} +This is an alternative approach to the above "wrapping container" one, which you should only use if you can't get above setup to work. +``` + +1. [Install Nix](https://nixos.org/download.html) +2. [Install Direnv](https://direnv.net/docs/installation.html) +3. [Optionally install the Wire cachix cache to download binaries](https://app.cachix.org/cache/wire-server) + +Now, enabling `direnv` should install all the dependencies and add them to your `PATH`. Every time you `cd` into +the `wire-server-deploy` directory, the right dependencies will be available. + +``` +direnv allow + +ansible --version +ansible 2.9.12 +``` diff --git a/docs/src/how-to/install/dependencies.rst b/docs/src/how-to/install/dependencies.rst deleted file mode 100644 index 4c50f38d25..0000000000 --- a/docs/src/how-to/install/dependencies.rst +++ /dev/null @@ -1,74 +0,0 @@ -.. _dependencies: - -Dependencies on operator's machine --------------------------------------------------------------------- - -In order to operate a wire-server installation, you'll need a bunch of software -like Ansible, ``kubectl`` and Helm. - -Together with a matching checkout of the wire-server-deploy repository, -containing the Ansible Roles and Playbooks, you should be good to go. - -Checkout the repository, including its submodules: - -:: - - git clone --branch master https://github.com/wireapp/wire-server-deploy.git - cd wire-server-deploy - git submodule update --init --recursive - - -We provide a container containing all needed tools for setting up and -interacting with a wire-server cluster. - -Ensure you have Docker >= 20.10.14 installed, as the glibc version used is -incompatible with older container runtimes. - -Your Distro might ship an older version, so best see `how to install docker -`__. - -To bring the tools in scope, we run the container, and mount the local ``wire-server-deploy`` -checkout into it. 
- -Replace the container image tag with the commit id your ``wire-server-deploy`` -checkout is pointing to. - -:: - - WSD_COMMIT_ID=cdc1c84c1a10a4f5f1b77b51ee5655d0da7f9518 # set me - WSD_CONTAINER=quay.io/wire/wire-server-deploy:$WSD_COMMIT_ID - - sudo docker run -it --network=host \ - -v ${SSH_AUTH_SOCK:-nonexistent}:/ssh-agent \ - -v $HOME/.ssh:/root/.ssh \ - -v $PWD:/wire-server-deploy \ - -e SSH_AUTH_SOCK=/ssh-agent \ - $WSD_CONTAINER bash - - # Inside the container - bash-4.4# ansible --version - ansible 2.9.12 - -Once you're in there, you can move on to `installing kubernetes `__ - - -(Alternative) Installing dependencies using Direnv and Nix -^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ - -.. warning:: - - This is an alternative approach to the above "wrapping container" one, which you should only use if you can't get above setup to work. - -1. `Install Nix `__ -2. `Install Direnv `__ -3. `Optionally install the Wire cachix cache to download binaries `__ - -Now, enabling ``direnv`` should install all the dependencies and add them to your ``PATH``. Every time you ``cd`` into -the ``wire-server-deploy`` directory, the right dependencies will be available. - -:: - - direnv allow - - ansible --version - ansible 2.9.12 diff --git a/docs/src/how-to/install/helm-prod.md b/docs/src/how-to/install/helm-prod.md new file mode 100644 index 0000000000..29045f1818 --- /dev/null +++ b/docs/src/how-to/install/helm-prod.md @@ -0,0 +1,208 @@ +(helm-prod)= + +# Installing wire-server (production) components using Helm + +```{note} +Code in this repository should be considered *beta*. As of 2020, we do not (yet) +run our production infrastructure on Kubernetes (but plan to do so soon). +``` + +## Introduction + +The following will install a version of all the wire-server components. These instructions are for reference, and may not set up what you would consider a production environment, due to the fact that there are varying definitions of 'production ready'. 
These instructions will cover what we consider to be a useful overlap of our users' production needs. They do not cover load balancing/distributing, using multiple datacenters, federating wire, or other forms of intercontinental/interplanetary distribution of the wire service infrastructure. If you deviate from these directions and need to contact us for support, please provide the deviations you made to fit your production environment along with your support request.
+
+Some of the instructions here will present you with two options: No AWS, and with AWS. The 'No AWS' instructions will not require any AWS infrastructure, but may have a reduced feature set. The 'with AWS' instructions will assume you have completed the setup procedures in {ref}`aws-prod`.
+
+### What will be installed?
+
+- wire-server (API)
+  : - user accounts, authentication, conversations
+    - assets handling (images, files, ...)
+    - notifications over websocket
+- wire-webapp, a fully functioning web client (like `https://app.wire.com/`)
+- wire-account-pages, user account management (a few pages relating to e.g. password reset procedures)
+
+### What will not be installed?
+
+- team-settings page
+- SSO capabilities
+
+Additionally, if you opt for the 'No AWS' route, you will not get:
+
+- notifications over native push notifications via [FCM](https://firebase.google.com/docs/cloud-messaging/)/[APNS](https://developer.apple.com/notifications/)
+
+## Prerequisites
+
+You need to have access to a Kubernetes cluster running a Kubernetes version , and the `helm` local binary on your PATH.
+Your Kubernetes cluster needs to have internal DNS services, so that wire-server can find its databases.
+You need to have docker on the machine you are using to perform this installation, or a secure data path to a machine that runs docker. You will be using docker to generate security credentials for your wire installation. 
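A quick way to check part of these prerequisites is to verify that the
expected command line tools are on your PATH. A small sanity-check loop (the
tool list here is illustrative; adjust it to your setup):

```shell
# Report which of the expected command line tools are installed.
for tool in kubectl helm docker; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "found: $tool"
  else
    echo "MISSING: $tool"
  fi
done
```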
+
+- If you want calling services, you need to have
+
+  - FIXME
+
+- If you don't have a Kubernetes cluster, you have two options:
+
+  - You can get access to a managed Kubernetes cluster with the cloud provider of your choice.
+  - You can install one if you have ssh access to a set of sufficiently large virtual machines, see {ref}`ansible-kubernetes`
+
+- If you don't have `helm` yet, see [Installing helm](https://helm.sh/docs/using_helm/#installing-helm). If you followed the instructions in {ref}`dependencies`, you should have helm installed already.
+
+Type `helm version`; if everything is configured correctly, you should see a result similar to this:
+
+```
+version.BuildInfo{Version:"v3.1.1", GitCommit:"afe70585407b420d0097d07b21c47dc511525ac8", GitTreeState:"clean", GoVersion:"go1.13.8"}
+```
+
+In case `kubectl version` shows both Client and Server versions, but `helm version` does not show a Server version, you may need to run `helm init`. The exact version matters less as long as both Client and Server versions match (or are very close).
+
+## Preparing to install charts from the internet with Helm
+
+If your environment is online, you need to add the remote wire Helm repository to download wire charts.
+
+To enable the wire charts helm repository:
+
+```shell
+helm repo add wire https://s3-eu-west-1.amazonaws.com/public.wire.com/charts
+```
+
+(You can see available helm charts by running `helm search repo wire/`. To see
+new versions as time passes, you may need to run `helm repo update`.)
+
+Great! Now you can start installing.
+
+There is a shell script for doing a version of the following procedure with Helm 2. For reference, examine [prod-setup.sh](https://github.com/wireapp/wire-server-deploy/blob/develop/bin/prod-setup.sh).
+
+## Watching changes as they happen
+
+Open a terminal and run:
+
+```shell
+kubectl get pods -w
+```
+
+This will block your terminal and show some things happening as you proceed through this guide. 
Keep this terminal open and open a second terminal.
+
+## General installation notes
+
+```{note}
+All helm and kubectl commands below can also take an extra `--namespace <namespace>` if you don't want to install into the default Kubernetes namespace.
+```
+
+## How to install charts that provide access to external databases
+
+Before you can deploy the helm charts that tell wire where external services
+are, you need the 'values' and 'secrets' files for those charts to be
+configured. Values and secrets YAML files provide helm charts with the settings
+that are installed in Kubernetes.
+
+Assuming you have followed the procedures in the previous document, the values
+and secrets files for cassandra, elasticsearch, and minio (if you are using it)
+will have been filled in automatically. If not, examine the
+`prod-values.example.yaml` files for each of these services in
+`values/<service-name>/`, copy them to `values.yaml`, and then edit them.
+
+Once the values and secrets files for your databases have been configured, you
+have to write a `values/databases-ephemeral/values.yaml` file to tell
+databases-ephemeral what external database services you are using, and what
+services you want databases-ephemeral to configure. We recommend you use only
+the 'redis' component of this chart, as the contents of redis are in fact
+ephemeral. 
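For illustration only, such an override file might look roughly like this (the key names below are hypothetical; take the real ones from `prod-values.example.yaml`):

```yaml
# Hypothetical sketch of values/databases-ephemeral/values.yaml.
# With cassandra and elasticsearch provided by external installations,
# only the ephemeral redis component stays enabled. The actual key
# names must be taken from prod-values.example.yaml.
tags:
  cassandra-ephemeral: false
  elasticsearch-ephemeral: false
  redis-ephemeral: true
```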
Look at the `values/databases-ephemeral/prod-values.example.yaml`
+file for reference.
+
+Once you have values and secrets for your environment, open a terminal and run:
+
+```shell
+helm upgrade --install cassandra-external wire/cassandra-external -f values/cassandra-external/values.yaml --wait
+helm upgrade --install elasticsearch-external wire/elasticsearch-external -f values/elasticsearch-external/values.yaml --wait
+helm upgrade --install databases-ephemeral wire/databases-ephemeral -f values/databases-ephemeral/values.yaml --wait
+```
+
+If you are using minio instead of AWS S3, you should also run:
+
+```shell
+helm upgrade --install minio-external wire/minio-external -f values/minio-external/values.yaml --wait
+```
+
+## How to install fake AWS services for SNS / SQS
+
+AWS SNS is required to send notifications to clients. SQS is used to get notified of any devices that have discontinued using Wire (e.g. if you uninstall the app, the push notification token is removed, and the wire-server will get feedback for that using SQS).
+
+Note: *for using real SQS for real native push notifications instead, see also {ref}`pushsns`.*
+
+If you use the fake-aws version, clients will use the websocket method to receive notifications, which keeps connections to the servers open, draining battery.
+
+Open a terminal and run:
+
+```shell
+cp values/fake-aws/prod-values.example.yaml values/fake-aws/values.yaml
+helm upgrade --install fake-aws wire/fake-aws -f values/fake-aws/values.yaml --wait
+```
+
+You should see some pods being created in your first terminal as the above command completes.
+
+## Preparing to install wire-server
+
+As part of configuring wire-server, we need to change some values and provide some secrets. We're going to copy the files for this to a new folder, so that you always have the originals for reference.
+
+```{note}
+This part of the process makes use of overrides for helm charts. You may wish to read {ref}`understand-helm-overrides` first. 
+
+```
+
+```shell
+mkdir -p my-wire-server
+cp values/wire-server/prod-secrets.example.yaml my-wire-server/secrets.yaml
+cp values/wire-server/prod-values.example.yaml my-wire-server/values.yaml
+```
+
+## How to configure real SMTP (email) services
+
+In order for users to interact with their wire account, they need to receive mail from your wire server.
+
+If you are using a mail server, you will need to provide your authentication credentials before setting up wire.
+
+- Add your SMTP username in my-wire-server/values.yaml under `brig.config.smtp.username`. You may need to add an entry for username.
+- Add your SMTP password in my-wire-server/secrets.yaml under `brig.secrets.smtpPassword`.
+
+## How to install fake SMTP (email) services
+
+If you are not making use of mail services, and are adding your users via some other means, you can use demo-smtp as a placeholder.
+
+```shell
+cp values/demo-smtp/prod-values.example.yaml values/demo-smtp/values.yaml
+helm upgrade --install smtp wire/demo-smtp -f values/demo-smtp/values.yaml
+```
+
+You should see some pods being created in your first terminal as the above command completes.
+
+## How to install wire-server itself
+
+Open `my-wire-server/values.yaml` and replace `example.com` and other domains and subdomains with domains of your choosing. Look for the `# change this` comments. You can try using `sed -i 's/example.com/<your domain>/g' values.yaml`.
+
+1. If you are not using team settings, comment out `teamSettings` under `brig.config.externalURLs`.
+
+Generate some secrets:
+
+```shell
+openssl rand -base64 64 | env LC_CTYPE=C tr -dc a-zA-Z0-9 | head -c 42 > my-wire-server/restund.txt
+sudo apt install docker-ce
+sudo docker run --rm quay.io/wire/alpine-intermediate /dist/zauth -m gen-keypair -i 1 > my-wire-server/zauth.txt
+```
+
+1. Add the generated secret from my-wire-server/restund.txt to my-wire-server/secrets.yaml under `brig.secrets.turn.secret`
+2. 
add **both** the public and private parts from zauth.txt to secrets.yaml under `brig.secrets.zAuth` +3. Add the public key from zauth.txt to secrets.yaml under `nginz.secrets.zAuth.publicKeys` + +Great, now try the installation: + +```shell +helm upgrade --install wire-server wire/wire-server -f my-wire-server/values.yaml -f my-wire-server/secrets.yaml --wait +``` + +(helmdns)= + +## DNS records + +```{eval-rst} +.. include:: includes/helm_dns-ingress-troubleshooting.inc.rst +``` diff --git a/docs/src/how-to/install/helm-prod.rst b/docs/src/how-to/install/helm-prod.rst deleted file mode 100644 index fb9b81841d..0000000000 --- a/docs/src/how-to/install/helm-prod.rst +++ /dev/null @@ -1,225 +0,0 @@ -.. _helm_prod: - -Installing wire-server (production) components using Helm -========================================================= - -.. note:: - - Code in this repository should be considered *beta*. As of 2020, we do not (yet) - run our production infrastructure on Kubernetes (but plan to do so soon). - -Introduction ------------- - -The following will install a version of all the wire-server components. These instructions are for reference, and may not set up what you would consider a production environment, due to the fact that there are varying definitions of 'production ready'. These instructions will cover what we consider to be a useful overlap of our users' production needs. They do not cover load balancing/distributing, using multiple datacenters, federating wire, or other forms of intercontinental/interplanetary distribution of the wire service infrastructure. If you deviate from these directions and need to contact us for support, please provide the deviations you made to fit your production environment along with your support request. - -Some of the instructions here will present you with two options: No AWS, and with AWS. The 'No AWS' instructions will not require any AWS infrastructure, but may have a reduced feature set. 
The 'with AWS' instructions will assume you have completed the setup procedures in :ref:`aws_prod`. - -What will be installed? -^^^^^^^^^^^^^^^^^^^^^^^ - -- wire-server (API) - - user accounts, authentication, conversations - - assets handling (images, files, ...) - - notifications over websocket -- wire-webapp, a fully functioning web client (like ``https://app.wire.com/``) -- wire-account-pages, user account management (a few pages relating to e.g. password reset procedures) - -What will not be installed? -^^^^^^^^^^^^^^^^^^^^^^^^^^^ - -- team-settings page -- SSO Capabilities - -Additionally, if you opt to do the 'No AWS' route, you will not get: - -- notifications over native push notifications via `FCM `__/`APNS `__ - -Prerequisites -------------- - -You need to have access to a Kubernetes cluster running a Kubernetes version , and the ``helm`` local binary on your PATH. -Your Kubernetes cluster needs to have internal DNS services, so that wire-server can find it's databases. -You need to have docker on the machine you are using to perform this installation with, or a secure data path to a machine that runs docker. You will be using docker to generate security credentials for your wire installation. - -* If you want calling services, you need to have - - * FIXME - -* If you don't have a Kubernetes cluster, you have two options: - - * You can get access to a managed Kubernetes cluster with the cloud provider of your choice. - * You can install one if you have ssh access to a set of sufficiently large virtual machines, see :ref:`ansible-kubernetes` - -* If you don't have ``helm`` yet, see `Installing helm `__. If you followed the instructions in :ref:`dependencies` should have helm installed already. 
- - -Type ``helm version``, you should, if everything is configured correctly, see a result similar this: - -:: - - version.BuildInfo{Version:"v3.1.1", GitCommit:"afe70585407b420d0097d07b21c47dc511525ac8", GitTreeState:"clean", GoVersion:"go1.13.8"} - -In case ``kubectl version`` shows both Client and Server versions, but ``helm version`` does not show a Server version, you may need to run ``helm init``. The exact version matters less as long as both Client and Server versions match (or are very close). - - -Preparing to install charts from the internet with Helm -------------------------------------------------------- -If your environment is online, you need to add the remote wire Helm repository, to download wire charts. - -To enable the wire charts helm repository: - -.. code:: shell - - helm repo add wire https://s3-eu-west-1.amazonaws.com/public.wire.com/charts - -(You can see available helm charts by running ``helm search repo wire/``. To see -new versions as time passes, you may need to run ``helm repo update``) - -Great! Now you can start installing. - -There is a shell script for doing a version of the following procedure with Helm 22. For reference, examine `prod-setup.sh `__. - -Watching changes as they happen -------------------------------- - -Open a terminal and run: - -.. code:: shell - - kubectl get pods -w - -This will block your terminal and show some things happening as you proceed through this guide. Keep this terminal open and open a second terminal. - -General installation notes --------------------------- - -.. note:: - - All helm and kubectl commands below can also take an extra ``--namespace `` if you don't want to install into the default Kubernetes namespace. 
- -How to install charts that provide access to external databases ---------------------------------------------------------------- - -Before you can deploy the helm charts that tell wire where external services -are, you need the 'values' and 'secrets' files for those charts to be -configured. Values and secrets YAML files provide helm charts with the settings -that are installed in Kubernetes. - -Assuming you have followed the procedures in the previous document, the values -and secrets files for cassandra, elasticsearch, and minio (if you are using it) -will have been filled in automatically. If not, examine the -``prod-values.example.yaml`` files for each of these services in -values//, copy them to ``values.yaml``, and then edit them. - -Once the values and secrets files for your databases have been configured, you -have to write a ``values/databases-ephemeral/values.yaml`` file to tell -databases-ephemeral what external database services you are using, and what -services you want databases-ephemeral to configure. We recommend you use the -'redis' component from this only, as the contents of redis are in fact -ephemeral. Look at the ``values/databases-ephemeral/prod-values.example.yaml`` -file - -Once you have values and secrets for your environment, open a terminal and run: - -.. code:: shell - - helm upgrade --install cassandra-external wire/cassandra-external -f values/cassandra-external/values.yaml --wait - helm upgrade --install elasticsearch-external wire/elasticsearch-external -f values/elasticsearch-external/values.yaml --wait - helm upgrade --install databases-ephemeral wire/databases-ephemeral -f values/databases-ephemeral/values.yaml --wait - -If you are using minio instead of AWS S3, you should also run: - -.. 
code:: shell - - helm upgrade --install minio-external wire/minio-external -f values/minio-external/values.yaml --wait - -How to install fake AWS services for SNS / SQS ----------------------------------------------- - -AWS SNS is required to send notifications to clients. SQS is used to get notified of any devices that have discontinued using Wire (e.g. if you uninstall the app, the push notification token is removed, and the wire-server will get feedback for that using SQS). - -Note: *for using real SQS for real native push notifications instead, see also :ref:`pushsns`.* - -If you use the fake-aws version, clients will use the websocket method to receive notifications, which keeps connections to the servers open, draining battery. - -Open a terminal and run: - -.. code:: shell - - cp values/fake-aws/prod-values.example.yaml values/fake-aws/values.yaml - helm upgrade --install fake-aws wire/fake-aws -f values/fake-aws/values.yaml --wait - -You should see some pods being created in your first terminal as the above command completes. - - -Preparing to install wire-server --------------------------------- -As part of configuring wire-server, we need to change some values, and provide some secrets. We're going to copy the files for this to a new folder, so that you always have the originals for reference. - -.. note:: - - This part of the process makes use of overrides for helm charts. You may wish to read :ref:`understand-helm-overrides` first. - - -.. code:: shell - - mkdir -p my-wire-server - cp values/wire-server/prod-secrets.example.yaml my-wire-server/secrets.yaml - cp values/wire-server/prod-values.example.yaml my-wire-server/values.yaml - - -How to configure real SMTP (email) services -------------------------------------------- -In order for users to interact with their wire account, they need to receive mail from your wire server. - -If you are using a mail server, you will need to provide your authentication credentials before setting up wire. 
- -- Add your SMTP username in my-wire-server/values.yaml under ``brig.config.smtp.username``. You may need to add an entry for username. -- Add your SMTP password is my-wire-server/secrets.yaml under ``brig.secrets.smtpPassword``. - - -How to install fake SMTP (email) services ------------------------------------------ -If you are not making use of mail services, and are adding your users via some other means, you can use demo-smtp, as a placeholder. - -.. code:: shell - - cp values/demo-smtp/prod-values.example.yaml values/demo-smtp/values.yaml - helm upgrade --install smtp wire/demo-smtp -f values/demo-smtp/values.yaml - - -You should see some pods being created in your first terminal as the above command completes. - -How to install wire-server itself ---------------------------------- - -Open ``my-wire-server/values.yaml`` and replace ``example.com`` and other domains and subdomains with domains of your choosing. Look for the ``# change this`` comments. You can try using ``sed -i 's/example.com//g' values.yaml``. - -1. If you are not using team settings, comment out ``teamSettings`` under ``brig.config.externalURLs``. - - -Generate some secrets: - -.. code:: shell - - openssl rand -base64 64 | env LC_CTYPE=C tr -dc a-zA-Z0-9 | head -c 42 > my-wire-server/restund.txt - apt install docker-ce - sudo docker run --rm quay.io/wire/alpine-intermediate /dist/zauth -m gen-keypair -i 1 > my-wire-server/zauth.txt - -1. Add the generated secret from my-wire-server/restund.txt to my-wire-serwer/secrets.yaml under ``brig.secrets.turn.secret`` -2. add **both** the public and private parts from zauth.txt to secrets.yaml under ``brig.secrets.zAuth`` -3. Add the public key from zauth.txt to secrets.yaml under ``nginz.secrets.zAuth.publicKeys`` - -Great, now try the installation: - -.. code:: shell - - helm upgrade --install wire-server wire/wire-server -f my-wire-server/values.yaml -f my-wire-server/secrets.yaml --wait - -.. _helmdns: - -DNS records ------------ - -.. 
include:: includes/helm_dns-ingress-troubleshooting.inc.rst
diff --git a/docs/src/how-to/install/helm.md b/docs/src/how-to/install/helm.md
new file mode 100644
index 0000000000..75ce93eda2
--- /dev/null
+++ b/docs/src/how-to/install/helm.md
@@ -0,0 +1,145 @@
+(helm)=
+
+# Installing wire-server (demo) components using helm
+
+## Introduction
+
+The following will install a demo version of all the wire-server components including the databases. This setup is not recommended in production but will get you started.
+
+Demo version means
+
+- easy setup - only one single machine with kubernetes is needed (make sure you have at least 4 CPU cores and 8 GB of memory available)
+- no data persistence (everything stored in memory, will be lost)
+
+### What will be installed?
+
+- wire-server (API)
+  - user accounts, authentication, conversations
+  - assets handling (images, files, ...)
+  - notifications over websocket
+- wire-webapp, a fully functioning web client (like `https://app.wire.com`)
+- wire-account-pages, user account management (a few pages relating to e.g. password reset)
+
+### What will not be installed?
+
+- notifications over native push notifications via [FCM](https://firebase.google.com/docs/cloud-messaging/)/[APNS](https://developer.apple.com/notifications/)
+- audio/video calling servers using {ref}`understand-restund`
+- team-settings page
+
+## Prerequisites
+
+You need to have access to a kubernetes cluster, and the `helm` local binary on your PATH.
+
+- If you don't have a kubernetes cluster, you have two options:
+
+  - You can get access to a managed kubernetes cluster with the cloud provider of your choice.
+  - You can install one if you have ssh access to a virtual machine, see {ref}`ansible-kubernetes`
+
+- If you don't have `helm` yet, see [Installing helm](https://helm.sh/docs/using_helm/#installing-helm). 
+
+Type `helm version`; if everything is configured correctly, you should see a result like this:
+
+```
+version.BuildInfo{Version:"v3.1.1", GitCommit:"afe70585407b420d0097d07b21c47dc511525ac8", GitTreeState:"clean", GoVersion:"go1.13.8"}
+```
+
+In case `kubectl version` shows both Client and Server versions, but `helm version` does not show a Server version, you may need to run `helm init`. The exact version matters less as long as both Client and Server versions match (or are very close).
+
+## How to start installing charts from wire
+
+Enable the wire charts helm repository:
+
+```shell
+helm repo add wire https://s3-eu-west-1.amazonaws.com/public.wire.com/charts
+```
+
+(You can see available helm charts by running `helm search repo wire/`. To see
+new versions as time passes, you may need to run `helm repo update`.)
+
+Great! Now you can start installing.
+
+```{note}
+All commands below can also take an extra `--namespace <namespace>` if you don't want to install into the default kubernetes namespace.
+```
+
+## Watching changes as they happen
+
+Open a terminal and run:
+
+```shell
+kubectl get pods -w
+```
+
+This will block your terminal and show some things happening as you proceed through this guide. Keep this terminal open and open a second terminal.
+
+## How to install in-memory databases and external components
+
+In your second terminal, first install databases:
+
+```shell
+helm upgrade --install databases-ephemeral wire/databases-ephemeral --wait
+```
+
+You should see some pods being created in your first terminal as the above command completes.
+
+You can do the following two steps (mock aws services and demo smtp
+server) in parallel with the above in two more terminals, or
+sequentially after databases-ephemeral installation has succeeded. 
+
+```shell
+helm upgrade --install fake-aws wire/fake-aws --wait
+helm upgrade --install smtp wire/demo-smtp --wait
+```
+
+## How to install wire-server itself
+
+```{note}
+The following makes use of overrides for helm charts. You may wish to read {ref}`understand-helm-overrides` first.
+```
+
+Change back to the wire-server-deploy directory. Copy example demo values and secrets:
+
+```shell
+mkdir -p wire-server && cd wire-server
+cp ../values/wire-server/demo-secrets.example.yaml secrets.yaml
+cp ../values/wire-server/demo-values.example.yaml values.yaml
+```
+
+Or, if you are not in wire-server-deploy, download example demo values and secrets:
+
+```shell
+mkdir -p wire-server && cd wire-server
+curl -sSL https://raw.githubusercontent.com/wireapp/wire-server-deploy/master/values/wire-server/demo-secrets.example.yaml > secrets.yaml
+curl -sSL https://raw.githubusercontent.com/wireapp/wire-server-deploy/master/values/wire-server/demo-values.example.yaml > values.yaml
+```
+
+Open `values.yaml` and replace `example.com` and other domains and subdomains with domains of your choosing. Look for the `# change this` comments. You can try using `sed -i 's/example.com/<your domain>/g' values.yaml`.
+
+Generate some secrets (if you are using the docker image from {ref}`ansible-kubernetes`, you should open a shell on the host system for this):
+
+```shell
+openssl rand -base64 64 | env LC_CTYPE=C tr -dc a-zA-Z0-9 | head -c 42 > restund.txt
+docker run --rm quay.io/wire/alpine-intermediate /dist/zauth -m gen-keypair -i 1 > zauth.txt
+```
+
+1. Add the generated secret from restund.txt to secrets.yaml under `brig.secrets.turn.secret`
+2. Add **both** the public and private parts from zauth.txt to secrets.yaml under `brig.secrets.zAuth`
+3. 
Add the public key from zauth.txt **also** to secrets.yaml under `nginz.secrets.zAuth.publicKeys` + +You can do this with an editor, or using sed: + +```shell +sed -i 's/secret:$/secret: content_of_restund.txt_file/' secrets.yaml +sed -i 's/publicKeys: ""/publicKeys: "public_key_from_zauth.txt_file"/' secrets.yaml +sed -i 's/privateKeys: ""/privateKeys: "private_key_from_zauth.txt_file"/' secrets.yaml +``` + +Great, now try the installation: + +```shell +helm upgrade --install wire-server wire/wire-server -f values.yaml -f secrets.yaml --wait +``` + +```{eval-rst} +.. include:: includes/helm_dns-ingress-troubleshooting.inc.rst +``` diff --git a/docs/src/how-to/install/helm.rst b/docs/src/how-to/install/helm.rst deleted file mode 100644 index 695a4c95a3..0000000000 --- a/docs/src/how-to/install/helm.rst +++ /dev/null @@ -1,154 +0,0 @@ -.. _helm: - -Installing wire-server (demo) components using helm -====================================================== - -Introduction ------------------ - -The following will install a demo version of all the wire-server components including the databases. This setup is not recommended in production but will get you started. - -Demo version means - -* easy setup - only one single machine with kubernetes is needed (make sure you have at least 4 CPU cores and 8 GB of memory available) -* no data persistence (everything stored in memory, will be lost) - -What will be installed? -^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ - -- wire-server (API) - - user accounts, authentication, conversations - - assets handling (images, files, ...) - - notifications over websocket - -- wire-webapp, a fully functioning web client (like ``https://app.wire.com``) -- wire-account-pages, user account management (a few pages relating to e.g. password reset) - -What will not be installed? 
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ - -- notifications over native push notifications via `FCM `__/`APNS `__ -- audio/video calling servers using :ref:`understand-restund`) -- team-settings page - -Prerequisites --------------------------------- - -You need to have access to a kubernetes cluster, and the ``helm`` local binary on your PATH. - -* If you don't have a kubernetes cluster, you have two options: - - * You can get access to a managed kubernetes cluster with the cloud provider of your choice. - * You can install one if you have ssh access to a virtual machine, see :ref:`ansible-kubernetes` - -* If you don't have ``helm`` yet, see `Installing helm `__. - -Type ``helm version``, you should, if everything is configured correctly, see a result like this: - -:: - - version.BuildInfo{Version:"v3.1.1", GitCommit:"afe70585407b420d0097d07b21c47dc511525ac8", GitTreeState:"clean", GoVersion:"go1.13.8"} - - -In case ``kubectl version`` shows both Client and Server versions, but ``helm version`` does not show a Server version, you may need to run ``helm init``. The exact version (assuming `v2.X.X` - at the time of writing v3 is not yet supported) matters less as long as both Client and Server versions match (or are very close). - -How to start installing charts from wire --------------------------------------------------- - -Enable the wire charts helm repository: - -.. code:: shell - - helm repo add wire https://s3-eu-west-1.amazonaws.com/public.wire.com/charts - -(You can see available helm charts by running ``helm search repo wire/``. To see -new versions as time passes, you may need to run ``helm repo update``) - -Great! Now you can start installing. - -.. note:: - - all commands below can also take an extra ``--namespace `` if you don't want to install into the default kubernetes namespace. - -Watching changes as they happen --------------------------------------------------- - -Open a terminal and run - -.. 
code:: shell - - kubectl get pods -w - -This will block your terminal and show some things happening as you proceed through this guide. Keep this terminal open and open a second terminal. - -How to install in-memory databases and external components --------------------------------------------------------------- - -In your second terminal, first install databases: - -.. code:: shell - - helm upgrade --install databases-ephemeral wire/databases-ephemeral --wait - -You should see some pods being created in your first terminal as the above command completes. - -You can do the following two steps (mock aws services and demo smtp -server) in parallel with the above in two more terminals, or -sequentially after database-ephemeral installation has succeeded. - -.. code:: shell - - helm upgrade --install fake-aws wire/fake-aws --wait - helm upgrade --install smtp wire/demo-smtp --wait - -How to install wire-server itself ---------------------------------------- - -.. note:: - - The following makes use of overrides for helm charts. You may wish to read :ref:`understand-helm-overrides` first. - -Change back to the wire-server-deploy directory. Copy example demo values and secrets: - -.. code:: shell - - mkdir -p wire-server && cd wire-server - cp ../values/wire-server/demo-secrets.example.yaml secrets.yaml - cp ../values/wire-server/demo-values.example.yaml values.yaml - -Or, if you are not in wire-server-deploy, download example demo values and secrets: - -.. code:: shell - - mkdir -p wire-server && cd wire-server - curl -sSL https://raw.githubusercontent.com/wireapp/wire-server-deploy/master/values/wire-server/demo-secrets.example.yaml > secrets.yaml - curl -sSL https://raw.githubusercontent.com/wireapp/wire-server-deploy/master/values/wire-server/demo-values.example.yaml > values.yaml - -Open ``values.yaml`` and replace ``example.com`` and other domains and subdomains with domains of your choosing. Look for the ``# change this`` comments. 
You can try using ``sed -i 's/example.com//g' values.yaml``. - -Generate some secrets (if you are using the docker image from :ref:`ansible-kubernetes`, you should open a shell on the host system for this): - -.. code:: shell - - openssl rand -base64 64 | env LC_CTYPE=C tr -dc a-zA-Z0-9 | head -c 42 > restund.txt - docker run --rm quay.io/wire/alpine-intermediate /dist/zauth -m gen-keypair -i 1 > zauth.txt - -1. Add the generated secret from restund.txt to secrets.yaml under ``brig.secrets.turn.secret`` -2. add **both** the public and private parts from zauth.txt to secrets.yaml under ``brig.secrets.zAuth`` -3. Add the public key from zauth.txt **also** to secrets.yaml under ``nginz.secrets.zAuth.publicKeys`` - -You can do this with an editor, or using sed: - -.. code:: shell - - sed -i 's/secret:$/secret: content_of_restund.txt_file/' secrets.yaml - sed -i 's/publicKeys: ""/publicKeys: "public_key_from_zauth.txt_file"/' secrets.yaml - sed -i 's/privateKeys: ""/privateKeys: "private_key_from_zauth.txt_file"/' secrets.yaml - -Great, now try the installation: - -.. code:: shell - - helm upgrade --install wire-server wire/wire-server -f values.yaml -f secrets.yaml --wait - -.. include:: includes/helm_dns-ingress-troubleshooting.inc.rst diff --git a/docs/src/how-to/install/includes/dns-federation.rst b/docs/src/how-to/install/includes/dns-federation.rst deleted file mode 100644 index c25184ffbe..0000000000 --- a/docs/src/how-to/install/includes/dns-federation.rst +++ /dev/null @@ -1,43 +0,0 @@ -DNS setup for federation ------------------------- - -SRV record -^^^^^^^^^^ - -One prerequisite to enable federation is an `SRV record `__ as defined in `RFC -2782 `__ that needs to be set up to allow the wire-server to be -discovered by other Wire backends. See the documentation on :ref:`discovery in federation` for more -information on the role of discovery in federation. 
- -The fields of the SRV record need to be populated as follows - -* ``service``: ``wire-server-federator`` -* ``proto``: ``tcp`` -* ``name``: -* ``TTL``: e.g. 600 (10 minutes) in an initial phase. This can be set to a higher value (e.g. 86400) if your systems are stable and DNS records don't change a lot. -* ``priority``: anything. A good default value would be 0 -* ``weight``: >0 for your server to be reachable. A good default value could be 10 -* ``port``: ``443`` -* ``target``: - -To give an example, assuming - -* your federation :ref:`Backend Domain ` is ``example.com`` -* your domains for other services already set up follow the convention ``.wire.example.org`` - -then your federation :ref:`Infra Domain ` would be ``federator.wire.example.org``. - -The SRV record would look as follows: - -.. code-block:: bash - - # _service._proto.name. ttl IN SRV priority weight port target. - _wire-server-federator._tcp.example.com. 600 IN SRV 0 10 443 federator.wire.example.org. - -DNS A record for the federator -^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ - -Background: ``federator`` is the server component responsible for incoming and outgoing requests to other backend; but it is proxied on -the incoming requests by the ingress component on kubernetes as shown in :ref:`Federation Architecture` - -As mentioned in :ref:`DNS setup for Helm`, you also need a ``federator.`` record, which, alongside your other DNS records that point to the ingress component, also needs to point to the IP of your ingress, i.e. the IP you want to provide services on. 
diff --git a/docs/src/how-to/install/includes/helm_dns-ingress-troubleshooting.inc.rst b/docs/src/how-to/install/includes/helm_dns-ingress-troubleshooting.inc.rst index 90b9e1f3b5..610ca8c784 100644 --- a/docs/src/how-to/install/includes/helm_dns-ingress-troubleshooting.inc.rst +++ b/docs/src/how-to/install/includes/helm_dns-ingress-troubleshooting.inc.rst @@ -143,8 +143,6 @@ Next, we want to redirect port 443 to the port the nginx https ingress nodeport * Option 2: Use ansible to do that, run the `iptables playbook `__ -.. include:: ./includes/dns-federation.rst - Trying things out ----------------- diff --git a/docs/src/how-to/install/index.md b/docs/src/how-to/install/index.md new file mode 100644 index 0000000000..2758ad819a --- /dev/null +++ b/docs/src/how-to/install/index.md @@ -0,0 +1,30 @@ +# Installation + +```{toctree} +:glob: true +:maxdepth: 2 + +How to plan an installation +Version requirements +dependencies +(demo) How to install kubernetes +(demo) How to install wire-server using Helm +(production) Introduction +(production) How to install kubernetes and databases +(production) How to configure AWS services +(production) How to install wire-server using Helm +(production) How to monitor wire-server +(production) How to see centralized logs for wire-server +Server and team feature settings +Messaging Layer Security (MLS) +Web app settings +sft +restund +configure-federation +tls +How to install and set up Legal Hold +Managing authentication with ansible +Using tinc +Troubleshooting during installation +Verifying your installation +``` diff --git a/docs/src/how-to/install/kubernetes.md b/docs/src/how-to/install/kubernetes.md new file mode 100644 index 0000000000..1c4430eefd --- /dev/null +++ b/docs/src/how-to/install/kubernetes.md @@ -0,0 +1,85 @@ +(ansible-kubernetes)= + +# Installing kubernetes for a demo installation (on a single virtual machine) + +## How to set up your hosts.ini file + +Assuming a single virtual machine with a public IP address 
running Ubuntu 18.04, with at least 5 CPU cores and at least 8 GB of memory.
+
+Move to `wire-server-deploy/ansible`:
+
+```shell
+cd ansible/
+```
+
+Then:
+
+```{eval-rst}
+.. include:: includes/ansible-authentication-blob.rst
+```
+
+## Passwordless authentication
+
+Presuming a fresh default Ubuntu 18.04 installation, the following steps will enable the Ansible playbook to run without specifying passwords.
+
+This presumes you named your default Ubuntu user "wire", and X.X.X.X is the IP or domain name of the target server Ansible will install Kubernetes on.
+
+On the client (from `wire-server-deploy/ansible`), run:
+
+```shell
+ssh-keygen -f /root/.ssh/id_rsa -t rsa -P ""
+ssh-copy-id wire@X.X.X.X
+sed -i 's/# ansible_user = .../ansible_user = wire/g' inventory/demo/hosts.ini
+```
+
+And on the server (X.X.X.X), run:
+
+```shell
+echo 'wire ALL=(ALL) NOPASSWD:ALL' | sudo tee -a /etc/sudoers
+```
+
+Then on the client:
+
+```shell
+cp inventory/demo/hosts.example.ini inventory/demo/hosts.ini
+```
+
+Open hosts.ini and replace `X.X.X.X` with the IP address of your virtual machine that you use for ssh access. You can try using:
+
+```shell
+sed -i 's/X.X.X.X/1.2.3.4/g' inventory/demo/hosts.ini
+```
+
+## Minio setup
+
+In the `inventory/demo/hosts.ini` file, edit the minio variables in `[minio:vars]` (`prefix`, `domain` and `deeplink_title`)
+by replacing `example.com` with your own domain.
+
+## How to install kubernetes
+
+From `wire-server-deploy/ansible`:
+
+```
+ansible-playbook -i inventory/demo/hosts.ini kubernetes.yml -vv
+```
+
+When the playbook finishes correctly (which can take up to 20 minutes), you should have a folder `artifacts` containing a file `admin.conf`.
Copy this file: + +``` +mkdir -p ~/.kube +cp artifacts/admin.conf ~/.kube/config +KUBECONFIG=~/.kube/config +``` + +Make sure you can reach the server: + +``` +kubectl version +``` + +should give output similar to this: + +``` +Client Version: version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.2", GitCommit:"66049e3b21efe110454d67df4fa62b08ea79a19b", GitTreeState:"clean", BuildDate:"2019-05-16T16:23:09Z", GoVersion:"go1.12.5", Compiler:"gc", Platform:"linux/amd64"} +Server Version: version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.2", GitCommit:"66049e3b21efe110454d67df4fa62b08ea79a19b", GitTreeState:"clean", BuildDate:"2019-05-16T16:14:56Z", GoVersion:"go1.12.5", Compiler:"gc", Platform:"linux/amd64"} +``` diff --git a/docs/src/how-to/install/kubernetes.rst b/docs/src/how-to/install/kubernetes.rst deleted file mode 100644 index d4e423dfa4..0000000000 --- a/docs/src/how-to/install/kubernetes.rst +++ /dev/null @@ -1,83 +0,0 @@ -.. _ansible-kubernetes: - -Installing kubernetes for a demo installation (on a single virtual machine) -============================================================================ - - -How to set up your hosts.ini file -------------------------------------- - -Assuming a single virtual machine with a public IP address running Ubuntu 18.04, with at least 5 CPU cores and at least 8 GB of memory. - -Move to ``wire-server-deploy/ansible``: - -.. code:: shell - - cd ansible/ - -Then: - -.. include:: includes/ansible-authentication-blob.rst - -Passwordless authentication ---------------------------- - -Presuming a fresh default Ubuntu 18.04 installation, the following steps will enable the Ansible playbook to run without specifying passwords. - -This presumes you named your default Ubuntu user "wire", and X.X.X.X is the IP or domain name of the target server Ansible will install Kubernetes on. - -On the client (from ``wire-server-deploy/ansible``), run: - -.. 
code:: shell - - ssh-keygen -f /root/.ssh/id_rsa -t rsa -P - ssh-copy-id wire@X.X.X.X - sed -i 's/# ansible_user = .../ansible_user = wire/g' inventory/demo/hosts.ini - -And on the server (X.X.X.X), run: - -.. code:: shell - - echo 'wire ALL=(ALL) NOPASSWD:ALL' | sudo tee -a /etc/sudoers - -Then on the client: - -.. code:: shell - - cp inventory/demo/hosts.example.ini inventory/demo/hosts.ini - -Open hosts.ini and replace `X.X.X.X` with the IP address of your virtual machine that you use for ssh access. You can try using: - -.. code:: shell - - sed -i 's/X.X.X.X/1.2.3.4/g' inventory/demo/hosts.ini - -Minio setup ------------ - -In the ``inventory/demo/hosts.ini`` file, edit the minio variables in ``[minio:vars]`` (``prefix``, ``domain`` and ``deeplink_title``) -by replacing ``example.com`` with your own domain. - -How to install kubernetes --------------------------- - -From ``wire-server-deploy/ansible``:: - - ansible-playbook -i inventory/demo/hosts.ini kubernetes.yml -vv - -When the playbook finishes correctly (which can take up to 20 minutes), you should have a folder ``artifacts`` containing a file ``admin.conf``. 
Copy this file:: - - mkdir -p ~/.kube - cp artifacts/admin.conf ~/.kube/config - KUBECONFIG=~/.kube/config - -Make sure you can reach the server:: - - kubectl version - -should give output similar to this:: - - Client Version: version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.2", GitCommit:"66049e3b21efe110454d67df4fa62b08ea79a19b", GitTreeState:"clean", BuildDate:"2019-05-16T16:23:09Z", GoVersion:"go1.12.5", Compiler:"gc", Platform:"linux/amd64"} - Server Version: version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.2", GitCommit:"66049e3b21efe110454d67df4fa62b08ea79a19b", GitTreeState:"clean", BuildDate:"2019-05-16T16:14:56Z", GoVersion:"go1.12.5", Compiler:"gc", Platform:"linux/amd64"} - - diff --git a/docs/src/how-to/install/logging.rst b/docs/src/how-to/install/logging.md similarity index 60% rename from docs/src/how-to/install/logging.rst rename to docs/src/how-to/install/logging.md index 5d9368c83c..ca4ea9341d 100644 --- a/docs/src/how-to/install/logging.rst +++ b/docs/src/how-to/install/logging.md @@ -1,182 +1,164 @@ -.. _logging: +(logging)= -Installing centralized logging dashboards using Kibana -======================================================== +# Installing centralized logging dashboards using Kibana -Introduction ------------- +## Introduction This page shows you how to install Elasticsearch, Kibana, and fluent-bit to aggregate and visualize the logs from wire-server components. -Status -------- +## Status Logging support is in active development as of September 2019, some logs may not be visible yet, and certain parts are not fully automated yet. -Prerequisites -------------- +## Prerequisites You need to have wire-server installed, see either of -* :ref:`helm` -* :ref:`helm_prod`. +- {ref}`helm` +- {ref}`helm-prod`. 
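The four `helm install` commands in the next section differ only in the chart name, so they can also be scripted. A minimal sketch — the `wire-logging` namespace is an assumption (any namespace works), and the commands are printed via `echo` so nothing runs until you remove it:

```shell
# Print (rather than run) the helm install command for each logging chart.
# Assumes the wire chart repository has already been added to helm.
NAMESPACE="wire-logging"   # hypothetical namespace; pick your own
for chart in elasticsearch-ephemeral elasticsearch-curator kibana fluent-bit; do
  echo helm install --namespace "${NAMESPACE}" "wire/${chart}"
done
```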
+## Installing required helm charts

-Installing required helm charts
--------------------------------
-
-
-Deploying Elasticsearch
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+### Deploying Elasticsearch

Elasticsearch indexes the logs and makes them searchable. The following
elasticsearch-ephemeral chart makes use of the disk space the pod happens to run
on.

-::
-
-   $ helm install --namespace <namespace> wire/elasticsearch-ephemeral
+```
+$ helm install --namespace <namespace> wire/elasticsearch-ephemeral
+```

Note that since we are not specifying a release name during helm install, it
generates a 'verb-noun' pair, and uses it. Elasticsearch's chart does not use
the release name of the helm chart in the pod name, sadly.

-Deploying Elasticsearch-Curator
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+### Deploying Elasticsearch-Curator

Elasticsearch-curator trims the logs that are contained in elasticsearch, so
that your elasticsearch pod does not get too large, crash, and need to be
re-built.

-::
-
-   $ helm install --namespace <namespace> wire/elasticsearch-curator
+```
+$ helm install --namespace <namespace> wire/elasticsearch-curator
+```

Note that since we are not specifying a release name during helm install, it
generates a 'verb-noun' pair, and uses it. If you look at your pod names, you
can see this name prepended to your pods in 'kubectl -n <namespace> get pods'.

-Deploying Kibana
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
-
-::
+### Deploying Kibana

-   $ helm install --namespace <namespace> wire/kibana
+```
+$ helm install --namespace <namespace> wire/kibana
+```

Note that since we are not specifying a release name during helm install, it
generates a 'verb-noun' pair, and uses it. If you look at your pod names, you
can see this name prepended to your pods in 'kubectl -n <namespace> get pods'.

-Deploying fluent-bit
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
-
-::
+### Deploying fluent-bit

-   $ helm install --namespace <namespace> wire/fluent-bit
+```
+$ helm install --namespace <namespace> wire/fluent-bit
+```

-Configuring fluent-bit
----------------------
+## Configuring fluent-bit

-.. note::
+```{note}
+The following makes use of overrides for helm charts. You may wish to read {ref}`understand-helm-overrides` first.
+```
-
-   The following makes use of overrides for helm charts. You may wish to read :ref:`understand-helm-overrides` first.
-
-Per pod-template, you can specify what parsers ``fluent-bit`` needs to
+Per pod-template, you can specify what parsers `fluent-bit` needs to
use to interpret the pod's logs in a structured way. By default, it just
parses them as plain text. But, you can change this using a pod
annotation. E.g.:

-::
-
-   apiVersion: v1
-   kind: Pod
-   metadata:
-     name: brig
-     labels:
-       app: brig
-     annotations:
-       fluentbit.io/parser: json
-   spec:
-     containers:
-     - name: apache
-       image: edsiper/apache_logs
-
-You can also define your own custom parsers in our ``fluent-bit``
-chart's ``values.yml``. For example, we have one defined for ``nginz``.
+```
+apiVersion: v1
+kind: Pod
+metadata:
+  name: brig
+  labels:
+    app: brig
+  annotations:
+    fluentbit.io/parser: json
+spec:
+  containers:
+  - name: apache
+    image: edsiper/apache_logs
+```
+
+You can also define your own custom parsers in our `fluent-bit`
+chart's `values.yml`. For example, we have one defined for `nginz`.
For more info, see:
-https://github.com/fluent/fluent-bit-docs/blob/master/filter/kubernetes.md#kubernetes-annotations
+<https://github.com/fluent/fluent-bit-docs/blob/master/filter/kubernetes.md#kubernetes-annotations>

Alternately, if there is already fluent-bit deployed in your environment, get
the helm name for the deployment (verb-noun prepended to the pod name), and

-::
-
-   $ helm upgrade <release-name> --namespace <namespace> wire/fluent-bit
+```
+$ helm upgrade <release-name> --namespace <namespace> wire/fluent-bit
+```

Note that since we are not specifying a release name during helm install, it
generates a 'verb-noun' pair, and uses it. If you look at your pod names, you
can see this name prepended to your pods in 'kubectl -n <namespace> get pods'.

-.. _post-install-kibana-setup:
+(post-install-kibana-setup)=

-Post-install kibana setup
--------------------------
+## Post-install kibana setup

Get the pod name for your kibana instance (not the one set up with
fluent-bit), and

-::
-
-   $ kubectl -n <namespace> port-forward <kibana-pod-name> 5601:5601
+```
+$ kubectl -n <namespace> port-forward <kibana-pod-name> 5601:5601
+```

go to 127.0.0.1:5601 in your web browser.

1. Click on 'discover'.
-2. Use ``kubernetes_cluster-*`` as the Index pattern.
+2. Use `kubernetes_cluster-*` as the Index pattern.
3. Click on 'Next step'
4. Click on the 'Time Filter field name' dropdown, and select
   '@timestamp'.
5. Click on 'create index pattern'.

-
-Usage after installation
-------------------------
+## Usage after installation

Get the pod name for your kibana instance (not the one set up with
fluent-bit), and

-::
-
-   $ kubectl -n <namespace> port-forward <kibana-pod-name> 5601:5601
+```
+$ kubectl -n <namespace> port-forward <kibana-pod-name> 5601:5601
+```

Go to 127.0.0.1:5601 in your web browser. Click on 'discover' to view
data.

-.. _nuking-it-all:
+(nuking-it-all)=

-Nuking it all.
--------------
+## Nuking it all.

-Find the names of the helm releases for your pods (look at ``helm ls --all``
-and ``kubectl -n <namespace> get pods`` , and run
-``helm del --purge <release-name>`` for each of them.
+Find the names of the helm releases for your pods (look at `helm ls --all`
+and `kubectl -n <namespace> get pods`), and run
+`helm del --purge <release-name>` for each of them.

Note: Elasticsearch does not use the name of the helm chart, and
therefore is harder to identify.

-Debugging
---------
-
-::
+## Debugging

-   kubectl -n <namespace> logs <pod-name>
+```
+kubectl -n <namespace> logs <pod-name>
+```

-How this was developed
-^^^^^^^^^^^^^^^^^^^^^^^^
+### How this was developed

First, we deployed elasticsearch with the elasticsearch-ephemeral
chart, then kibana. Then we deployed fluent-bit, which set up a kibana of its
diff --git a/docs/src/how-to/install/monitoring.rst b/docs/src/how-to/install/monitoring.md
similarity index 58%
rename from docs/src/how-to/install/monitoring.rst
rename to docs/src/how-to/install/monitoring.md
index ea900526cc..18f5a8865b 100644
--- a/docs/src/how-to/install/monitoring.rst
+++ b/docs/src/how-to/install/monitoring.md
@@ -1,21 +1,19 @@
-.. _monitoring:
+(monitoring)=

-Monitoring wire-server using Prometheus and Grafana
-=======================================================
+# Monitoring wire-server using Prometheus and Grafana

All wire-server helm charts offering prometheus metrics expose a
`metrics.serviceMonitor.enabled` option. If these are set to true, the helm
charts will install `ServiceMonitor` resources, which can be used to mark
services for scraping by
-[Prometheus Operator](https://prometheus-operator.dev/),
-[Grafana Agent Operator](https://grafana.com/docs/grafana-cloud/kubernetes-monitoring/agent-k8s/),
+[Prometheus Operator](https://prometheus-operator.dev/),
+[Grafana Agent Operator](https://grafana.com/docs/grafana-cloud/kubernetes-monitoring/agent-k8s/),
or similar prometheus-compatible tools. Refer to their documentation for
installation.

-Dashboards
------------------
+## Dashboards

-Grafana dashboard configurations are included as JSON inside the ``dashboards``
+Grafana dashboard configurations are included as JSON inside the `dashboards`
directory. You may import these via Grafana's web UI.
diff --git a/docs/src/how-to/install/planning.rst b/docs/src/how-to/install/planning.md
similarity index 55%
rename from docs/src/how-to/install/planning.rst
rename to docs/src/how-to/install/planning.md
index 29e84f97a6..1c3b1a5f44 100644
--- a/docs/src/how-to/install/planning.rst
+++ b/docs/src/how-to/install/planning.md
@@ -1,10 +1,8 @@
-Implementation plan
-====================================
+# Implementation plan

There are two types of implementation: demo and production.
-Demo installation (trying functionality out) ------------------------------------------------ +## Demo installation (trying functionality out) Please note that there is no way to migrate data from a demo installation to a production installation - it is really meant as a way @@ -14,36 +12,36 @@ Please note your data will be in-memory only and may disappear at any given mome What you need: -- a way to create **DNS records** for your domain name (e.g. - ``wire.example.com``) -- a way to create **SSL/TLS certificates** for your domain name (to allow - connecting via ``https://``) -- Either one of the following: +- a way to create **DNS records** for your domain name (e.g. + `wire.example.com`) - - A kubernetes cluster (some cloud providers offer a managed - kubernetes cluster these days). - - One single virtual machine running ubuntu 18.04 with at least 20 GB of disk, 8 GB of memory, and 8 CPU cores. +- a way to create **SSL/TLS certificates** for your domain name (to allow + connecting via `https://`) -A demo installation will look a bit like this: +- Either one of the following: + + - A kubernetes cluster (some cloud providers offer a managed + kubernetes cluster these days). + - One single virtual machine running ubuntu 18.04 with at least 20 GB of disk, 8 GB of memory, and 8 CPU cores. -.. figure:: img/architecture-demo.png +A demo installation will look a bit like this: - Demo installation (1 VM) +```{figure} img/architecture-demo.png +Demo installation (1 VM) +``` -Next steps for demo installation -~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ +### Next steps for demo installation -If you already have a kubernetes cluster, your next step will be :ref:`helm`, otherwise, your next step will be :ref:`ansible-kubernetes` +If you already have a kubernetes cluster, your next step will be {ref}`helm`, otherwise, your next step will be {ref}`ansible-kubernetes` -.. 
_planning_prod: +(planning-prod)= -Production installation (persistent data, high-availability) --------------------------------------------------------------- +## Production installation (persistent data, high-availability) What you need: -- a way to create **DNS records** for your domain name (e.g. ``wire.example.com``) -- a way to create **SSL/TLS certificates** for your domain name (to allow connecting via ``https://wire.example.com``) +- a way to create **DNS records** for your domain name (e.g. `wire.example.com`) +- a way to create **SSL/TLS certificates** for your domain name (to allow connecting via `https://wire.example.com`) - A **kubernetes cluster with at least 3 worker nodes and at least 3 etcd nodes** (some cloud providers offer a managed kubernetes cluster these days) - minimum **17 virtual machines** for components outside kubernetes (cassandra, minio, elasticsearch, redis, restund) @@ -51,13 +49,15 @@ A recommended installation of Wire-server in any regular data centre, configured with high-availability will require the following virtual servers: +```{eval-rst} .. include:: includes/vm-table.rst +``` A production installation will look a bit like this: -.. figure:: img/architecture-server-ha.png - - Production installation in High-Availability mode +```{figure} img/architecture-server-ha.png +Production installation in High-Availability mode +``` If you use a private datacenter (not a cloud provider), the easiest is to have three physical servers, each with one virtual machine for each @@ -71,7 +71,6 @@ Ensure that your VMs have IP addresses that do not change. Avoid `10.x.x.x` network address schemes, and instead use something like `192.168.x.x` or `172.x.x.x`. This is because internally, Kubernetes already uses a `10.x.x.x` address scheme, creating a potential conflict. 
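The `10.x.x.x` caveat above is easy to check mechanically before provisioning. A minimal sketch (the sample addresses are hypothetical; substitute the IPs you plan to assign to your VMs):

```shell
# Flag addresses in 10.0.0.0/8, which can collide with Kubernetes'
# own cluster-internal 10.x.x.x addressing.
check_ip() {
  case "$1" in
    10.*) echo "$1: potential conflict with cluster-internal 10.x.x.x addressing" ;;
    *)    echo "$1: ok" ;;
  esac
}

# Hypothetical planned VM addresses:
for ip in 192.168.10.5 172.16.0.9 10.42.0.7; do
  check_ip "$ip"
done
```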
-Next steps for high-available production installation -~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ +### Next steps for high-available production installation -Your next step will be :ref:`ansible_vms` +Your next step will be {ref}`ansible-vms` diff --git a/docs/src/how-to/install/post-install.md b/docs/src/how-to/install/post-install.md new file mode 100644 index 0000000000..6a513f0ece --- /dev/null +++ b/docs/src/how-to/install/post-install.md @@ -0,0 +1,132 @@ +# Verifying your installation + +After a successful installation of wire-server and its components, there are some useful checks to be run to ensure the proper functioning of the system. Here's a non-exhaustive list of checks to run on the hosts: + + +(ntp-check)= + +## NTP Checks + +Ensure that NTP is properly set up on all nodes. Particularly for Cassandra **DO NOT** use anything else other than ntp. Here are some helpful blogs that explain why: + +> - +> - + +### How can I see if NTP is correctly set up? + +This is an important part of your setup, particularly for your Cassandra nodes. You should use `ntpd` and our ansible scripts to ensure it is installed correctly - but you can still check it manually if you prefer. The following 2 sub-sections explain both approaches. + +#### I used your ansible scripts and prefer to have automated checks + +Then the easiest way is to use [this ansible playbook](https://github.com/wireapp/wire-server-deploy/blob/develop/ansible/cassandra-verify-ntp.yml) + +#### I am not using ansible and like to SSH into hosts and checking things manually + +The following shows how to check for existing servers connected to (assumes `ntpq` is installed) + +```sh +ntpq -pn +``` + +which should yield something like this: + +```sh + remote refid st t when poll reach delay offset jitter +============================================================================== + time.example. .POOL. 
16 p - 64 0 0.000 0.000 0.000
++ 2 u 498 512 377 0.759 0.039 0.081
+* 2 u 412 512 377 1.251 -0.670 0.063
+```
+
+If your output shows \_ONLY\_ the entry with a `.POOL.` as `refid` and a lot of 0s, something is probably wrong, i.e.:
+
+```sh
+ remote refid st t when poll reach delay offset jitter
+==============================================================================
+ time.example. .POOL. 16 p - 64 0 0.000 0.000 0.000
+```
+
+What should you do if this is the case? Ensure that `ntp` is installed and that the servers in the pool (typically at `/etc/ntp.conf`) are reachable.
+
+
+(logrotation-check)=
+
+## Logs and Data Protection checks
+
+On Wire.com, we keep logs for a maximum of 72 hours as described in the [privacy whitepaper](https://wire.com/en/security/).
+
+We recommend you do the same and limit the amount of logs kept on your servers.
+
+### How can I see how far in the past access logs are still available on my servers?
+
+Look at the timestamps of your earliest nginz logs:
+
+```sh
+export NAMESPACE=default # this may be 'default' or 'wire'
+kubectl -n "$NAMESPACE" get pods | grep nginz
+# choose one of the resulting names, it might be named e.g. nginz-6d75755c5c-h9fwn
+kubectl -n "$NAMESPACE" logs <nginz-pod-name> -c nginz | head -10
+```
+
+If the timestamp is more than 3 days in the past, your logs are kept for an unnecessarily long time and you should configure log rotation.
+
+#### I used your ansible scripts and prefer to have the default 72 hour maximum log availability configured automatically
+
+You can use [the kubernetes_logging.yml ansible playbook](https://github.com/wireapp/wire-server-deploy/blob/develop/ansible/kubernetes_logging.yml).
+
+#### I am not using ansible and like to SSH into hosts and configure things manually
+
+SSH into one of your kubernetes worker machines.
+
+If you installed as per the instructions on docs.wire.com, then the default logging strategy is `json-file` with `--log-opt max-size=50m --log-opt max-file=5` storing logs in files under `/var/lib/docker/containers/<container-id>/<container-id>-json.log`. You can check this with these commands:
+
+```sh
+docker info --format '{{.LoggingDriver}}'
+ps aux | grep log-opt
+```
+
+(Options configured in `/etc/systemd/system/docker.service.d/docker-options.conf`)
+
+The default will thus keep your logs around until reaching 250 MB per pod, which is far longer than three days. Since docker logs don't allow time-based log rotation, we can instead make use of [logrotate](https://linux.die.net/man/8/logrotate) to rotate logs for us.
+
+Create the file `/etc/logrotate.d/podlogs` with the following contents:
+
+% NOTE: in case you change these docs, also make sure to update the actual code
+% under https://github.com/wireapp/wire-server-deploy/blob/develop/ansible/kubernetes_logging.yml
+
+```
+"/var/lib/docker/containers/*/*.log"
+{
+  daily
+  missingok
+  rotate 2
+  maxage 1
+  copytruncate
+  nocreate
+  nocompress
+}
+```
+
+Repeat the same for all the other kubernetes worker machines; the file needs to exist on all of them.
+
+There should already be a cron job for logrotate for other parts of the system, so this should be sufficient; you can stop here.
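Before relying on the nightly cron run, the rotation rule can be dry-run with logrotate's debug mode, which parses the config and reports what it would do without rotating anything. A sketch using a scratch copy of the rule above (`/tmp/podlogs.conf` is an arbitrary location; the `logrotate` call is commented because it requires logrotate to be installed):

```shell
# Write a scratch copy of the rule from above and sanity-check it.
cat > /tmp/podlogs.conf <<'EOF'
"/var/lib/docker/containers/*/*.log"
{
  daily
  missingok
  rotate 2
  maxage 1
  copytruncate
  nocreate
  nocompress
}
EOF
grep -c daily /tmp/podlogs.conf
# /usr/sbin/logrotate -d /tmp/podlogs.conf   # debug mode: report only, no changes
```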
+ +You can check for the cron job with: + +``` +ls /etc/cron.daily/logrotate +``` + +And you can manually run a log rotation using: + +``` +/usr/sbin/logrotate -v /etc/logrotate.conf +``` + +If you want to clear out old logs entirely now, you can force log rotation three times (again, on all kubernetes machines): + +``` +/usr/sbin/logrotate -v -f /etc/logrotate.conf +/usr/sbin/logrotate -v -f /etc/logrotate.conf +/usr/sbin/logrotate -v -f /etc/logrotate.conf +``` diff --git a/docs/src/how-to/install/prod-intro.md b/docs/src/how-to/install/prod-intro.md new file mode 100644 index 0000000000..a908c14c30 --- /dev/null +++ b/docs/src/how-to/install/prod-intro.md @@ -0,0 +1,58 @@ +# Introduction + +```{warning} +It is *strongly recommended* to have followed and completed the demo installation {ref}`helm` before continuing with this page. The demo installation is simpler, and already makes you aware of a few things you need (TLS certs, DNS, a VM, ...). +``` + +```{note} +All required dependencies for doing an installation can be found here {ref}`dependencies`. +``` + +A production installation consists of several parts: + +Part 1 - you're on your own here, and need to create a set of VMs as detailed in {ref}`planning-prod` + +Part 2 ({ref}`ansible-vms`) deals with installing components directly on a set of virtual machines, such as kubernetes itself, as well as databases. It makes use of ansible to achieve that. + +Part 3 ({ref}`helm-prod`) is similar to the demo installation, and uses the tool `helm` to install software on top of kubernetes. + +Part 4 ({ref}`configuration-options`) details other possible configuration options and settings to fit your needs. + +## What will be installed by following these parts? 
+
+- highly-available and persistent databases (cassandra, elasticsearch)
+
+- kubernetes
+
+- restund (audio/video calling) servers (see also {ref}`understand-restund`)
+
+- wire-server (API)
+
+  - user accounts, authentication, conversations
+  - assets handling (images, files, ...)
+  - notifications over websocket
+  - single-sign-on with SAML
+
+- wire-webapp
+
+  - fully functioning web client (like `https://app.wire.com`)
+
+- wire-account-pages
+
+  - user account management (a few pages relating to e.g. password reset)
+
+## What will not be installed?
+
+- notifications over native push notification via [FCM](https://firebase.google.com/docs/cloud-messaging/)/[APNS](https://developer.apple.com/notifications/)
+
+## What will not be installed by default?
+
+- 3rd party proxying - requires accounts with third-party providers
+- team-settings page for team management (including invitations, requires access to a private repository - get in touch with us for access)
+
+## Getting support
+
+[Get in touch](https://wire.com/pricing/).
+
+## Next steps for high-available production installation
+
+Your next step will be part 2, {ref}`ansible-vms`
diff --git a/docs/src/how-to/install/prod-intro.rst b/docs/src/how-to/install/prod-intro.rst
deleted file mode 100644
index 420b5fc296..0000000000
--- a/docs/src/how-to/install/prod-intro.rst
+++ /dev/null
@@ -1,60 +0,0 @@
-Introduction
-=============
-
-.. warning::
-
-   It is *strongly recommended* to have followed and completed the demo installation :ref:`helm` before continuing with this page. The demo installation is simpler, and already makes you aware of a few things you need (TLS certs, DNS, a VM, ...).
-
-.. note::
-   All required dependencies for doing an installation can be found here :ref:`dependencies`.
- -A production installation consists of several parts: - -Part 1 - you're on your own here, and need to create a set of VMs as detailed in :ref:`planning_prod` - -Part 2 (:ref:`ansible_vms`) deals with installing components directly on a set of virtual machines, such as kubernetes itself, as well as databases. It makes use of ansible to achieve that. - -Part 3 (:ref:`helm_prod`) is similar to the demo installation, and uses the tool ``helm`` to install software on top of kubernetes. - -Part 4 (:ref:`configuration_options`) details other possible configuration options and settings to fit your needs. - -What will be installed by following these parts? -^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ - -- highly-available and persistent databases (cassandra, elasticsearch) -- kubernetes -- restund (audio/video calling) servers ( see also :ref:`understand-restund`) -- wire-server (API) - - user accounts, authentication, conversations - - assets handling (images, files, ...) - - notifications over websocket - - single-sign-on with SAML - -- wire-webapp - - - fully functioning web client (like ``https://app.wire.com``) - -- wire-account-pages - - - user account management (a few pages relating to e.g. password reset) - -What will not be installed? -^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ - -- notifications over native push notification via `FCM `__/`APNS `__ - -What will not be installed by default? -^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ - -- 3rd party proxying - requires accounts with third-party providers -- team-settings page for team management (including invitations, requires access to a private repository - get in touch with us for access) - -Getting support -^^^^^^^^^^^^^^^^ - -`Get in touch `__. 
-
-Next steps for high-available production installation
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
-
-Your next step will be part 2, :ref:`ansible_vms`
diff --git a/docs/src/how-to/install/restund.md b/docs/src/how-to/install/restund.md
new file mode 100644
index 0000000000..90a616b105
--- /dev/null
+++ b/docs/src/how-to/install/restund.md
@@ -0,0 +1,80 @@
+(install-restund)=
+
+# Installing Restund
+
+## Background
+
+Restund servers allow two users on different networks to have a Wire audio or video call.
+
+Please refer to the following {ref}`section to better understand Restund and how it works <understand-restund>`.
+
+## Installation instructions
+
+To install Restund, do the following:
+
+1. In your `hosts.ini` file, in the `[restund:vars]` section, set
+   the `restund_network_interface` to the name of the interface
+   you want restund to talk to clients on. This value defaults to the
+   `default_ipv4_address`, with a fallback to `eth0`.
+2. (optional) `restund_peer_udp_advertise_addr=Y.Y.Y.Y`: set this to
+   the IP to advertise for other restund servers if different than the
+   ip on the 'restund_network_interface'. If using
+   'restund_peer_udp_advertise_addr', make sure that UDP (!) traffic
+   from any restund server (including itself) can reach that IP (for
+   `restund <-> restund` communication). This should only be necessary
+   if you're installing restund on a VM that is reachable on a public IP
+   address but the process cannot bind to that public IP address
+   directly (e.g. on AWS VPC VM). If unset, `restund <-> restund` UDP
+   traffic will default to the IP in the `restund_network_interface`.
+
+```ini
+[all]
+(...)
+restund01 ansible_host=X.X.X.X
+
+(...)
+ +[all:vars] +## Set the network interface name for restund to bind to if you have more than one network interface +## If unset, defaults to the ansible_default_ipv4 (if defined) otherwise to eth0 +restund_network_interface = eth0 + +(see `defaults/main.yml `__ for a full list of variables to change if necessary) +``` + +3. Place a copy of the PEM formatted certificate and key you are going + to use for TLS communication to the restund server in + `/tmp/tls_cert_and_priv_key.pem`. Remove it after you have + completed deploying restund with ansible. +4. Use Ansible to actually install using the restund playbook: + +```bash +ansible-playbook -i hosts.ini restund.yml -vv +``` + +For information on setting up and using ansible-playbook to install Wire components, see {ref}`this page `. + +### Private Subnets + +By default, Restund is configured with a firewall that filters-out CIDR networks. + +If you need to enable Restund to connect to a CIDR addressed host or network, you can specify a list of private subnets in [CIDR format](https://en.wikipedia.org/wiki/Classless_Inter-Domain_Routing), which will override Restund's firewall's default settings of filtering-out CIDR networks. 
+ +You do this by setting the `restund_allowed_private_network_cidrs` option of the `[restund:vars]` section of the ansible inventory file ([for example this file](https://github.com/wireapp/wire-server-deploy/blob/master/ansible/inventory/prod/hosts.example.ini#L72)): + +```ini +[restund:vars] +## Set the network interface name for restund to bind to if you have more than one network interface +## If unset, defaults to the ansible_default_ipv4 (if defined) otherwise to eth0 +# restund_network_interface = eth0 +restund_allowed_private_network_cidrs=192.168.0.1/32 +``` + +This is needed, for example, to allow talking to the logging server if it is on a separate network: + +The private subnets only need to override the RFC-defined private networks, which Wire firewalls off by default: + +- 192.168.x.x +- 10.x.x.x +- 172.16.x.x - 172.31.x.x +- Etc... diff --git a/docs/src/how-to/install/restund.rst b/docs/src/how-to/install/restund.rst deleted file mode 100644 index 732f0d0e26..0000000000 --- a/docs/src/how-to/install/restund.rst +++ /dev/null @@ -1,88 +0,0 @@ -.. _install-restund: - -Installing Restund -================== - -Background -~~~~~~~~~~ - -Restund servers allow two users on different networks to have a Wire audio or video call. - -Please refer to the following :ref:`section to better understand Restund and how it works `. - -Installation instructions -~~~~~~~~~~~~~~~~~~~~~~~~~ - -To Install Restund, do the following: - - -1. In your ``hosts.ini`` file, in the ``[restund:vars]`` section, set - the ``restund_network_interface`` to the name of the interface - you want restund to talk to clients on. This value defaults to the - ``default_ipv4_address``, with a fallback to ``eth0``. - -2. (optional) ``restund_peer_udp_advertise_addr=Y.Y.Y.Y``: set this to - the IP to advertise for other restund servers if different than the - ip on the 'restund_network_interface'. If using - 'restund_peer_udp_advertise_addr', make sure that UDP (!) 
traffic - from any restund server (including itself) can reach that IP (for - ``restund <-> restund`` communication). This should only be necessary - if you're installing restund on a VM that is reachable on a public IP - address but the process cannot bind to that public IP address - directly (e.g. on AWS VPC VM). If unset, ``restund <-> restund`` UDP - traffic will default to the IP in the ``restund_network_interface``. - -.. code:: ini - - [all] - (...) - restund01 ansible_host=X.X.X.X - - (...) - - [all:vars] - ## Set the network interface name for restund to bind to if you have more than one network interface - ## If unset, defaults to the ansible_default_ipv4 (if defined) otherwise to eth0 - restund_network_interface = eth0 - - (see `defaults/main.yml `__ for a full list of variables to change if necessary) - -3. Place a copy of the PEM formatted certificate and key you are going - to use for TLS communication to the restund server in - ``/tmp/tls_cert_and_priv_key.pem``. Remove it after you have - completed deploying restund with ansible. - -4. Use Ansible to actually install using the restund playbook: - -.. code:: bash - - ansible-playbook -i hosts.ini restund.yml -vv - -For information on setting up and using ansible-playbook to install Wire components, see :ref:`this page `. - -Private Subnets ---------------- - -By default, Restund is configured with a firewall that filters-out CIDR networks. - -If you need to enable Restund to connect to a CIDR addressed host or network, you can specify a list of private subnets in `CIDR format `__, which will override Restund's firewall's default settings of filtering-out CIDR networks. - -You do this by setting the ``restund_allowed_private_network_cidrs`` option of the ``[restund:vars]`` section of the ansible inventory file (`for example this file `__): - -.. 
code:: ini
-
-   [restund:vars]
-   ## Set the network interface name for restund to bind to if you have more than one network interface
-   ## If unset, defaults to the ansible_default_ipv4 (if defined) otherwise to eth0
-   # restund_network_interface = eth0
-   restund_allowed_private_network_cidrs=192.168.0.1/32
-
-This is needed, for example, to allow talking to the logging server if it is on a separate network:
-
-The private subnets only need to override the RFC-defined private networks, which Wire firewalls off by default:
-
-* 192.168.x.x
-* 10.x.x.x
-* 172.16.x.x - 172.31.x.x
-* Etc...
-
diff --git a/docs/src/how-to/install/sft.rst b/docs/src/how-to/install/sft.md
similarity index 67%
rename from docs/src/how-to/install/sft.rst
rename to docs/src/how-to/install/sft.md
index 2824d6827a..e4560c7216 100644
--- a/docs/src/how-to/install/sft.rst
+++ b/docs/src/how-to/install/sft.md
@@ -1,125 +1,116 @@
-.. _install-sft:
+(install-sft)=

-Installing Conference Calling 2.0 (aka SFT)
-===========================================
+# Installing Conference Calling 2.0 (aka SFT)

-Background
-~~~~~~~~~~
+## Background

-Please refer to the following :ref:`section to better understand SFT and how it works `.
+Please refer to the following {ref}`section to better understand SFT and how it works `.

+### As part of the wire-server umbrella chart

-As part of the wire-server umbrella chart
------------------------------------------

+`sftd` will be installed as part of the `wire-server` umbrella chart if you set `tags.sftd: true`

-`sftd`` will be installed as part of the ``wire-server`` umbrella chart if you set `tags.sftd: true`

+In your `./values/wire-server/values.yaml` file you should set the following settings:

-In your ``./values/wire-server/values.yaml`` file you should set the following settings:

+```yaml
+tags:
+  sftd: true

-.. 
code:: yaml +sftd: + host: sftd.example.com # Replace example.com with your domain + allowOrigin: webapp.example.com # Should be the address you used for the webapp deployment +``` - tags: - sftd: true +In your `secrets.yaml` you should set the TLS keys for sftd domain: - sftd: - host: sftd.example.com # Replace example.com with your domain - allowOrigin: webapp.example.com # Should be the address you used for the webapp deployment +```yaml +sftd: + tls: + crt: | + + key: | + +``` -In your ``secrets.yaml`` you should set the TLS keys for sftd domain: +You should also make sure that you configure brig to know about the SFT server in your `./values/wire-server/values.yaml` file: -.. code:: yaml - - sftd: - tls: - crt: | - - key: | - - -You should also make sure that you configure brig to know about the SFT server in your ``./values/wire-server/values.yaml`` file: - -.. code:: yaml - - brig: - optSettings: - setSftStaticUrl: "https://sftd.example.com:443" +```yaml +brig: + optSettings: + setSftStaticUrl: "https://sftd.example.com:443" +``` Now you can deploy as usual: -.. code:: shell +```shell +helm upgrade wire-server wire/wire-server --values ./values/wire-server/values.yaml +``` - helm upgrade wire-server wire/wire-server --values ./values/wire-server/values.yaml - - -Standalone ----------- +### Standalone The SFT component is also shipped as a separate helm chart. Installation is similar to installing -the charts as in :ref:`helm_prod`. +the charts as in {ref}`helm-prod`. Some people might want to run SFT separately, because the deployment lifecycle for the SFT is a bit more intricate. For example, -if you want to avoid dropping calls during an upgrade, you'd set the ``terminationGracePeriodSeconds`` of the SFT to a high number, to wait -for calls to drain before updating to the new version (See `technical documentation `__). 
that would cause your otherwise snappy upgrade of the ``wire-server`` chart to now take a long time, as it waits for all
-the SFT servers to drain. If this is a concern for you, we advice installing ``sftd`` as a separate chart.
-
-It is important that you disable ``sftd`` in the ``wire-server`` umbrella chart, by setting this in your ``./values/wire-server/values.yaml`` file
+if you want to avoid dropping calls during an upgrade, you'd set the `terminationGracePeriodSeconds` of the SFT to a high number, to wait
+for calls to drain before updating to the new version (see the [technical documentation](https://github.com/wireapp/wire-server/blob/develop/charts/sftd/README.md)). That would cause your otherwise snappy upgrade of the `wire-server` chart to take a long time, as it waits for all
+the SFT servers to drain. If this is a concern for you, we advise installing `sftd` as a separate chart.

-.. code:: yaml
+It is important that you disable `sftd` in the `wire-server` umbrella chart, by setting this in your `./values/wire-server/values.yaml` file:

-   tags:
-      sftd: false
+```yaml
+tags:
+  sftd: false
+```

+By default `sftd` doesn't need many options to be set, so we define them inline. However, you could of course also set these values in a `values.yaml` file.

-By default ``sftd`` doesn't need to set that many options, so we define them inline. However, you could of course also set these values in a ``values.yaml`` file.

+SFT will deploy a Kubernetes Ingress on `$SFTD_HOST`. Make sure that the domain name `$SFTD_HOST` points to your ingress IP as set up in {ref}`helm-prod`. The SFT also needs to be made aware of the domain name of the webapp that you set up in {ref}`helm-prod` for setting up the appropriate CSP headers.

-SFT will deploy a Kubernetes Ingress on ``$SFTD_HOST``. Make sure that the domain name ``$SFTD_HOST`` points to your ingress IP as set up in :ref:`helm_prod`. 
The SFT also needs to be made aware of the domain name of the webapp that you set up in :ref:`helm_prod` for setting up the appropriate CSP headers.
-
-.. code:: shell
-
-   export SFTD_HOST=sftd.example.com
-   export WEBAPP_HOST=webapp.example.com
+```shell
+export SFTD_HOST=sftd.example.com
+export WEBAPP_HOST=webapp.example.com
+```

 Now you can install the chart:

-.. code:: shell
-
-   helm upgrade --install sftd wire/sftd --set
-   helm install sftd wire/sftd \
-   --set host=$SFTD_HOST \
-   --set allowOrigin=https://$WEBAPP_HOST \
-   --set-file tls.crt=/path/to/tls.crt \
-   --set-file tls.key=/path/to/tls.key
-
-You should also make sure that you configure brig to know about the SFT server, in the ``./values/wire-server/values.yaml`` file:
+```shell
+helm upgrade --install sftd wire/sftd \
+  --set host=$SFTD_HOST \
+  --set allowOrigin=https://$WEBAPP_HOST \
+  --set-file tls.crt=/path/to/tls.crt \
+  --set-file tls.key=/path/to/tls.key
+```

-.. code:: yaml
+You should also make sure that you configure brig to know about the SFT server, in the `./values/wire-server/values.yaml` file:

-   brig:
-   optSettings:
-      setSftStaticUrl: "https://sftd.example.com:443"
+```yaml
+brig:
+  optSettings:
+    setSftStaticUrl: "https://sftd.example.com:443"
+```

-And then roll-out the change to the ``wire-server`` chart
+And then roll out the change to the `wire-server` chart:

-.. code:: shell
+```shell
+helm upgrade wire-server wire/wire-server --values ./values/wire-server/values.yaml
+```

-   helm upgrade wire-server wire/wire-server --values ./values/wire-server/values.yaml
+For more advanced setups please refer to the [technical documentation](https://github.com/wireapp/wire-server/blob/develop/charts/sftd/README.md).

-For more advanced setups please refer to the `technical documentation `__.

+(install-sft-firewall-rules)=
+### Firewall rules

-.. 
_install-sft-firewall-rules:
-
-Firewall rules
---------------
-
-The SFT allocates media addresses in the UDP :ref:`default port range `. Ingress and
+The SFT allocates media addresses in the UDP {ref}`default port range `. Ingress and
 egress traffic should be allowed for this range. Furthermore, the SFT needs to be
-able to reach the :ref:`Restund server `, as it uses STUN and TURN in cases the client
+able to reach the {ref}`Restund server `, as it uses STUN and TURN in case the client
 cannot directly connect to the SFT. In practice this means the SFT should
-allow ingress and egress traffic on the UDP :ref:`default port range ` from and
-to both, clients and :ref:`Restund servers `.
+allow ingress and egress traffic on the UDP {ref}`default port range ` from and
+to both clients and {ref}`Restund servers `.

-*For more information on this port range, how to read and change it, and how to configure your firewall, please see* :ref:`this note `.
+*For more information on this port range, how to read and change it, and how to configure your firewall, please see* {ref}`this note `.

 The SFT also has an HTTP interface for initializing (allocation) or joining (signaling) a call. This is exposed through the ingress controller as an HTTPS service.
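The default media port range referenced above corresponds to the Linux kernel's local (ephemeral) port range. As a quick sanity check on a Linux SFT host (assuming a standard distribution with `/proc` mounted), you can inspect the range the kernel is currently using:

```shell
# Print the kernel's local port range; typically "32768 60999" on recent kernels
cat /proc/sys/net/ipv4/ip_local_port_range
```

The exact bounds vary by distribution; your firewall rules for SFT media traffic should cover whatever range this reports.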
@@ -131,6 +122,7 @@ An SFT instance does **not** communicate with other SFT instances, TURN does tal

 Recapitulation table:

+```{eval-rst}
 +----------------------------+-------------+-------------+-----------+----------+-----------------------------------------------------------------------------+--------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
 | Name | Origin | Destination | Direction | Protocol | Ports | Action (Policy) | Description |
 +============================+=============+=============+===========+==========+=============================================================================+======================================+===============================================================================================================================================================================================+
@@ -146,6 +138,6 @@ Recapitulation table:
 +----------------------------+-------------+-------------+-----------+----------+-----------------------------------------------------------------------------+--------------------------------------+
 | | Allowing SFT media egress | Here | Any | Outgoing | UDP | 32768-61000 | Allow | |
 +----------------------------+-------------+-------------+-----------+----------+-----------------------------------------------------------------------------+--------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
+```
-
-*For more information, please refer to the source code of the Ansible role:* `sft-server `__.
+*For more information, please refer to the source code of the Ansible role:* [sft-server](https://github.com/wireapp/ansible-sft/blob/develop/roles/sft-server/tasks/traffic.yml).
diff --git a/docs/src/how-to/install/tls.md b/docs/src/how-to/install/tls.md
new file mode 100644
index 0000000000..f3a044597a
--- /dev/null
+++ b/docs/src/how-to/install/tls.md
@@ -0,0 +1,52 @@
+(tls)=
+
+# Configure TLS ciphers
+
+The following table lists recommended ciphers for TLS server setups, which should be used in Wire deployments.
+
+| Cipher                        | Version | Wire default | [BSI TR-02102-2] | [Mozilla TLS Guideline] |
+| ----------------------------- | ------- | ------------ | ---------------- | ----------------------- |
+| ECDHE-ECDSA-AES128-GCM-SHA256 | TLSv1.2 | no           | **yes**          | intermediate            |
+| ECDHE-RSA-AES128-GCM-SHA256   | TLSv1.2 | no           | **yes**          | intermediate            |
+| ECDHE-ECDSA-AES256-GCM-SHA384 | TLSv1.2 | **yes**      | **yes**          | intermediate            |
+| ECDHE-RSA-AES256-GCM-SHA384   | TLSv1.2 | **yes**      | **yes**          | intermediate            |
+| ECDHE-ECDSA-CHACHA20-POLY1305 | TLSv1.2 | no           | no               | intermediate            |
+| ECDHE-RSA-CHACHA20-POLY1305   | TLSv1.2 | no           | no               | intermediate            |
+| TLS_AES_128_GCM_SHA256        | TLSv1.3 | **yes**      | **yes**          | **modern**              |
+| TLS_AES_256_GCM_SHA384        | TLSv1.3 | **yes**      | **yes**          | **modern**              |
+| TLS_CHACHA20_POLY1305_SHA256  | TLSv1.3 | no           | no               | **modern**              |
+
+```{note}
+If you enable TLSv1.3, OpenSSL always enables the three default TLSv1.3 cipher suites.
+Therefore it is not necessary to add them to OpenSSL-based configurations.
+```
+
+(ingress-traffic)=
+
+## Ingress Traffic (wire-server)
+
+The list of TLS ciphers for incoming requests is limited by default to the [following](https://github.com/wireapp/wire-server/blob/master/charts/nginx-ingress-controller/values.yaml#L7) (for general server certificates, both for federation and the client API), and can be overridden on your installation if needed.
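As a sketch of such an override, the upstream ingress-nginx controller reads `ssl-ciphers` and `ssl-protocols` from its ConfigMap. The top-level key below is an assumption about how the wrapper chart nests those settings; check the linked `values.yaml` for the exact structure your chart version uses:

```yaml
# Hypothetical override values; the cipher names come from the table above.
nginx-ingress-controller:
  controller:
    config:
      ssl-protocols: "TLSv1.2 TLSv1.3"
      ssl-ciphers: "ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384"
```

Apply it like any other override, e.g. with `helm upgrade ... -f` pointing at the file containing this fragment.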
+
+## Egress Traffic (wire-server/federation)
+
+The list of TLS ciphers for outgoing federation requests is currently hardcoded; the list is [here](https://github.com/wireapp/wire-server/blob/master/services/federator/src/Federator/Remote.hs#L164-L180).
+
+## SFTD (ansible)
+
+The list of TLS ciphers for incoming SFT requests (and metrics) is defined in the ansible templates [sftd.vhost.conf.j2](https://github.com/wireapp/ansible-sft/blob/develop/roles/sft-server/templates/sftd.vhost.conf.j2#L19) and [metrics.vhost.conf.j2](https://github.com/wireapp/ansible-sft/blob/develop/roles/sft-server/templates/metrics.vhost.conf.j2#L13).
+
+## SFTD (kubernetes)
+
+SFTD deployed via kubernetes uses `kubernetes.io/ingress` for ingress traffic, configured in [ingress.yaml](https://github.com/wireapp/wire-server/blob/develop/charts/sftd/templates/ingress.yaml).
+Kubernetes-based deployments make use of the settings from {ref}`ingress-traffic`.
+
+## Restund (ansible)
+
+The list of TLS ciphers for "TLS over TCP" TURN (and metrics) is defined in the ansible templates [nginx-stream.conf.j2](https://github.com/wireapp/ansible-restund/blob/master/templates/nginx-stream.conf.j2#L25) and [nginx-metrics.conf.j2](https://github.com/wireapp/ansible-restund/blob/master/templates/nginx-metrics.conf.j2#L15).
+
+## Restund (kubernetes)
+
+The [Kubernetes restund](https://github.com/wireapp/wire-server/tree/develop/charts/restund) deployment does not provide TLS connectivity.
+
+[bsi tr-02102-2]: https://www.bsi.bund.de/SharedDocs/Downloads/EN/BSI/Publications/TechGuidelines/TG02102/BSI-TR-02102-2.pdf
+[mozilla tls guideline]: https://wiki.mozilla.org/Security/Server_Side_TLS
diff --git a/docs/src/how-to/install/tls.rst b/docs/src/how-to/install/tls.rst
deleted file mode 100644
index 8adac3d525..0000000000
--- a/docs/src/how-to/install/tls.rst
+++ /dev/null
@@ -1,60 +0,0 @@
-.. 
_tls: - -Configure TLS ciphers -======================= - -The following table lists recommended ciphers for TLS server setups, which should be used in wire deployments. - - -============================= ======= ============ ================= ======================== -Cipher Version Wire default `BSI TR-02102-2`_ `Mozilla TLS Guideline`_ -============================= ======= ============ ================= ======================== -ECDHE-ECDSA-AES128-GCM-SHA256 TLSv1.2 no **yes** intermediate -ECDHE-RSA-AES128-GCM-SHA256 TLSv1.2 no **yes** intermediate -ECDHE-ECDSA-AES256-GCM-SHA384 TLSv1.2 **yes** **yes** intermediate -ECDHE-RSA-AES256-GCM-SHA384 TLSv1.2 **yes** **yes** intermediate -ECDHE-ECDSA-CHACHA20-POLY1305 TLSv1.2 no no intermediate -ECDHE-RSA-CHACHA20-POLY1305 TLSv1.2 no no intermediate -TLS_AES_128_GCM_SHA256 TLSv1.3 **yes** **yes** **modern** -TLS_AES_256_GCM_SHA384 TLSv1.3 **yes** **yes** **modern** -TLS_CHACHA20_POLY1305_SHA256 TLSv1.3 no no **modern** -============================= ======= ============ ================= ======================== - - -.. _bsi tr-02102-2: https://www.bsi.bund.de/SharedDocs/Downloads/EN/BSI/Publications/TechGuidelines/TG02102/BSI-TR-02102-2.pdf -.. _mozilla tls guideline: https://wiki.mozilla.org/Security/Server_Side_TLS - -.. note:: - If you enable TLSv1.3, openssl does always enable the three default cipher suites for TLSv1.3. - Therefore it is not necessary to add them to openssl based configurations. - -.. _ingress traffic: - -Ingress Traffic (wire-server) ------------------------------ -The list of TLS ciphers for incoming requests is limited by default to the `following `_ (for general server-certificates, both for federation and client API), and can be overridden on your installation if needed. - - -Egress Traffic (wire-server/federation) ---------------------------------------- -The list of TLS ciphers for outgoing federation requests is currently hardcoded, the list is `here `_. 
- - -SFTD (ansible) --------------- -The list of TLS ciphers for incoming SFT requests (and metrics) are defined in ansible templates `sftd.vhost.conf.j2 `_ and `metrics.vhost.conf.j2 `_. - -SFTD (kubernetes) ------------------ -SFTD deployed via kubernetes uses ``kubernetes.io/ingress`` for ingress traffic, configured in `ingress.yaml `_. -Kubernetes based deployments make use of the settings from :ref:`ingress traffic`. - - -Restund (ansible) ------------------ - -The list of TLS ciphers for "TLS over TCP" TURN (and metrics) are defined in ansible templates `nginx-stream.conf.j2 `_ and `nginx-metrics.conf.j2 `_. - -Restund (kubernetes) --------------------- -`Kubernetes restund `_ deployment does not provide TLS connectivity. diff --git a/docs/src/how-to/install/troubleshooting.md b/docs/src/how-to/install/troubleshooting.md new file mode 100644 index 0000000000..7aa9f80479 --- /dev/null +++ b/docs/src/how-to/install/troubleshooting.md @@ -0,0 +1,265 @@ +# Troubleshooting during installation + +## Problems with CORS on the web based applications (webapp, team-settings, account-pages) + +If you have installed wire-server, but the web application page in your browser has connection problems and throws errors in the console such as `"Refused to connect to 'https://assets.example.com' because it violates the following Content Security Policies"`, make sure to check that you have configured the `CSP_EXTRA_` environment variables. + +In the file that you use as override when running `helm install/update -f ` (using the webapp as an example): + +```yaml +webapp: + # ... other settings... + envVars: + # ... other environment variables ... 
+    CSP_EXTRA_CONNECT_SRC: "https://*.example.com, wss://*.example.com"
+    CSP_EXTRA_IMG_SRC: "https://*.example.com"
+    CSP_EXTRA_SCRIPT_SRC: "https://*.example.com"
+    CSP_EXTRA_DEFAULT_SRC: "https://*.example.com"
+    CSP_EXTRA_FONT_SRC: "https://*.example.com"
+    CSP_EXTRA_FRAME_SRC: "https://*.example.com"
+    CSP_EXTRA_MANIFEST_SRC: "https://*.example.com"
+    CSP_EXTRA_OBJECT_SRC: "https://*.example.com"
+    CSP_EXTRA_MEDIA_SRC: "https://*.example.com"
+    CSP_EXTRA_PREFETCH_SRC: "https://*.example.com"
+    CSP_EXTRA_STYLE_SRC: "https://*.example.com"
+    CSP_EXTRA_WORKER_SRC: "https://*.example.com"
+```
+
+For more info, you can have a look at the respective charts' values files, i.e.:
+
+> - [charts/account-pages/values.yaml](https://github.com/wireapp/wire-server/blob/develop/charts/account-pages/values.yaml)
+> - [charts/team-settings/values.yaml](https://github.com/wireapp/wire-server/blob/develop/charts/team-settings/values.yaml)
+> - [charts/webapp/values.yaml](https://github.com/wireapp/wire-server/blob/develop/charts/webapp/values.yaml)
+
+## Problems with ansible and python versions
+
+If, for instance, the following fails:
+
+```
+ansible all -i hosts.ini -m shell -a "echo hello"
+```
+
+and your target machine only has python 3 (not python 2.7), you can tell ansible to use python 3 by default, by specifying `ansible_python_interpreter`:
+
+```ini
+# hosts.ini
+
+[all]
+server1 ansible_host=1.2.3.4
+
+
+[all:vars]
+ansible_python_interpreter=/usr/bin/python3
+```
+
+(python 3 may not be supported by all ansible modules yet)
+
+## Flaky issues with Cassandra (failed QUORUMs, etc.)
+
+Cassandra is *very* picky about time! Ensure that NTP is properly set up on all nodes. Particularly for Cassandra, *DO NOT* use anything other than ntp. Here are some helpful blogs that explain why:
+
+> - <https://blog.rapid7.com/2014/03/14/synchronizing-clocks-in-a-cassandra-cluster-pt-1-the-problem/>
+> - <https://blog.rapid7.com/2014/03/17/synchronizing-clocks-in-a-cassandra-cluster-pt-2-solutions/>
+> - <https://www.digitalocean.com/community/tutorials/how-to-set-up-time-synchronization-on-ubuntu-16-04>
+
+How can I ensure that I have correctly set up NTP on my machine(s)? 
Have a look at [this ansible playbook](https://github.com/wireapp/wire-server-deploy/blob/develop/ansible/cassandra-verify-ntp.yml)
+
+## I deployed `demo-smtp` but I'm not receiving any verification emails
+
+1. Check whether brig deployed successfully (brig pod(s) should be in state *Running*)
+
+   ```
+   kubectl get pods -o wide
+   ```
+
+2. Inspect Brig logs
+
+   ```
+   kubectl logs $BRIG_POD_NAME
+   ```
+
+3. The receiving email server might refuse to accept any email sent by the `demo-smtp` server, due to not being
+   a trusted origin. You may want to set up one of the following email verification mechanisms.
+
+- [SPF](https://en.wikipedia.org/wiki/Sender_Policy_Framework)
+- [DKIM](https://en.wikipedia.org/wiki/DomainKeys_Identified_Mail)
+- [DMARC](https://en.wikipedia.org/wiki/DMARC)
+
+4. You may want to adjust the SMTP configuration for Brig (`wire-server/[values,secrets].yaml`).
+
+```yaml
+brig:
+  config:
+    smtp:
+      host: 'demo-smtp'
+      port: 25
+      connType: 'plain'
+```
+
+```yaml
+brig:
+  secrets:
+    smtpPassword: dummyPassword
+```
+
+(Don't forget to apply the changes with `helm upgrade wire-server wire/wire-server -f values.yaml -f secrets.yaml`)
+
+## I deployed `demo-smtp` and I want to skip email configuration and retrieve verification codes directly
+
+If the only thing you need demo-smtp for is sending yourself verification codes to create a test account, it might be simpler and faster to skip SMTP configuration and retrieve the code internally right after it is sent, while it is in the outbound email queue.
+
+To do this, create a user/account/team, or if you already have one, click on `Resend Code`:
+
+```{figure} img/code-input.png
+The code input interface
+```
+
+Then run the following command:
+
+```
+kubectl exec $(kubectl get pod -lapp=demo-smtp | grep demo | awk '{print $1;}') -- sh -c 'cat /var/spool/exim4/input/* | grep -Po "^\\d{6}$" '
+```
+
+Or step by step:
+
+1. 
Get the name of the pod
+
+   ```
+   kubectl get pod -lapp=demo-smtp
+   ```
+
+Which will give you a result that looks something like this
+
+```
+> kubectl get pod -lapp=demo-smtp
+NAME                         READY   STATUS    RESTARTS   AGE
+demo-smtp-85557f6877-qxk2p   1/1     Running   0          80m
+```
+
+In which case, the pod name is `demo-smtp-85557f6877-qxk2p`, which replaces `<pod-name>` in the next command.
+
+2. Then get the content of emails and extract the code
+
+   ```
+   kubectl exec <pod-name> -- sh -c 'head -n 15 /var/spool/exim4/input/* '
+   ```
+
+Which will give you the content of sent emails, including the code
+
+```
+> kubectl exec demo-smtp-85557f6877-qxk2p -- sh -c 'head -n 15 /var/spool/exim4/input/* '
+==> /var/spool/exim4/input/1mECxm-000068-28-D <==
+1mECxm-000068-28-D
+--Y3mymuwB5Y
+Content-Type: text/plain; charset=utf-8
+Content-Transfer-Encoding: quoted-printable
+[https://wire=2Ecom/p/img/email/logo-email-black=2Epng]
+VERIFY YOUR EMAIL
+myemail@gmail=2Ecom was used to register on Wire=2E Enter this code to v=
+erify your email and create your account=2E
+022515
+```
+
+This means the code is `022515`; simply enter it in the interface.
+
+If the email has already been sent out, it's possible the queue will be empty.
+
+If that is the case, simply click the "Resend Code" link in the interface, then quickly re-run the command; a new email should now be present.
+
+## Obtaining Brig logs, and the format of different team/user events
+
+To obtain brig logs, simply run:
+
+```
+kubectl logs $(kubectl get pods | grep brig | awk '{print $1;}' | head -n 1)
+```
+
+You will get log entries for the various types of events that happen, for example:
+
+1. User creation
+
+   ```
+   {"user":"24bdd52e-af33-400c-8e47-d16bf8695dbd","request":"c0575ff5a2d61bfc2be21e77260fccab","msgs":["I","Creating user"]}
+   ```
+
+2. 
Activation key creation + + ``` + {"activation.code":"949721","activation.key":"p8o032Ljqhjgcea9R0AAnOeiUniGm63BrY9q_aeS1Cc=","request":"c0575ff5a2d61bfc2be21e77260fccab","msgs":["I","Activating"]} + ``` + +3. Activation of a new user + + ``` + {"user":"24bdd52e-af33-400c-8e47-d16bf8695dbd","request":"c0575ff5a2d61bfc2be21e77260fccab","msgs":["I","User activated"]} + ``` + +4. User indexing + + ``` + {"user":"24bdd52e-af33-400c-8e47-d16bf8695dbd","logger":"index.brig","msgs":["I","Indexing user"]} + ``` + +5. Team creation + + ``` + {"email_sha256":"a7ca34df62e3aa18e071e6bd4740009ce7a25278869badc1ad8f6afda792d427","team":"6ef03a2b-34b5-4b65-8d72-1e4fc7697553","user":"24bdd52e-af33-400c-8e47-d16bf8695dbd","module":"Brig.API.Public","fn":"Brig.API.Public.createUser","request":"c0575ff5a2d61bfc2be21e77260fccab","msgs":["I","Sucessfully created user"]} + ``` + +6. Invitation sent + + ``` + {"invitation_code":"hJuh1C1PzMkgtesAYZZ4SZrP5xO-xM_m","email_sha256":"eef48a690436699c653110387455a4afe93ce29febc348acd20f6605787956e6","team":"6ef03a2b-34b5-4b65-8d72-1e4fc7697553","module":"Brig.Team.API","fn":"Brig.Team.API.createInvitationPublic","request":"c43440074629d802a199464dd892cd92","msgs":["I","Succesfully created invitation"]} + ``` + +## Diagnosing and addressing bad network/disconnect issues + +### Diagnosis + +If you are experiencing bad network/disconnection issues, here is how to obtain the cause from the client log files: + +In the Web client, the connection state handler logs the disconnected state as reported by WebRTC as: + +``` +flow(...): connection_handler: disconnected, starting disconnect timer +``` + +On mobile, the output in the log is slightly different: + +``` +pf(...): ice connection state: Disconnected +``` + +And when the timer expires and the connection is not re-established: + +``` +ecall(...): mf_restart_handler: triggering restart due to network drop +``` + +If the attempt to reconnect then fails you will likely see the following: + +``` 
+ecall(...): connection timeout after 10000 milliseconds
+```
+
+If the connection to the SFT ({ref}`understand-sft`) server is considered lost due to missing ping messages from a non-functioning or delayed data channel, or a failure to receive/decrypt media, you will see:
+
+```
+ccall(...): reconnect
+```
+
+This is then followed by these values:
+
+```
+cp: received CONFPART message YES/NO
+da: decrypt attempted YES/NO
+ds: decrypt successful YES/NO
+att: number of reconnect attempts
+p: the expected ping (how many pings have not returned)
+```
+
+### Configuration
+
+Question: Are the connection values for bad networks/disconnects configurable on-prem?
+
+Answer: The values are not currently configurable; they are built into the clients at compile time. We do have a mechanism for sending calling configs to the clients, but these values are not currently part of it.
diff --git a/docs/src/how-to/install/troubleshooting.rst b/docs/src/how-to/install/troubleshooting.rst
deleted file mode 100644
index 79adc61f52..0000000000
--- a/docs/src/how-to/install/troubleshooting.rst
+++ /dev/null
@@ -1,255 +0,0 @@
-Troubleshooting during installation
-------------------------------------
-
-Problems with CORS on the web based applications (webapp, team-settings, account-pages)
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-
-If you have installed wire-server, but the web application page in your browser has connection problems and throws errors in the console such as `"Refused to connect to 'https://assets.example.com' because it violates the following Content Security Policies"`, make sure to check that you have configured the ``CSP_EXTRA_`` environment variables.
-
-In the file that you use as override when running ``helm install/update -f `` (using the webapp as an example):
-
-.. code:: yaml
-
-   webapp:
-     # ... other settings...
-     envVars:
-       # ... other environment variables ... 
- CSP_EXTRA_CONNECT_SRC: "https://*.example.com, wss://*.example.com" - CSP_EXTRA_IMG_SRC: "https://*.example.com" - CSP_EXTRA_SCRIPT_SRC: "https://*.example.com" - CSP_EXTRA_DEFAULT_SRC: "https://*.example.com" - CSP_EXTRA_FONT_SRC: "https://*.example.com" - CSP_EXTRA_FRAME_SRC: "https://*.example.com" - CSP_EXTRA_MANIFEST_SRC: "https://*.example.com" - CSP_EXTRA_OBJECT_SRC: "https://*.example.com" - CSP_EXTRA_MEDIA_SRC: "https://*.example.com" - CSP_EXTRA_PREFETCH_SRC: "https://*.example.com" - CSP_EXTRA_STYLE_SRC: "https://*.example.com" - CSP_EXTRA_WORKER_SRC: "https://*.example.com" - -For more info, you can have a look at respective charts values files, i.e.: - - * `charts/account-pages/values.yaml `__ - * `charts/team-settings/values.yaml `__ - * `charts/webapp/values.yaml `__ - -Problems with ansible and python versions -~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ - -If for instance the following fails:: - - ansible all -i hosts.ini -m shell -a "echo hello" - -If your target machine only has python 3 (not python 2.7), you can tell ansible to use python 3 by default, by specifying `ansible_python_interpreter`: - -.. code:: ini - - # hosts.ini - - [all] - server1 ansible_host=1.2.3.4 - - - [all:vars] - ansible_python_interpreter=/usr/bin/python3 - -(python 3 may not be supported by all ansible modules yet) - - -Flaky issues with Cassandra (failed QUORUMs, etc.) -~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ - -Cassandra is *very* picky about time! Ensure that NTP is properly set up on all nodes. Particularly for Cassandra *DO NOT* use anything else other than ntp. 
Here are some helpful blogs that explain why: - - * https://blog.rapid7.com/2014/03/14/synchronizing-clocks-in-a-cassandra-cluster-pt-1-the-problem/ - * https://blog.rapid7.com/2014/03/17/synchronizing-clocks-in-a-cassandra-cluster-pt-2-solutions/ - * https://www.digitalocean.com/community/tutorials/how-to-set-up-time-synchronization-on-ubuntu-16-04 - -How can I ensure that I have correctly setup NTP on my machine(s)? Have a look at `this ansible playbook `_ - - -I deployed ``demo-smtp`` but I'm not receiving any verification emails -~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ - -1. Check whether brig deployed successfully (brig pod(s) should be in state *Running*) :: - - kubectl get pods -o wide - -2. Inspect Brig logs :: - - kubectl logs $BRING_POD_NAME - -3. The receiving email server might refuse to accept any email sent by the `demo-smtp` server, due to not being - a trusted origin. You may want to set up one of the following email verification mechanisms. - -* `SFP `__ -* `DKIM `__ -* `DMARC `__ - - -4. You may want to adjust the SMTP configuration for Brig (``wire-server/[values,secrets].yaml``). - -.. code:: yaml - - brig: - config: - smtp: - host: 'demo-smtp' - port: 25 - connType: 'plain' - - -.. code:: yaml - - brig: - secrets: - smtpPassword: dummyPassword - -(Don't forget to apply the changes with ``helm upgrade wire-server wire/wire-server -f values.yaml -f secrets.yaml``) - -I deployed ``demo-smtp`` and I want to skip email configuration and retrieve verification codes directly -~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ - -If the only thing you need demo-smtp for is sending yourself verification codes to create a test account, it might be simpler and faster to just skip SMTP configuration, and simply retrieve the code internally right after it is sent, while it is in the outbound email queue. 
- -To do this, click create a user/account/team, or if you already have, click on ``Resend Code``: - -.. figure:: img/code-input.png - - The code input interface - -Then run the following command :: - - kubectl exec $(kubectl get pod -lapp=demo-smtp | grep demo | awk '{print $1;}') -- sh -c 'cat /var/spool/exim4/input/* | grep -Po "^\\d{6}$" ' - -Or step by step: - -1. Get the name of the pod :: - - kubectl get pod -lapp=demo-smtp - -Which will give you a result that looks something like this :: - - > kubectl get pod -lapp=demo-smtp - NAME READY STATUS RESTARTS AGE - demo-smtp-85557f6877-qxk2p 1/1 Running 0 80m - -In which case, the pod name is ``demo-smtp-85557f6877-qxk2p``, which replaces in the next command. - -2. Then get the content of emails and extract the code :: - - kubectl exec -- sh -c 'head -n 15 /var/spool/exim4/input/* ' - -Which will give you the content of sent emails, including the code :: - - > kubectl exec demo-smtp-85557f6877-qxk2p -- sh -c 'head -n 15 /var/spool/exim4/input/* ' - ==> /var/spool/exim4/input/1mECxm-000068-28-D <== - 1mECxm-000068-28-D - --Y3mymuwB5Y - Content-Type: text/plain; charset=utf-8 - Content-Transfer-Encoding: quoted-printable - [https://wire=2Ecom/p/img/email/logo-email-black=2Epng] - VERIFY YOUR EMAIL - myemail@gmail=2Ecom was used to register on Wire=2E Enter this code to v= - erify your email and create your account=2E - 022515 - -This means the code is ``022515``, simply enter it in the interface. - -If the email has already been sent out, it's possible the queue will be empty. - -If that is the case, simply click the "Resend Code" link in the interface, then quickly re-send the command, a new email should now be present. 
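The grep one-liner above can also be expressed as a small script, which is easier to adapt (for example, to print only the newest code). This is a sketch under the same assumptions as the section: queued mails sit in `/var/spool/exim4/input/` and the verification code is a bare six-digit line; the function name is ours:

```python
import re

def extract_verification_codes(spool_text):
    """Find standalone six-digit codes in a dump of the exim spool files.

    Same idea as the grep above: a verification code is a line that
    consists of exactly six digits.
    """
    return re.findall(r"^\d{6}$", spool_text, flags=re.MULTILINE)

# Feed it the output of:
#   kubectl exec <pod> -- sh -c 'cat /var/spool/exim4/input/*'
sample = (
    "VERIFY YOUR EMAIL\n"
    "myemail@gmail=2Ecom was used to register on Wire=2E\n"
    "022515\n"
    "--Y3mymuwB5Y\n"
)
codes = extract_verification_codes(sample)
```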
- -Obtaining Brig logs, and the format of different team/user events -~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ - -To obtain brig logs, simply run :: - - kubectl logs $(kubectl get pods | grep brig | awk '{print $1;}' | head -n 1) - -You will get log entries for various different types of events that happen, for example: - -1. User creation :: - - {"user":"24bdd52e-af33-400c-8e47-d16bf8695dbd","request":"c0575ff5a2d61bfc2be21e77260fccab","msgs":["I","Creating user"]} - -2. Activation key creation ::  - - {"activation.code":"949721","activation.key":"p8o032Ljqhjgcea9R0AAnOeiUniGm63BrY9q_aeS1Cc=","request":"c0575ff5a2d61bfc2be21e77260fccab","msgs":["I","Activating"]} - -3. Activation of a new user :: - - {"user":"24bdd52e-af33-400c-8e47-d16bf8695dbd","request":"c0575ff5a2d61bfc2be21e77260fccab","msgs":["I","User activated"]} - -4. User indexing :: - - {"user":"24bdd52e-af33-400c-8e47-d16bf8695dbd","logger":"index.brig","msgs":["I","Indexing user"]} - -5. Team creation ::  - - {"email_sha256":"a7ca34df62e3aa18e071e6bd4740009ce7a25278869badc1ad8f6afda792d427","team":"6ef03a2b-34b5-4b65-8d72-1e4fc7697553","user":"24bdd52e-af33-400c-8e47-d16bf8695dbd","module":"Brig.API.Public","fn":"Brig.API.Public.createUser","request":"c0575ff5a2d61bfc2be21e77260fccab","msgs":["I","Sucessfully created user"]} - -6. 
Invitation sent :: - - {"invitation_code":"hJuh1C1PzMkgtesAYZZ4SZrP5xO-xM_m","email_sha256":"eef48a690436699c653110387455a4afe93ce29febc348acd20f6605787956e6","team":"6ef03a2b-34b5-4b65-8d72-1e4fc7697553","module":"Brig.Team.API","fn":"Brig.Team.API.createInvitationPublic","request":"c43440074629d802a199464dd892cd92","msgs":["I","Succesfully created invitation"]} - -Diagnosing and addressing bad network/disconnect issues -~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ - -Diagnosis -========= - -If you are experiencing bad network/disconnection issues, here is how to obtain the cause from the client log files: - -In the Web client, the connection state handler logs the disconnected state as reported by WebRTC as: - -.. code:: - - flow(...): connection_handler: disconnected, starting disconnect timer - -On mobile, the output in the log is slightly different: - -.. code:: - - pf(...): ice connection state: Disconnected - -And when the timer expires and the connection is not re-established: - -.. code:: - - ecall(...): mf_restart_handler: triggering restart due to network drop - -If the attempt to reconnect then fails you will likely see the following: - -.. code:: - - ecall(...): connection timeout after 10000 milliseconds - -If the connection to the SFT (:ref:`understand-sft`) server is considered lost due to missing ping messages from a non-functionning or delayed data channel or a failure to receive/decrypt media you will see: - -.. code:: - - ccall(...): reconnect - -Then followed by these values: - -.. code:: - - cp: received CONFPART message YES/NO - da: decrypt attempted YES/NO - ds: decrypt successful YES/NO - att: number of reconnect attempts - p: the expected ping (how many pings have not returned) - -Configuration -============= - -Question: Are the connection values for bad networks/disconnect configurable on on-prem? 
- -Answer: The values are not currently configurable, they are built into the clients at compile time, we do have a mechanism for sending calling configs to the clients but these values are not currently there. - - - - - - diff --git a/docs/src/how-to/install/version-requirements.md b/docs/src/how-to/install/version-requirements.md new file mode 100644 index 0000000000..dc9cccc8f9 --- /dev/null +++ b/docs/src/how-to/install/version-requirements.md @@ -0,0 +1,28 @@ +# Required/Supported versions + +*Updated: 26.04.2021* + +```{warning} +If you already installed Wire by using `poetry`, please refer to the +[old version](https://docs.wire.com/versions/install-with-poetry/how-to/index.html) of +the installation guide. +``` + +## Persistence + +- Cassandra: 3.11 (OpenJDK 8) +- Elasticsearch: 6.6.0 +- Minio + : - server: latest (tested v2020-03-25) + - client: latest (tested v2020-03-14) + +### Infrastructure + +- Ubuntu: 18.04 +- Docker: latest +- Kubernetes: 1.19.7 + +### Automation + +- Ansible: 2.9 +- Helm: >= v3 diff --git a/docs/src/how-to/install/version-requirements.rst b/docs/src/how-to/install/version-requirements.rst deleted file mode 100644 index 3c204404bb..0000000000 --- a/docs/src/how-to/install/version-requirements.rst +++ /dev/null @@ -1,35 +0,0 @@ -Required/Supported versions -=========================== - -*Updated: 26.04.2021* - -.. warning:: - - If you already installed Wire by using ``poetry``, please refer to the - `old version `__ of - the installation guide. 
- - -Persistence -~~~~~~~~~~~ - -- Cassandra: 3.11 (OpenJDK 8) -- Elasticsearch: 6.6.0 -- Minio - - server: latest (tested v2020-03-25) - - client: latest (tested v2020-03-14) - - -Infrastructure --------------- - -- Ubuntu: 18.04 -- Docker: latest -- Kubernetes: 1.19.7 - - -Automation ----------- - -- Ansible: 2.9 -- Helm: >= v3 diff --git a/docs/src/how-to/post-install/index.rst b/docs/src/how-to/post-install/index.rst deleted file mode 100644 index 4a7420aa23..0000000000 --- a/docs/src/how-to/post-install/index.rst +++ /dev/null @@ -1,15 +0,0 @@ -.. _checks: - -Verifying your wire-server installation -======================================= - -After a successful installation of wire-server and its components, there are some useful checks to be run to ensure the proper functioning of the system. Here's a non-exhaustive list of checks to run on the hosts: - -NOTE: This page is a work in progress, more sections to be added soon. - -.. toctree:: - :maxdepth: 1 - :glob: - - Verifying NTP - Verifying data retention for logs don't exceed 72 hours diff --git a/docs/src/how-to/post-install/logrotation-check.rst b/docs/src/how-to/post-install/logrotation-check.rst deleted file mode 100644 index 6094d6d3a3..0000000000 --- a/docs/src/how-to/post-install/logrotation-check.rst +++ /dev/null @@ -1,79 +0,0 @@ -.. _logrotation-check: - -Logs and Data Protection checks -=============================== - -On Wire.com, we keep logs for a maximum of 72 hours as described in the `privacy whitepaper `_ - -We recommend you do the same and limit the amount of logs kept on your servers. - -How can I see how far in the past access logs are still available on my servers? --------------------------------------------------------------------------------- - -Look at the timestamps of your earliest nginz logs: - -.. 
code:: sh - - export NAMESPACE=default # this may be 'default' or 'wire' - kubectl -n "$NAMESPACE" get pods | grep nginz - # choose one of the resulting names, it might be named e.g. nginz-6d75755c5c-h9fwn - kubectl -n "$NAMESPACE" logs -c nginz | head -10 - -If the timestamp is more than 3 days in the past, your logs are kept for unnecessary long amount of time and you should configure log rotation. - -I used your ansible scripts and prefer to have the default 72 hour maximum log availability configured automatically. -~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ - -You can use `the kubernetes_logging.yml ansible playbook `_ - -I am not using ansible and like to SSH into hosts and configure things manually -~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ - -SSH into one of your kubernetes worker machines. - -If you installed as per the instructions on docs.wire.com, then the default logging strategy is ``json-file`` with ``--log-opt max-size=50m --log-opt max-file=5`` storing logs in files under ``/var/lib/docker/containers//.log``. You can check this with these commands: - -.. code:: sh - - docker info --format '{{.LoggingDriver}}' - ps aux | grep log-opt - -(Options configured in ``/etc/systemd/system/docker.service.d/docker-options.conf``) - -The default will thus keep your logs around until reaching 250 MB per pod, which is far longer than three days. Since docker logs don't allow a time-based log rotation, we can instead make use of `logrotate `__ to rotate logs for us. - -Create the file ``/etc/logrotate.d/podlogs`` with the following contents: - -.. - NOTE: in case you change these docs, also make sure to update the actual code - under https://github.com/wireapp/wire-server-deploy/blob/develop/ansible/kubernetes_logging.yml -.. 
code:: - - "/var/lib/docker/containers/*/*.log" - { - daily - missingok - rotate 2 - maxage 1 - copytruncate - nocreate - nocompress - } - -Repeat the same for all the other kubernetes worker machines, the file needs to exist on all of them. - -There should already be a cron job for logrotate for other parts of the system, so this should be sufficent, you can stop here. - -You can check for the cron job with:: - - ls /etc/cron.daily/logrotate - -And you can manually run a log rotation using:: - - /usr/sbin/logrotate -v /etc/logrotate.conf - -If you want to clear out old logs entirely now, you can force log rotation three times (again, on all kubernetes machines):: - - /usr/sbin/logrotate -v -f /etc/logrotate.conf - /usr/sbin/logrotate -v -f /etc/logrotate.conf - /usr/sbin/logrotate -v -f /etc/logrotate.conf diff --git a/docs/src/how-to/post-install/ntp-check.rst b/docs/src/how-to/post-install/ntp-check.rst deleted file mode 100644 index 09b3852e62..0000000000 --- a/docs/src/how-to/post-install/ntp-check.rst +++ /dev/null @@ -1,48 +0,0 @@ -.. _ntp-check: - -NTP Checks -========== - -Ensure that NTP is properly set up on all nodes. Particularly for Cassandra **DO NOT** use anything else other than ntp. Here are some helpful blogs that explain why: - - * https://blog.rapid7.com/2014/03/14/synchronizing-clocks-in-a-cassandra-cluster-pt-1-the-problem/ - * https://www.digitalocean.com/community/tutorials/how-to-set-up-time-synchronization-on-ubuntu-16-04 - -How can I see if NTP is correctly set up? ------------------------------------------ - -This is an important part of your setup, particularly for your Cassandra nodes. You should use `ntpd` and our ansible scripts to ensure it is installed correctly - but you can still check it manually if you prefer. The following 2 sub-sections explain both approaches. 
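The manual `ntpq -pn` inspection can also be automated. A rough sketch, assuming the stock `ntpq -pn` column layout (leading peer tally character, then `remote`, `refid`, and so on); the function name and the health heuristic — at least one peer whose refid is not the `.POOL.` placeholder — are ours:

```python
def ntp_peers_look_healthy(ntpq_output):
    """True if `ntpq -pn` output lists at least one real peer.

    Header, separator, and `.POOL.` placeholder rows are ignored;
    any other peer row counts as healthy.
    """
    for line in ntpq_output.splitlines():
        # strip the leading tally character (e.g. '*' marks the system peer)
        fields = line.lstrip("*+-#xo ").split()
        if len(fields) < 10 or fields[0] == "remote":
            continue  # header, separator, or truncated line
        if fields[1] != ".POOL.":
            return True
    return False

healthy = """     remote           refid      st t when poll reach   delay   offset  jitter
==============================================================================
 time.example.    .POOL.          16 p    -   64    0    0.000    0.000   0.000
*192.0.2.10      192.0.2.1        2 u  412  512  377    1.251   -0.670   0.063
"""
pool_only = """     remote           refid      st t when poll reach   delay   offset  jitter
==============================================================================
 time.example.    .POOL.          16 p    -   64    0    0.000    0.000   0.000
"""
```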
- -I used your ansible scripts and prefer to have automated checks -~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ - -Then the easiest way is to use `this ansible playbook `_ - -I am not using ansible and like to SSH into hosts and checking things manually -~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ - -The following shows how to check for existing servers connected to (assumes `ntpq` is installed) - -.. code:: sh - - ntpq -pn - -which should yield something like this: - -.. code:: sh - - remote refid st t when poll reach delay offset jitter - ============================================================================== - time.example. .POOL. 16 p - 64 0 0.000 0.000 0.000 - + 2 u 498 512 377 0.759 0.039 0.081 - * 2 u 412 512 377 1.251 -0.670 0.063 - -if your output shows _ONLY_ the entry with a `.POOL.` as `refid` and a lot of 0s, something is probably wrong, i.e.: - -.. code:: sh - - remote refid st t when poll reach delay offset jitter - ============================================================================== - time.example. .POOL. 16 p - 64 0 0.000 0.000 0.000 - -What should you do if this is the case? Ensure that `ntp` is installed and that the servers in the pool (typically at `/etc/ntp.conf`) are reachable. diff --git a/docs/src/how-to/single-sign-on/adfs/main.md b/docs/src/how-to/single-sign-on/adfs/main.md new file mode 100644 index 0000000000..2e48fd531b --- /dev/null +++ b/docs/src/how-to/single-sign-on/adfs/main.md @@ -0,0 +1,41 @@ +# How to set up SSO integration with ADFS + +This is being used in production by some of our customers, but not +documented. We do have a few out-of-context screenshots, which we +provide here in the hope they may help. 
+ +```{image} fig-00.jpg +``` + +```{image} fig-01.jpg +``` + +```{image} fig-02.jpg +``` + +```{image} fig-03.jpg +``` + +```{image} fig-04.jpg +``` + +```{image} fig-05.jpg +``` + +```{image} fig-06.jpg +``` + +```{image} fig-07.jpg +``` + +```{image} fig-08.jpg +``` + +```{image} fig-09.jpg +``` + +```{image} fig-10.jpg +``` + +```{image} fig-11.jpg +``` diff --git a/docs/src/how-to/single-sign-on/adfs/main.rst b/docs/src/how-to/single-sign-on/adfs/main.rst deleted file mode 100644 index 53155b14d8..0000000000 --- a/docs/src/how-to/single-sign-on/adfs/main.rst +++ /dev/null @@ -1,19 +0,0 @@ -How to set up SSO integration with ADFS -======================================= - -This is being used in production by some of our customers, but not -documented. We do have a few out-of-context screenshots, which we -provide here in the hope they may help. - -.. image:: fig-00.jpg -.. image:: fig-01.jpg -.. image:: fig-02.jpg -.. image:: fig-03.jpg -.. image:: fig-04.jpg -.. image:: fig-05.jpg -.. image:: fig-06.jpg -.. image:: fig-07.jpg -.. image:: fig-08.jpg -.. image:: fig-09.jpg -.. image:: fig-10.jpg -.. image:: fig-11.jpg diff --git a/docs/src/how-to/single-sign-on/azure/main.md b/docs/src/how-to/single-sign-on/azure/main.md new file mode 100644 index 0000000000..dbd0338907 --- /dev/null +++ b/docs/src/how-to/single-sign-on/azure/main.md @@ -0,0 +1,92 @@ +# How to set up SSO integration with Microsoft Azure + +## Prerequisites + +- <http://azure.microsoft.com> account, admin access to that account +- See also {ref}`sso-generic-setup`. + +## Steps + +### Azure setup + +Go to <https://portal.azure.com/>, and click on 'Azure Active Directory' +in the menu to your left, then on 'Enterprise Applications': + +```{image} 01.png +``` + +Click on 'New Application': + +```{image} 02.png +``` + +Select 'Non-gallery application': + +```{image} 03.png +``` + +Fill in user-facing app name, then click 'add': + +```{image} 04.png +``` + +The app is now created.
If you get lost, you can always get back to +it by selecting its name from the enterprise applications list you've +already visited above. + +Click on 'Configure single sign-on'. + +```{image} 05.png +``` + +Select SAML: + +```{image} 06.png +``` + +On the next page, you find a link to a configuration guide which you +can consult if you have any Azure-specific questions. Or you can go +straight to adding the two config parameters you need: + +```{image} 07.png +``` + +Enter <https://prod-nginz-https.wire.com/sso/finalize-login> for both identity and reply url. Save. + +```{image} 08.png +``` + +Click on 'test later': + +```{image} 09.png +``` + +Finally, you need to assign users to the newly created and configured application: + +```{image} 11.png +``` + +```{image} 12.png +``` + +```{image} 13.png +``` + +```{image} 14.png +``` + +```{image} 15.png +``` + +And that's it! You are now ready to set up your wire team for SAML SSO with the XML metadata file you downloaded above. + +## Further reading + +- technical concepts overview: + : - <https://docs.microsoft.com/en-us/azure/active-directory/develop/active-directory-saml-protocol-reference> + - <https://docs.microsoft.com/en-us/azure/active-directory/develop/single-sign-on-saml-protocol> +- how to create an app: + : - <https://docs.microsoft.com/en-us/azure/active-directory/develop/quickstart-register-app> +- how to configure SAML2.0 SSO: + : - <https://docs.microsoft.com/en-us/azure/active-directory/manage-apps/what-is-single-sign-on#saml-sso> + - <https://docs.microsoft.com/en-us/azure/active-directory/manage-apps/configure-single-sign-on-non-gallery-applications> diff --git a/docs/src/how-to/single-sign-on/azure/main.rst b/docs/src/how-to/single-sign-on/azure/main.rst deleted file mode 100644 index 02115a753f..0000000000 --- a/docs/src/how-to/single-sign-on/azure/main.rst +++ /dev/null @@ -1,82 +0,0 @@ -How to set up SSO integration with Microsoft Azure -================================================== -Preprequisites --------------- -- http://azure.microsoft.com account, admin access to that account -- See also :ref:`SSO generic setup`. -Steps ------ -Azure setup -^^^^^^^^^^^ -Go to https://portal.azure.com/, and click on 'Azure Active Directory' -in the menu to your left, then on 'Enterprise Applications': -.. image:: 01.png -Click on 'New Application': -.. image:: 02.png -Select 'Non-gallery application': -.. image:: 03.png -Fill in user-facing app name, then click 'add': -.. image:: 04.png -The app is now created.
If you get lost, you can always get back to -it by selecting its name from the enterprise applications list you've -already visited above. - -Click on 'Configure single sign-on'. - -.. image:: 05.png - -Select SAML: - -.. image:: 06.png - -On the next page, you find a link to a configuration guide which you -can consult if you have any azure-specific questions. Or you can go -straight to adding the two config parameters you need: - -.. image:: 07.png - -Enter https://prod-nginz-https.wire.com/sso/finalize-login for both identity and reply url. Save. - -.. image:: 08.png - -Click on 'test later': - -.. image:: 09.png - -Finally, you need to assign users to the newly created and configured application: - -.. image:: 11.png -.. image:: 12.png -.. image:: 13.png -.. image:: 14.png -.. image:: 15.png - -And that's it! You are now ready to set up your wire team for SAML SSO with the XML metadata file you downloaed above. - - -Further reading ---------------- - -- technical concepts overview: - - https://docs.microsoft.com/en-us/azure/active-directory/develop/active-directory-saml-protocol-reference - - https://docs.microsoft.com/en-us/azure/active-directory/develop/single-sign-on-saml-protocol - -- how to create an app: - - https://docs.microsoft.com/en-us/azure/active-directory/develop/quickstart-register-app - -- how to configure SAML2.0 SSO: - - https://docs.microsoft.com/en-us/azure/active-directory/manage-apps/what-is-single-sign-on#saml-sso - - https://docs.microsoft.com/en-us/azure/active-directory/manage-apps/configure-single-sign-on-non-gallery-applications diff --git a/docs/src/how-to/single-sign-on/centrify/main.rst b/docs/src/how-to/single-sign-on/centrify/main.md similarity index 55% rename from docs/src/how-to/single-sign-on/centrify/main.rst rename to docs/src/how-to/single-sign-on/centrify/main.md index 12d6d530fd..ed88ba6668 100644 --- a/docs/src/how-to/single-sign-on/centrify/main.rst +++ b/docs/src/how-to/single-sign-on/centrify/main.md @@ -1,46 +1,48 
@@ -How to set up SSO integration with Centrify -=========================================== +# How to set up SSO integration with Centrify -Preprequisites --------------- +## Preprequisites -- http://centrify.com account, admin access to that account -- See also :ref:`SSO generic setup`. +- account, admin access to that account +- See also {ref}`sso-generic-setup`. -Steps ------ +## Steps -Centrify setup -^^^^^^^^^^^^^^ +### Centrify setup - Log in into Centrify web interface - Navigate to "Web Apps" - Click "Add Web Apps" -.. image:: 001.png +```{image} 001.png +``` ----- +______________________________________________________________________ - Create a new custom SAML application -.. image:: 002.png +```{image} 002.png +``` ----- +______________________________________________________________________ - Confirm... -.. image:: 003.png +```{image} 003.png +``` ----- +______________________________________________________________________ - Wait a few moments until the UI has rendered the `Settings` tab of your newly created Web App. - Enter at least a name, plus any other information you want to keep about this new Web App. - Then click on `Save`. -.. image:: 004.png -.. image:: 005.png +```{image} 004.png +``` ----- +```{image} 005.png +``` + +______________________________________________________________________ - Move to the `Trust` tab. This is where the SP metadata (everything centrify wants to know about wire, or Service Provider) and the IdP metadata (everything wire needs to know about centrify, or Identity Provider) can be found. - Enter `https://prod-nginz-https.wire.com/sso/finalize-login` as the SP metadata url. @@ -48,25 +50,33 @@ Centrify setup - You can see the metadata appear in the form below the `Load` button. - Click on `Save`. -.. image:: 006.png +```{image} 006.png +``` ----- +______________________________________________________________________ - Scroll down the `Trust` tab until you find the button to download the IdP metadata. 
- Store it in a file (eg. `my-wire-idp.xml`). You will need this file to set up your wire team for SSO. -.. image:: 007.png +```{image} 007.png +``` ----- +______________________________________________________________________ - Move to the `Permissions` tab and add at least one user. -.. image:: 008.png -.. image:: 009.png -.. image:: 010.png +```{image} 008.png +``` + +```{image} 009.png +``` + +```{image} 010.png +``` ----- +______________________________________________________________________ - If you see the status `Deployed` in the header of the `Web App` setup page, your users are ready to login. -.. image:: 011.png +```{image} 011.png +``` diff --git a/docs/src/how-to/single-sign-on/generic-setup.md b/docs/src/how-to/single-sign-on/generic-setup.md new file mode 100644 index 0000000000..d455899cca --- /dev/null +++ b/docs/src/how-to/single-sign-on/generic-setup.md @@ -0,0 +1,37 @@ +(sso-generic-setup)= + +# How to set up SSO integration with your IdP + +## Prerequisites + +- An account with your SAML IdP, admin access to that account +- Wire team, admin access to that team +- If your team is hosted at wire.com: + : - Ask customer support to enable the SSO feature flag for you. +- If you are running your own on-prem instance: + : - for handling the feature flag, you can run your own [backoffice](https://github.com/wireapp/wire-server-deploy/tree/259cd2664a4e4d890be797217cc715499d72acfc/charts/backoffice) service. + - More simply, you can configure the galley service so that sso is always enabled (just put "enabled-by-default" [here](https://github.com/wireapp/wire-server-deploy/blob/a4a35b65b2312995729b0fc2a04461508cb12de7/values/wire-server/prod-values.example.yaml#L134)). + +## Setting up your IdP + +- The SP Metadata URL: <https://prod-nginz-https.wire.com/sso/metadata> +- The SSO Login URL: <https://prod-nginz-https.wire.com/sso/finalize-login> +- SP Entity ID (aka Request Issuer ID): <https://prod-nginz-https.wire.com/sso/finalize-login> + +How you need to use this information when setting up your IdP +depends on the vendor. Let us know if you run into any trouble!
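Once you have downloaded the IdP metadata file, you can quickly check what it declares before uploading it to Wire. A sketch using only the standard library; the helper name is ours, and only the standard SAML 2.0 metadata namespace is assumed:

```python
import xml.etree.ElementTree as ET

MD_NS = "urn:oasis:names:tc:SAML:2.0:metadata"

def summarize_idp_metadata(xml_text):
    """Extract the entity ID and SSO endpoint(s) from SAML IdP metadata."""
    root = ET.fromstring(xml_text)
    sso_urls = [
        el.get("Location")
        for el in root.iter(f"{{{MD_NS}}}SingleSignOnService")
    ]
    return {"entity_id": root.get("entityID"), "sso_urls": sso_urls}

# A minimal, synthetic metadata document for illustration:
xml_text = (
    '<EntityDescriptor xmlns="urn:oasis:names:tc:SAML:2.0:metadata" '
    'entityID="https://idp.example.com/metadata">'
    '<IDPSSODescriptor protocolSupportEnumeration="urn:oasis:names:tc:SAML:2.0:protocol">'
    '<SingleSignOnService Binding="urn:oasis:names:tc:SAML:2.0:bindings:HTTP-Redirect" '
    'Location="https://idp.example.com/sso"/>'
    '</IDPSSODescriptor></EntityDescriptor>'
)
info = summarize_idp_metadata(xml_text)
```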
+ +## Setting up your wire team + +See <https://support.wire.com/hc/en-us/articles/360001285638-Set-up-SSO-internally> + +## Authentication + +The team settings will show you a login code from us that looks like +eg. + +> `wire-959b5840-3e8a-11e9-adff-0fa5314b31c0` + +See +<https://support.wire.com/hc/en-us/articles/360000954617-Pro-How-to-log-in-with-SSO-> +on how to use this to log in on wire. diff --git a/docs/src/how-to/single-sign-on/generic-setup.rst b/docs/src/how-to/single-sign-on/generic-setup.rst deleted file mode 100644 index 79f4d9585a..0000000000 --- a/docs/src/how-to/single-sign-on/generic-setup.rst +++ /dev/null @@ -1,42 +0,0 @@ -.. _SSO generic setup: - -How to set up SSO integration with your IdP -=========================================== - -Preprequisites --------------- - -- An account with your SAML IdP, admin access to that account -- Wire team, admin access to that team -- If your team is hosted at wire.com: - - Ask customer support to enable the SSO feature flag for you. -- If you are running your own on-prem instance: - - for handling the feature flag, you can run your own `backoffice `_ service. - - More simply, you can configure the galley service so that sso is always enabled (just put "enabled-by-default" `here `_). -Setting up your IdP ------------------- -- The SP Metadata URL: https://prod-nginz-https.wire.com/sso/metadata -- The SSO Login URL: https://prod-nginz-https.wire.com/sso/finalize-login -- SP Entity ID (aka Request Issuer ID): https://prod-nginz-https.wire.com/sso/finalize-login -How you need to use this information during setting up your IdP -depends on the vendor. Let us know if you run into any trouble! -Setting up your wire team ------------------------- -See https://support.wire.com/hc/en-us/articles/360001285638-Set-up-SSO-internally -Authentication -------------- -The team settings will show you a login code from us that looks like -eg. -> `wire-959b5840-3e8a-11e9-adff-0fa5314b31c0` -See -https://support.wire.com/hc/en-us/articles/360000954617-Pro-How-to-log-in-with-SSO- -on how to use this to login on wire.
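The login code shown in team settings follows a simple `wire-<UUID>` shape, so it can be sanity-checked client-side before being sent anywhere. A sketch — the function name is ours, and the shape is inferred from the example code above:

```python
import uuid

def is_valid_sso_login_code(code):
    """Check the `wire-<uuid>` shape of an SSO login code."""
    prefix = "wire-"
    if not code.startswith(prefix):
        return False
    try:
        uuid.UUID(code[len(prefix):])
    except ValueError:
        return False
    return True
```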
diff --git a/docs/src/how-to/single-sign-on/index.md b/docs/src/how-to/single-sign-on/index.md new file mode 100644 index 0000000000..2cdb939676 --- /dev/null +++ b/docs/src/how-to/single-sign-on/index.md @@ -0,0 +1,15 @@ +# Single Sign-On and User Provisioning + +```{toctree} +:caption: 'Contents:' +:glob: true +:maxdepth: 1 + +Single sign-on and user provisioning +Generic setup +SSO integration with ADFS +SSO integration with Azure +SSO integration with Centrify +SSO integration with Okta +* +``` diff --git a/docs/src/how-to/single-sign-on/okta/main.rst b/docs/src/how-to/single-sign-on/okta/main.md similarity index 73% rename from docs/src/how-to/single-sign-on/okta/main.rst rename to docs/src/how-to/single-sign-on/okta/main.md index faf3799db0..6fe285c55f 100644 --- a/docs/src/how-to/single-sign-on/okta/main.rst +++ b/docs/src/how-to/single-sign-on/okta/main.md @@ -1,53 +1,56 @@ -How to set up SSO integration with Okta -======================================= +(sso-int-with-okta)= -Preprequisites -------------- +# How to set up SSO integration with Okta -- http://okta.com/ account, admin access to that account -- See also :ref:`SSO generic setup`. +## Prerequisites -Steps ------ +- <http://okta.com/> account, admin access to that account +- See also {ref}`sso-generic-setup`. -Okta setup -~~~~~~~~~~ +## Steps + +### Okta setup - Log in into Okta web interface - Open the admin console and switch to the "Classic UI" - Navigate to "Applications" - Click "Add application" -.. image:: 001-applications-screen.png +```{image} 001-applications-screen.png +``` ----- +______________________________________________________________________ - Create a new application -.. image:: 002-add-application.png +```{image} 002-add-application.png +``` ----- +______________________________________________________________________ - Choose `Web`, `SAML 2.0` -.. 
image:: 003-add-application-1.png +```{image} 003-add-application-1.png +``` ----- +______________________________________________________________________ - Pick a name for the application in "Step 1" and continue -.. image:: 004-add-application-step1.png +```{image} 004-add-application-step1.png +``` ----- +______________________________________________________________________ - Add the following parameters in "Step 2" and continue +```{eval-rst} +-----------------------------+------------------------------------------------------------------------------+ + Paramenter label | Value | +=============================+==============================================================================+ | Single Sign On URL | `https://prod-nginz-https.wire.com/sso/finalize-login` | +-----------------------------+------------------------------------------------------------------------------+ -| Use this for Recipient URL | checked ✅ | +| Use this for Recipient URL | checked | | and Destination URL | | +-----------------------------+------------------------------------------------------------------------------+ | Audience URI (SP Entity ID) | `https://prod-nginz-https.wire.com/sso/finalize-login` | @@ -56,34 +59,41 @@ Okta setup +-----------------------------+------------------------------------------------------------------------------+ | Application Username | `Email` (\*) | +-----------------------------+------------------------------------------------------------------------------+ +``` **(\*) Note**: The application username **must be** unique in your team, and should be immutable once assigned. If more than one user has the same value for the field that you select here, those two users will log in as a single user on Wire. And if the value were to change, users will be re-assigned to a new account at the next login. Usually, `email` is a safe choice but you should evaluate it for your case. -.. 
image:: 005-add-application-step2.png +```{image} 005-add-application-step2.png +``` ----- +______________________________________________________________________ - Give the following answer in "Step 3" and continue +```{eval-rst} +-----------------------------------+------------------------------------------------------------------------+ + Paramenter label | Value | +===================================+========================================================================+ | Are you a customer or a partner? | I'm an Okta customer | +-----------------------------------+------------------------------------------------------------------------+ +``` -.. image:: 006-add-application-step3.png +```{image} 006-add-application-step3.png +``` ----- +______________________________________________________________________ - The app has been created. Switch to the "Sign-On" tab - Find the "Identity Provider Metadata" link. Copy the link address (normally done by right-clicking on the link and selecting "Copy link location" or a similar item in the menu). - Store the link address somewhere for a future step. -.. image:: 007-application-sign-on.png +```{image} 007-application-sign-on.png +``` ----- +______________________________________________________________________ - Switch to the "Assignments" tab - Make sure that some users (or everyone) is assigned to the application. These are the users that will be allowed to log in to Wire using Single Sign On. Add the relevant users to the list with the "Assign" button. -.. 
image:: 008-assignment.png +```{image} 008-assignment.png +``` diff --git a/docs/src/how-to/single-sign-on/trouble-shooting.rst b/docs/src/how-to/single-sign-on/trouble-shooting.md similarity index 60% rename from docs/src/how-to/single-sign-on/trouble-shooting.rst rename to docs/src/how-to/single-sign-on/trouble-shooting.md index da0ca43210..cdc7e1204a 100644 --- a/docs/src/how-to/single-sign-on/trouble-shooting.rst +++ b/docs/src/how-to/single-sign-on/trouble-shooting.md @@ -1,32 +1,28 @@ -.. _trouble-shooting-faq: +(trouble-shooting-faq)= -Trouble shooting & FAQ -====================== +# Trouble shooting & FAQ -Reporting a problem with user provisioning or SSO authentication ---------------------------------------------------------------- +## Reporting a problem with user provisioning or SSO authentication In order for us to analyse and understand your problem, we need at least the following information up-front: - Have you followed the following instructions? - - :ref:`FAQ ` (This document) - - `Howtos `_ for supported vendors - - `General documentation on the setup flow `_ + : - {ref}`FAQ ` (This document) - [Howtos](https://docs.wire.com/how-to/single-sign-on/index.html) for supported vendors - [General documentation on the setup flow](https://support.wire.com/hc/en-us/articles/360001285718-Set-up-SSO-externally) - Vendor information (Okta, Azure, Centrify, other (which one)?) - Team ID (looks like e.g. `2e9a9c9c-6f83-11eb-a118-3342c6f16f4e`, can be found in team settings) - What do you expect to happen? - - e.g.: "I enter login code, authenticate successfully against IdP, get redirected, and see the wire landing page." + : - e.g.: "I enter login code, authenticate successfully against IdP, get redirected, and see the wire landing page." - What does happen instead? - - Screenshots + : - Screenshots - Copy the text into your report where applicable in addition to screenshots (for automatic processing). 
- e.g.: "instead of being logged into wire, I see the following error page: ..." - Screenshots of the Configuration (both SAML and SCIM, as applicable), including, but not limited to: - - If you are using SAML: SAML IdP metadata file + : - If you are using SAML: SAML IdP metadata file - If you are using SCIM for provisioning: Which attributes in the User schema are mapped? How? - -Can I use the same SSO login code for multiple teams? ----------------------------------------------------- +## Can I use the same SSO login code for multiple teams? No, but there is a good reason for it and a work-around. @@ -45,28 +41,21 @@ still use the same user base for all teams. This has the extra advantage that a user can be part of two teams with the same credentials, which would be impossible even with the hypothetical fix. - -Can an existing user without IdP (or with a different IdP) be bound to a new IdP? --------------------------------------------------------------------------------- +## Can an existing user without IdP (or with a different IdP) be bound to a new IdP? No. This is a feature we never fully implemented. Details / latest -updates: https://github.com/wireapp/wire-server/issues/1151 - - -Can the SSO feature be disabled for a team? ------------------------------------------- +updates: <https://github.com/wireapp/wire-server/issues/1151> -No, this is `not implemented `_. +## Can the SSO feature be disabled for a team? +No, this is [not implemented](https://github.com/wireapp/wire-server/blob/7a97cb5a944ae593c729341b6f28dfa1dabc28e5/services/galley/src/Galley/API/Error.hs#L215). -Can you remove a SAML connection? --------------------------------- +## Can you remove a SAML connection? It is not possible to delete a SAML connection in the Team Settings app; however, it can be overwritten with a new connection. -It is possible to delete a SAML connection directly via the API endpoint ``DELETE /identity-providers/{id}``. 
However deleting a SAML connection also requires deleting all users that can log in with this SAML connection. To prevent accidental deletion of users, this functionality is not available directly from Team Settings. +It is possible to delete a SAML connection directly via the API endpoint `DELETE /identity-providers/{id}`. However deleting a SAML connection also requires deleting all users that can log in with this SAML connection. To prevent accidental deletion of users, this functionality is not available directly from Team Settings. -If you get an error when returning from your IdP ------------------------------------------------- +## If you get an error when returning from your IdP `Symptoms:` @@ -86,9 +75,7 @@ that contains a lot of machine-readable info. With all this information, please get in touch with our customer support. - -Do I need any firewall settings? -------------------------------- +## Do I need any firewall settings? No. @@ -96,9 +83,7 @@ There is nothing to be done here. There is no internet traffic between your SAML IdP and the wire service. All communication happens via the browser or app. - -Why does the team owner have to keep using a password? ----------------------------------------------------- +## Why does the team owner have to keep using a password? The user who creates the team cannot be authenticated via SSO. There is fundamentally no easy way around that: we need somebody to give us @@ -119,71 +104,66 @@ for IdP registration and upgrade of IdP-authenticated owners / admins. In practice, user A and some owner authenticated via IdP would then be controlled by the same person, probably. - -What should the SAML response look like? ---------------------------------------- +## What should the SAML response look like? Here is an example that works. Much of this beyond the subject's NameID is required by the SAML standard. If you can find a more minimal example that still works, we'd love to take a look. -.. code:: xml - - - ... 
-Why does the auth response not contain a reference to an auth request? (Also: can I use IdP-initiated login?) ----------------------------------------------------------------------------------------------------------------- +```xml
+<samlp:Response xmlns:samlp="urn:oasis:names:tc:SAML:2.0:protocol"
+    xmlns:saml="urn:oasis:names:tc:SAML:2.0:assertion"
+    Destination="https://prod-nginz-https.wire.com/sso/finalize-login"
+    ID="..." InResponseTo="..." IssueInstant="..." Version="2.0">
+  <saml:Issuer>...</saml:Issuer>
+  <samlp:Status>
+    <samlp:StatusCode Value="urn:oasis:names:tc:SAML:2.0:status:Success"/>
+  </samlp:Status>
+  <saml:Assertion ID="..." IssueInstant="..." Version="2.0">
+    <saml:Issuer>...</saml:Issuer>
+    <ds:Signature xmlns:ds="http://www.w3.org/2000/09/xmldsig#">...</ds:Signature>
+    <saml:Subject>
+      <saml:NameID Format="urn:oasis:names:tc:SAML:1.1:nameid-format:emailAddress">...</saml:NameID>
+      <saml:SubjectConfirmation Method="urn:oasis:names:tc:SAML:2.0:cm:bearer">
+        <saml:SubjectConfirmationData InResponseTo="..." NotOnOrAfter="..."
+            Recipient="https://prod-nginz-https.wire.com/sso/finalize-login"/>
+      </saml:SubjectConfirmation>
+    </saml:Subject>
+    <saml:Conditions NotBefore="..." NotOnOrAfter="...">
+      <saml:AudienceRestriction>
+        <saml:Audience>https://prod-nginz-https.wire.com/sso/finalize-login</saml:Audience>
+      </saml:AudienceRestriction>
+    </saml:Conditions>
+    <saml:AuthnStatement AuthnInstant="..." SessionIndex="...">
+      <saml:AuthnContext>
+        <saml:AuthnContextClassRef>urn:oasis:names:tc:SAML:2.0:ac:classes:PasswordProtectedTransport</saml:AuthnContextClassRef>
+      </saml:AuthnContext>
+    </saml:AuthnStatement>
+  </saml:Assertion>
+</samlp:Response>
+``` + +## Why does the auth response not contain a reference to an auth request? (Also: can I use IdP-initiated login?) tl;dr: Wire only supports SP-initiated login, where the user selects the auth method from inside the app's login screen. It does not support IdP-initiated login, where the user enters the app from a list of applications in the IdP UI. -The full story -^^^^^^^^^^^^^^ +### The full story SAML authentication can be initiated by the IdP (e.g., Okta or Azure), or by the SP (Wire). @@ -206,9 +186,7 @@ impersonate rogue accounts) hard that were otherwise quite feasible. Wire therefore only supports SP-initiated login. - -How are SAML2 assertion details used in wire? --------------------------------------------- +## How are SAML2 assertion details used in wire? Wire only uses the SAML `NameID` from the assertion, plus the information on whether authentication and authorization was successful. @@ -221,9 +199,7 @@ wire user display name a default value. (The user will be allowed to change that value later; changing it does NOT affect the authentication handshake between wire and the IdP.) - -How should I map user data to SCIM attributes when provisioning users via SCIM? 
-------------------------------------------------------------------------------- +## How should I map user data to SCIM attributes when provisioning users via SCIM? If you are provisioning users via SCIM, the following mapping is used in your wire team: @@ -238,17 +214,16 @@ in your wire team: 3. SCIM's `preferredLanguage` is mapped to wire's user locale settings when a locale is not defined for that user. It must consist of an - ISO 639-1 language code. + ISO 639-1 language code. 4. SCIM's `externalId`: - a. If SAML SSO is used, it is mapped onto the SAML `NameID`. If it + 1. If SAML SSO is used, it is mapped onto the SAML `NameID`. If it parses as an email, it will have format `email`, and you can choose to validate it during provisioning (by enabling the feature flag for your team). Otherwise, the format will be `unspecified`. - - b. If email/password authentication is used, SCIM's `externalId` is + 2. If email/password authentication is used, SCIM's `externalId` is mapped onto wire's email address, and provisioning works like in team settings with invitation emails. @@ -262,29 +237,24 @@ Also note that the account will be set to `"active": false` until the user has accepted the invitation and activated the account. Please contact customer support if this causes any issues. - -Can I distribute a URL to my users that contains the login code? ---------------------------------------------------------------- +## Can I distribute a URL to my users that contains the login code? Users may find it awkward to copy and paste the login code into the form. If they are using the webapp, an alternative is to give them the following URL (fill in the login code that you can find in your team settings): -..
code:: bash +```bash +https://wire-webapp-dev.zinfra.io/auth#sso/3c4f050a-f073-11eb-b4c9-931bceeed13e +``` - https://wire-webapp-dev.zinfra.io/auth#sso/3c4f050a-f073-11eb-b4c9-931bceeed13e - - -(Theoretical) name clashes in SAML NameIDs ------------------------------------------- +## (Theoretical) name clashes in SAML NameIDs You can technically configure your SAML IdP to create name clashes in wire, i.e., to map two (technically) different NameIDs to the same wire user. -How to know you're safe -^^^^^^^^^^^^^^^^^^^^^^^ +### How to know you're safe This is highly unlikely, since the distinguishing parts of `NameID` that we ignore are generally either @@ -292,16 +262,14 @@ unused or redundant. If you are confident that any two users you have assigned to the wire app can be distinguished solely by the lower-cased `NameID` content, you're safe. -Impact -^^^^^^ +### Impact If you are using SCIM for user provisioning, this may lead to errors during provisioning of new users ("user already exists"). If you use SAML auto-provisioning, this may lead to unintentional account sharing instead of an error. -How to reproduce -^^^^^^^^^^^^^^^^ +### How to reproduce If you have users whose combination of `IssuerId` and `NameID` can only be distinguished by casing (upper @@ -309,30 +277,27 @@ vs. lower) or by the `NameID` qualifiers (`NameID` xml attributes `NameQualifier`, `IdPNameQualifier`, ...), those users will name clash. -Solution -^^^^^^^^ +### Solution Do not rely on case sensitivity of `IssuerID` or `NameID`, or on `NameID` qualifiers for distinguishing user identifiers. - -How to report problems ---------------------- +## How to report problems If you have a problem you cannot resolve by yourself, please get in touch. Add as much of the following details to your report as possible: -* Are you on cloud or on-prem? (If on-prem: which instance?) 
-* XML IdP metadata -* SSO Login code or IdP Issuer EntityID -* NameID of the account that has the problem -* SP metadata +- Are you on cloud or on-prem? (If on-prem: which instance?) +- XML IdP metadata +- SSO Login code or IdP Issuer EntityID +- NameID of the account that has the problem +- SP metadata Problem description, including, but not limited to: -* what happened? -* what did you want to happen? -* what does your idp config in the wire team management app look like? -* what does your wire config in your IdP management app look like? -* Please include screenshots *and* copied text (for cut&paste when we investigate) *and* further description and comments where feasible. +- what happened? +- what did you want to happen? +- what does your idp config in the wire team management app look like? +- what does your wire config in your IdP management app look like? +- Please include screenshots *and* copied text (for cut&paste when we investigate) *and* further description and comments where feasible. (If you can't produce some of this information, of course, please get in touch anyway! It'll merely be harder for us to resolve your issue quickly, and we may need to make a few extra rounds of data gathering together with you.) 
diff --git a/docs/src/understand/single-sign-on/Wire_SAML_Flow (lucidchart).svg b/docs/src/how-to/single-sign-on/understand/Wire_SAML_Flow (lucidchart).svg similarity index 100% rename from docs/src/understand/single-sign-on/Wire_SAML_Flow (lucidchart).svg rename to docs/src/how-to/single-sign-on/understand/Wire_SAML_Flow (lucidchart).svg diff --git a/docs/src/understand/single-sign-on/Wire_SAML_Flow.png b/docs/src/how-to/single-sign-on/understand/Wire_SAML_Flow.png similarity index 100% rename from docs/src/understand/single-sign-on/Wire_SAML_Flow.png rename to docs/src/how-to/single-sign-on/understand/Wire_SAML_Flow.png diff --git a/docs/src/how-to/single-sign-on/understand/main.md b/docs/src/how-to/single-sign-on/understand/main.md new file mode 100644 index 0000000000..465ef9dc48 --- /dev/null +++ b/docs/src/how-to/single-sign-on/understand/main.md @@ -0,0 +1,561 @@ +# Single sign-on and user provisioning + +```{contents} +``` + +## Introduction + +This page is intended as a manual for administrators who need to set up {term}`SSO` and provision users using {term}`SCIM` on their installation of Wire. + +Historically and by default, Wire's user authentication method is via phone or password. This has security implications and does not scale. + +Solution: {term}`SSO` with {term}`SAML`! [(Security Assertion Markup Language)](https://en.wikipedia.org/wiki/Security_Assertion_Markup_Language) + +{term}`SSO` systems allow users to sign in to multiple systems (including Wire once configured as such) using a single ID and password. + +You can find some of the advantages of {term}`SSO` over more traditional schemes [here](https://en.wikipedia.org/wiki/Single_sign-on). + +Also historically, wire has allowed team admins and owners to manage their users in the team management app. + +This does not scale as it requires a lot of manual labor for each user. 
+ +The solution we offer to solve this issue is implementing {term}`SCIM` [(System for Cross-domain Identity Management)](https://en.wikipedia.org/wiki/System_for_Cross-domain_Identity_Management) + +{term}`SCIM` is an interface that allows both software (for example Active Directory) and custom scripts to manage Identities (users) in bulk. + +This page explains how to set up {term}`SCIM` and then use it. + +```{note} +Note that it is recommended to use both {term}`SSO` and {term}`SCIM` (as opposed to just {term}`SSO` alone). +The reason is that if you only use {term}`SSO`, but do not configure/implement {term}`SCIM`, you will experience reduced functionality. +In particular, without {term}`SCIM` all Wire users will be named according to their e-mail address and won't have any rich profiles. +See below in the {term}`SCIM` section for a more detailed explanation. +``` + +## Further reading + +If you can't find the answers to your questions here, we have a few +more documents. Some of them are very technical, some may not be up +to date any more, and we are planning to move many of them into this +page. But for now they may be worth checking out. + +- {ref}`Trouble shooting & FAQ ` +- +- +- + +## Definitions + +The following concepts need to be understood to use the present manual: + +```{eval-rst} +.. glossary:: + + SCIM + System for Cross-domain Identity Management (:term:`SCIM`) is a standard for automating the exchange of user identity information between identity domains, or IT systems. + + One example might be that as a company onboards new employees and separates from existing employees, they are added and removed from the company's electronic employee directory. :term:`SCIM` could be used to automatically add/delete (or, provision/de-provision) accounts for those users in external systems such as G Suite, Office 365, or Salesforce.com. 
Then, a new user account would exist in the external systems for each new employee, and the user accounts for former employees might no longer exist in those systems. + + See: `System for Cross-domain Identity Management at Wikipedia `_ + + In the context of Wire, SCIM is the interface offered by the Wire service (in particular the spar service) that allows for single or mass automated addition/removal of user accounts. + + SSO + + Single sign-on (:term:`SSO`) is an authentication scheme that allows a user to log in with a single ID and password to any of several organizationally related, yet independent, software systems. + + True single sign-on allows the user to log in once and access different, independent services without re-entering authentication factors. + + See: `Single-Sign-On at Wikipedia `_ + + SAML + + Security Assertion Markup Language (:term:`SAML`, pronounced SAM-el, /'sæməl/) is an open standard for exchanging authentication and authorization data between parties, in particular, between an identity provider and a service provider. :term:`SAML` is an XML-based markup language for security assertions (statements that service providers use to make access-control decisions). :term:`SAML` is also: + + * A set of XML-based protocol messages + * A set of protocol message bindings + * A set of profiles (utilizing all of the above) + + An important use case that :term:`SAML` addresses is web-browser `single sign-on (SSO) `_ . Single sign-on is relatively easy to accomplish within a security domain (using cookies, for example) but extending :term:`SSO` across security domains is more difficult and resulted in the proliferation of non-interoperable proprietary technologies. The `SAML Web Browser SSO `_ profile was specified and standardized to promote interoperability. + + See: `SAML at Wikipedia `_ + + In the context of Wire, SAML is the standard/protocol used by the Wire services (in particular the spar service) to provide the Single Sign On feature. 
+ + IdP + + In the context of Wire, an identity provider (abbreviated :term:`IdP`) is a service that provides SAML single sign-on (:term:`SSO`) credentials that give users access to Wire. + + Curl + + :term:`Curl` (pronounced ":term:`Curl`") is a command line tool used to download files over the HTTP (web) protocol. For example, `curl http://wire.com` will download the ``wire.com`` web page. + + In this manual, it is used to contact API (Application Programming Interface) endpoints manually, where those endpoints would normally be accessed by code or other software. + + This can be used either for illustrative purposes (to "show" how the endpoints can be used) or to allow the manual execution of some simple tasks. + + For example (not a real endpoint) `curl http://api.wire.com/delete_user/thomas` would (schematically) execute the :term:`Curl` command, which would contact the wire.com API and delete the user named "thomas". + + Running this command in a terminal would cause the :term:`Curl` command to access this URL, and the API at that URL would execute the requested action. + + See: `curl at Wikipedia `__ + + + Spar + + The Wire backend software stack is composed of different services, `running as pods <../overview.html#focus-on-pods>`__ in a kubernetes cluster. + + One of those pods is the "spar" service. That service/pod is dedicated to providing :term:`SSO` (using :term:`SAML`) and :term:`SCIM` services. This page is the manual for this service. + + In the context of :term:`SCIM`, Wire's spar service is the `Service Provider `__ that Identity Management Software + (for example Azure, Okta, Ping Identity, SailPoint, Technology Nexus, etc.) uses for user account provisioning and deprovisioning. +``` + +## User login for the first time with SSO + +{term}`SSO` allows users to register and log into Wire with their company credentials that they use on other software in their workplace. +No need to remember another password. 
+ +When a team is set up on Wire, the administrators can provide users a login code or link that they can use to go straight to their company's login page. + +Here is what this looks like from a user's perspective: + +1. Download Wire. +2. Select and copy the code that your company gave you / the administrator generated +3. Open Wire. Wire may detect the code on your clipboard and open a pop-up window with a text field. + Wire will automatically put the code into the text field. + If so, click Log in and go to step 8. +4. If no pop-up: click Login on the first screen. +5. Click Enterprise Login. +6. A pop-up will appear. In the text field, paste or type the code your company gave you. +7. Click Log in. +8. Wire will load your company's login page: log in with your company credentials. + +(saml-sso)= + +## SAML/SSO + +### Introduction + +SSO (Single Sign-On) is a technology that allows users to sign into multiple services with a single identity provider/credential. + +SSO is about `authentication`, not `provisioning` (create, update, remove user accounts). To learn more about the latter, continue {ref}`below `. + +For example, if a company already has SSO set up for some of their services, and they start using Wire, they can use Wire's SSO support to add Wire to the set of services their users will be able to sign into with their existing SSO credentials. + +Here is a blog post we like about how SAML works: + +And here is a diagram that explains it in slightly more technical terms: + +```{image} Wire_SAML_Flow.png +``` + +Here is a critique of XML/DSig security (which SAML relies on): + +### Terminology and concepts + +- End user: The browser carries out all the redirections from the SP to the IdP and vice versa. +- Service Provider (SP): The entity (here Wire software) that provides its protected resource when an end user tries to access this resource. To accomplish the SAML based SSO authentication, the Service Provider + must have the Identity Provider's metadata. 
+- Identity Provider (IdP): Defines the entity that provides the user identities, including the ability to authenticate a user to get access to a protected resource / application from a Service Provider. To accomplish + the SAML based SSO authentication, the IdP must have the Service Provider's metadata. +- SAML Request: This is the authentication request generated by the Service Provider to request an authentication from the Identity Provider for verifying the user's identity. +- SAML Response: The SAML Response contains the cryptographically signed assertion of the authenticated user and is generated by the Identity Provider. + +(Definitions adapted from [collab.net](http://help.collab.net/index.jsp?topic=/teamforge178/action/saml.html)) + +(setting-up-sso-externally)= + +### Setting up SSO externally + +To set up {term}`SSO` for a given Wire installation, the Team owner/administrator must enable it. + +The first step is to configure the Identity Provider: you'll need to register Wire as a service provider in your Identity Provider. + +We've put together guides for registering with different providers: + +- Instructions for {ref}`Okta ` +- Instructions for {doc}`Centrify <../centrify/main>` +- Instructions for {doc}`Azure <../azure/main>` +- Some screenshots for {doc}`ADFS <../adfs/main>` +- {doc}`Generic instructions (try this if none of the above are applicable) <../generic-setup>` + +As you do this, make sure you take note of your {term}`IdP` metadata, which you will need for the next step. + +Once you are finished registering Wire with your {term}`IdP`, move on to the next step, setting up {term}`SSO` internally. + +### Setting up SSO internally + +Now that you've registered Wire with your identity provider ({term}`IdP`), you can enable {term}`SSO` for your team on Wire. + +On Desktop: + +- Click Settings and click "Manage Team"; or go directly to teams.wire.com, or if you have an on-premise install, go to teams.\.com +- Login with your account credentials. 
+- Click "Customization". Here you will see the section for {term}`SSO`. +- Click the blue down arrow. +- Click "Add {term}`SAML` Connection". +- Provide the {term}`IdP` metadata. To find out more about retrieving this for your provider, see the guides in the "Setting up {term}`SSO` externally" step just above. +- Click "Save". +- Wire will now validate the document to set up the {term}`SAML` connection. +- If the data is valid, you will return to the Settings page. +- The page shows the information you need to log in with {term}`SSO`. Copy the login code or URL and send it to your team members or partners. For more information see: Logging in with {term}`SSO`. + +What to expect after {term}`SSO` is enabled: + +Anyone with a login through your {term}`SAML` identity provider ({term}`IdP`) and with access to the Wire app will be able to register and log in to your team using the {term}`SSO` Login URL and/or Code. + +Take care to share the code only with members of your team. + +If you haven't set up {term}`SCIM` ([we recommend you do](#introduction)), your team members can create accounts on Wire using {term}`SSO` simply by logging in, and will appear on the People tab of the team management page. + +If team members already have Wire accounts, use {term}`SCIM` to associate them with the {term}`SAML` credentials. If you make a mistake here, you may end up with several accounts for the same person. + +(user-provisioning-scim-ldap)= + +## User provisioning (SCIM/LDAP) + +SCIM/LDAP is about `provisioning` (create, update, remove user accounts), not `authentication`. To learn more about the latter, continue {ref}`above `. + +Wire supports the [SCIM](http://www.simplecloud.info/) ([RFC 7643](https://tools.ietf.org/html/rfc7643)) protocol to create, update and delete users. 
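Before pointing a SCIM client at the API, it can help to confirm that the base URL is correct. The sketch below only builds and prints the discovery URL; `/ServiceProviderConfig` is the standard SCIM discovery endpoint from RFC 7644, and it is an assumption here that your deployment serves it under `/scim/v2` (the `WIRE_BACKEND` default matches the cloud URL used elsewhere on this page):

```bash
# A connectivity sketch (assumptions: the default backend URL below matches
# your installation; /ServiceProviderConfig is the standard RFC 7644
# discovery endpoint and is assumed to be served under /scim/v2).
WIRE_BACKEND=${WIRE_BACKEND:-https://prod-nginz-https.wire.com}
URL="$WIRE_BACKEND/scim/v2/ServiceProviderConfig"
echo "probing $URL"
# Uncomment to send the actual request:
# curl -sS "$URL"
```

A SCIM JSON document in the response, rather than a 404, is a good sign that the base URL you configured in your SCIM client is correct.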
+ +If your user data is stored in an LDAP data source like Active Directory or OpenLDAP, you can use our docker-based [ldap-scim-bridge](https://github.com/wireapp/ldap-scim-bridge/#use-via-docker) to connect it to wire. + +Note that connecting a SCIM client to Wire also disables the functionality to create new users in the SSO login process. This functionality is disabled when a token is created (see below) and re-enabled when all tokens have been deleted. + +To set up the connection of your SCIM client (e.g. Azure Active Directory) you need to provide + +1. The URL under which Wire's SCIM API is hosted: `https://prod-nginz-https.wire.com/scim/v2`. + If you are hosting your own instance of Wire then the URL is `https:///scim/v2`, where `` is where you are serving Wire's public endpoints. Some SCIM clients append `/v2` to the URL you provide. If this happens (check the URL mentioned in error messages of your SCIM client) then please provide the URL without the `/v2` suffix, i.e. `https://prod-nginz-https.wire.com/scim` or `https:///scim`. +2. A secret token which authorizes the use of the SCIM API. Use the [wire_scim_token.py](https://raw.githubusercontent.com/wireapp/wire-server/654b62e3be74d9dddae479178990ebbd4bc77b1e/docs/reference/provisioning/wire_scim_token.py) + script to generate a token. To run the script you need access to a user account with "admin" privileges that can log in via email and password. Note that the token is independent from the admin account that created it, i.e. the token remains valid if the admin account gets deleted or changed. + +You need to configure your SCIM client to use the following mandatory SCIM attributes: + +1. Set the `userName` attribute to the desired user handle (the handle is shown + with an @ prefix in apps). It must be unique across the entire Wire Cloud + (or unique on your own instance), and consist of the characters `a-z0-9_.-` + (no capital letters). + +2. 
Set the `displayName` attribute to the user's desired display name, e.g. "Jane Doe". + It must consist of 1-128 unicode characters. It does not need to be unique. + +3. The `externalId` attribute: + + 1. If you are using Wire's SAML SSO feature then set the `externalId` attribute to the same identifier used for `NameID` in your SAML configuration. + 2. If you are using email/password authentication then set the `externalId` + attribute to the user's email address. The user will receive an invitation email during provisioning. Also note that the account will be set to `"active": false` until the user has accepted the invitation and activated the account. + +You can optionally make use of Wire's `urn:wire:scim:schemas:profile:1.0` extension field to store arbitrary user profile data that is shown in the user's profile, e.g. department, role. See [docs](https://github.com/wireapp/wire-server/blob/develop/docs/reference/user/rich-info.md#scim-support-refrichinfoscim) for details. + +### SCIM management in Wire (in Team Management) + +#### SCIM security and authentication + +Wire uses a very basic variant of OAuth, where a *bearer token* is presented to the server in a header with all {term}`SCIM` requests. + +You can create such bearer tokens in team management and copy them from there into the dashboard of your SCIM data source. + +#### Generating a SCIM token + +In order to be able to send SCIM requests to Wire, we first need to generate a SCIM token. This section explains how to do this. + +Once the token is generated, it should be stored safely, and it will be used in all subsequent SCIM requests to authenticate them. + +These are the steps to generate a new {term}`SCIM` token, which you will need to provide to your identity provider ({term}`IdP`), along with the target API URL, to enable {term}`SCIM` provisioning. + +- Step 1: Go to (Here replace "wire.com" with your own domain if you have an on-premise installation of Wire). 
+ +```{image} token-step-01.png +:align: center +``` + +- Step 2: In the left menu, go to "Customization". + +```{image} token-step-02.png +:align: center +``` + +- Step 3: Go to "Automated User Management ({term}`SCIM`)" and click the "down" arrow to expand + +```{image} token-step-03.png +:align: center +``` + +- Step 4: Click "Generate token"; if your password is requested, enter it. + +```{image} token-step-04.png +:align: center +``` + +- Step 5: Once the token is generated, copy it into your clipboard and store it somewhere safe (e.g., in the dashboard of your SCIM data source). + +```{image} token-step-05.png +:align: center +``` + +- Step 6: You're done! You can now view token information, delete the token, or create more tokens should you need them. + +```{image} token-step-06.png +:align: center +``` + +Tokens are now listed in this {term}`SCIM`-related area of the screen; you can generate up to 8 such tokens. + +### Using SCIM via Curl + +You can use the {term}`Curl` command line HTTP tool to access the wire backend (in particular the `spar` service) through the {term}`SCIM` API. + +This can be helpful if you want to write your own tooling to interface with wire. + +#### Creating a SCIM token + +Before we can send commands to the {term}`SCIM` API/Spar service, we need to be authenticated. This is done through the creation of a {term}`SCIM` token. + +First, we need a little shell environment. Run the following in your terminal/shell: + +```{code-block} bash +:linenos: true + + export WIRE_BACKEND=https://prod-nginz-https.wire.com + export WIRE_ADMIN=... + export WIRE_PASSWD=... +``` + +Wire's SCIM API currently supports a variant of HTTP basic auth. + +In order to create a token in your team, you need to authenticate using your team admin credentials. + +The way this works behind the scenes in your browser or cell phone, and in plain sight if you want to use curl, is that you need to get a Wire token. 
+ +First install the `jq` command (): + +```bash +sudo apt install jq +``` + +```{note} +If you don't want to install `jq`, you can just call the `curl` command and copy the access token into the shell variable manually. +``` + +Then run: + +```{code-block} bash +:linenos: true + +export BEARER=$(curl -X POST \ +--header 'Content-Type: application/json' \ +--header 'Accept: application/json' \ +-d '{"email":"'"$WIRE_ADMIN"'","password":"'"$WIRE_PASSWD"'"}' \ +$WIRE_BACKEND/login'?persist=false' | jq -r .access_token) +``` + +This token will be good for 15 minutes; after that, just repeat the command above to get a new token. + +```{note} +SCIM requests are authenticated with a SCIM token, see below. SCIM tokens and Wire tokens are different things. + +A Wire token is necessary to get a SCIM token. SCIM tokens do not expire, but need to be deleted explicitly. +``` + +You can test that you are logged in with the following command: + +```bash +curl -X GET --header "Authorization: Bearer $BEARER" $WIRE_BACKEND/self +``` + +Now you are ready to create a SCIM token: + +```{code-block} bash +:linenos: true + +export SCIM_TOKEN_FULL=$(curl -X POST \ +--header "Authorization: Bearer $BEARER" \ +--header 'Content-Type: application/json;charset=utf-8' \ +-d '{ "description": "test '"`date`"'", "password": "'"$WIRE_PASSWD"'" }' \ +$WIRE_BACKEND/scim/auth-tokens) +export SCIM_TOKEN=$(echo $SCIM_TOKEN_FULL | jq -r .token) +export SCIM_TOKEN_ID=$(echo $SCIM_TOKEN_FULL | jq -r .info.id) +``` + +The SCIM token is now contained in the `SCIM_TOKEN` environment variable. 
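Before copying the token into your IdP dashboard, a quick sanity check on the shell variables set above can catch a failed request early (a sketch; it only inspects the variables populated by the previous command, and relies on the fact that `jq -r` prints the literal string `null` for missing fields):

```bash
# Sketch: verify the token-creation response was parsed successfully.
# jq -r emits the literal string "null" for missing fields, so test for
# that in addition to emptiness.
if [ -z "$SCIM_TOKEN" ] || [ "$SCIM_TOKEN" = "null" ]; then
  echo "token creation failed; inspect \$SCIM_TOKEN_FULL" >&2
else
  echo "token ok (id: $SCIM_TOKEN_ID)"
fi
```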
+

You can look it up again with:

```{code-block} bash
:linenos: true

curl -X GET --header "Authorization: Bearer $BEARER" \
$WIRE_BACKEND/scim/auth-tokens
```

And you can delete it with:

```{code-block} bash
:linenos: true

curl -X DELETE --header "Authorization: Bearer $BEARER" \
$WIRE_BACKEND/scim/auth-tokens?id=$SCIM_TOKEN_ID
```

#### Using a SCIM token to Create, Read, Update and Delete (CRUD) users

Now that you have your SCIM token, you can use it to talk to the SCIM API to manipulate (create, read, update, delete) users, either individually or in bulk.

**JSON encoding of SCIM Users**

In order to manipulate users using these commands, you need to specify user data.

A minimal definition of a user is written in JSON format and looks like this:

```{code-block} json
:linenos: true

{
  "schemas" : ["urn:ietf:params:scim:schemas:core:2.0:User"],
  "externalId" : "nick@example.com",
  "userName" : "nick",
  "displayName" : "The Nick"
}
```

You can store it in a variable using this sort of command:

```{code-block} bash
:linenos: true

export SCIM_USER='{
  "schemas" : ["urn:ietf:params:scim:schemas:core:2.0:User"],
  "externalId" : "nick@example.com",
  "userName" : "nick",
  "displayName" : "The Nick"
}'
```

The `externalId` is used to construct a SAML identity. Two cases are
currently supported:

1. `externalId` contains a valid email address.
   The SAML `NameID` has the form `me@example.com`.
2. `externalId` contains anything that is *not* an email address.
   The SAML `NameID` has the form `...`.

```{note}
It is important to configure your SAML provider to use `nameid-format:emailAddress` or `nameid-format:unspecified`. Other nameid formats are not supported at this moment.
+

See [FAQ](https://docs.wire.com/how-to/single-sign-on/trouble-shooting.html#how-should-i-map-user-data-to-scim-attributes-when-provisioning-users-via-scim)
```

We also support custom fields that are used in rich profiles, in this form:

```{code-block} bash
:linenos: true

  export SCIM_USER='{
  "schemas" : ["urn:ietf:params:scim:schemas:core:2.0:User", "urn:wire:scim:schemas:profile:1.0"],
  "externalId" : "rnick@example.com",
  "userName" : "rnick",
  "displayName" : "The Rich Nick",
  "urn:wire:scim:schemas:profile:1.0": {
    "richInfo": [
      {
        "type": "Department",
        "value": "Sales & Marketing"
      },
      {
        "type": "Favorite color",
        "value": "Blue"
      }
    ]
  }
  }'
```

**How to create a user**

You can create a user using the following command:

```{code-block} bash
:linenos: true

  export STORED_USER=$(curl -X POST \
  --header "Authorization: Bearer $SCIM_TOKEN" \
  --header 'Content-Type: application/json;charset=utf-8' \
  -d "$SCIM_USER" \
  $WIRE_BACKEND/scim/v2/Users)
  export STORED_USER_ID=$(echo $STORED_USER | jq -r .id)
```

Note that `$SCIM_USER` is in JSON format and must be declared before running this command, as described in the section above.

**Get a specific user**

```{code-block} bash
:linenos: true

  curl -X GET \
  --header "Authorization: Bearer $SCIM_TOKEN" \
  --header 'Content-Type: application/json;charset=utf-8' \
  $WIRE_BACKEND/scim/v2/Users/$STORED_USER_ID
```

**Search for a specific user**

SCIM user search is quite flexible. Wire currently only supports lookup by Wire handle or email address.
+

Email address (and/or SAML NameID, if applicable):

```{code-block} bash
:linenos: true

  curl -X GET \
  --header "Authorization: Bearer $SCIM_TOKEN" \
  --header 'Content-Type: application/json;charset=utf-8' \
  $WIRE_BACKEND/scim/v2/Users/'?filter=externalId%20eq%20%22me%40example.com%22'
```

Wire handle: same request, just replace the query part with

```bash
'?filter=userName%20eq%20%22me%22'
```

**Update a specific user**

For each `PUT` request, you need to provide the full JSON object. All omitted fields will be set to `null`. (If you do not have an up-to-date user present, just `GET` one right before the `PUT`.)

```{code-block} bash
:linenos: true

  export SCIM_USER='{
  "schemas" : ["urn:ietf:params:scim:schemas:core:2.0:User"],
  "externalId" : "rnick@example.com",
  "userName" : "newnick",
  "displayName" : "The New Nick"
  }'
```

```{code-block} bash
:linenos: true

  curl -X PUT \
  --header "Authorization: Bearer $SCIM_TOKEN" \
  --header 'Content-Type: application/json;charset=utf-8' \
  -d "$SCIM_USER" \
  $WIRE_BACKEND/scim/v2/Users/$STORED_USER_ID
```

**Deactivate user**

It is possible to temporarily deactivate a user (and reactivate them later) by setting their `active` property to `true`/`false` without affecting their device history. (`active=false` changes the Wire user status to `suspended`.)
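A deactivation round-trip can be sketched with `jq`: fetch the user as in "Get a specific user", flip the `active` flag, and `PUT` the result back. The JSON below is a stand-in for a real `GET` response:

```bash
# Stand-in for the JSON returned by GET /scim/v2/Users/$STORED_USER_ID
STORED_USER='{"schemas":["urn:ietf:params:scim:schemas:core:2.0:User"],"userName":"nick","active":true}'

# Flip the flag; this is the body you would PUT back to deactivate the user
UPDATED_USER=$(echo "$STORED_USER" | jq '.active = false')
echo "$UPDATED_USER" | jq .active
```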
+ +**Delete user** + +```{code-block} bash +:linenos: true + + curl -X DELETE \ + --header "Authorization: Bearer $SCIM_TOKEN" \ + $WIRE_BACKEND/scim/v2/Users/$STORED_USER_ID +``` diff --git a/docs/src/understand/single-sign-on/token-step-01.png b/docs/src/how-to/single-sign-on/understand/token-step-01.png similarity index 100% rename from docs/src/understand/single-sign-on/token-step-01.png rename to docs/src/how-to/single-sign-on/understand/token-step-01.png diff --git a/docs/src/understand/single-sign-on/token-step-02.png b/docs/src/how-to/single-sign-on/understand/token-step-02.png similarity index 100% rename from docs/src/understand/single-sign-on/token-step-02.png rename to docs/src/how-to/single-sign-on/understand/token-step-02.png diff --git a/docs/src/understand/single-sign-on/token-step-03.png b/docs/src/how-to/single-sign-on/understand/token-step-03.png similarity index 100% rename from docs/src/understand/single-sign-on/token-step-03.png rename to docs/src/how-to/single-sign-on/understand/token-step-03.png diff --git a/docs/src/understand/single-sign-on/token-step-04.png b/docs/src/how-to/single-sign-on/understand/token-step-04.png similarity index 100% rename from docs/src/understand/single-sign-on/token-step-04.png rename to docs/src/how-to/single-sign-on/understand/token-step-04.png diff --git a/docs/src/understand/single-sign-on/token-step-05.png b/docs/src/how-to/single-sign-on/understand/token-step-05.png similarity index 100% rename from docs/src/understand/single-sign-on/token-step-05.png rename to docs/src/how-to/single-sign-on/understand/token-step-05.png diff --git a/docs/src/understand/single-sign-on/token-step-06.png b/docs/src/how-to/single-sign-on/understand/token-step-06.png similarity index 100% rename from docs/src/understand/single-sign-on/token-step-06.png rename to docs/src/how-to/single-sign-on/understand/token-step-06.png diff --git a/docs/src/index.md b/docs/src/index.md new file mode 100644 index 0000000000..135a0aa03f --- 
/dev/null +++ b/docs/src/index.md @@ -0,0 +1,50 @@ +% Wire documentation master file, created by +% sphinx-quickstart on Thu Jul 18 13:44:11 2019. +% You can adapt this file completely to your liking, but it should at least +% contain the root `toctree` directive. + +# Welcome to Wire's documentation! + +If you are a Wire end-user, please check out our [support pages](https://support.wire.com/). + +The targeted audience of this documentation is: + +- the curious power-user (people who want to understand how the server components of Wire work) +- on-premise operators/administrators (people who want to self-host Wire-Server on their own datacentres or cloud) +- developers (people who are working with the wire-server source code) + +If you are a developer, you may want to check out the "Notes for developers" first. + +This documentation may be expanded in the future to cover other aspects of Wire. + +```{toctree} +:caption: 'Contents:' +:glob: true +:maxdepth: 1 + +Release notes + +Installation +Administration +Connecting Wire Clients +Optional Configuration +Understanding wire-server components +Single-Sign-On and user provisioning +Client API documentation +Security responses +Notes for developers +``` + +% Overview + +% commented out for now... + +% Indices and tables + +% ================== + +% * :ref:`genindex` + +% * :ref:`modindex` + +% * :ref:`search` diff --git a/docs/src/index.rst b/docs/src/index.rst deleted file mode 100644 index 28721d822a..0000000000 --- a/docs/src/index.rst +++ /dev/null @@ -1,43 +0,0 @@ -.. Wire documentation master file, created by - sphinx-quickstart on Thu Jul 18 13:44:11 2019. - You can adapt this file completely to your liking, but it should at least - contain the root `toctree` directive. - -Welcome to Wire's documentation! -=============================================== - -If you are a Wire end-user, please check out our `support pages `_. 
- -The targeted audience of this documentation is: - -* the curious power-user (people who want to understand how the server components of Wire work) -* on-premise operators/administrators (people who want to self-host Wire-Server on their own datacentres or cloud) -* developers (people who are working with the wire-server source code) - -If you are a developer, you may want to check out the "Notes for developers" first. - -This documentation may be expanded in the future to cover other aspects of Wire. - -.. toctree:: - :maxdepth: 1 - :caption: Contents: - :glob: - - Release notes - Administrator's Guide - Understanding wire-server components - Administrator's manual: single-sign-on and user provisioning - Client API documentation - Security responses - Notes for developers - -.. Overview - -.. commented out for now... - -.. Indices and tables -.. ================== - -.. * :ref:`genindex` -.. * :ref:`modindex` -.. * :ref:`search` diff --git a/docs/src/release-notes.rst b/docs/src/release-notes.md similarity index 51% rename from docs/src/release-notes.rst rename to docs/src/release-notes.md index 497af3cca3..478db87668 100644 --- a/docs/src/release-notes.rst +++ b/docs/src/release-notes.md @@ -1,14 +1,13 @@ -.. _release-notes: +(release-notes)= -Release notes -------------- +# Release notes This page previously contained the release notes for the project, and they were manually updated each time a new release was done, due to limitations in Github's «releases» feature. -However, Github since updated the feature, making this page un-necessary. +However, Github since updated the feature, making this page un-necessary. -Go to → `GitHub - wireapp/wire-server: Wire back-end services `_ +Go to → [GitHub - wireapp/wire-server: Wire back-end services](https://github.com/wireapp/wire-server/) -→ Look at releases on right hand side. They are shown by date of release. `Release Notes `_ +→ Look at releases on right hand side. They are shown by date of release. 
[Release Notes](https://github.com/wireapp/wire-server/releases) -→ Open the CHANGELOG.md. This will give you chart version. \ No newline at end of file +→ Open the CHANGELOG.md. This will give you chart version. diff --git a/docs/src/security-responses/2021-12-15_log4shell.md b/docs/src/security-responses/2021-12-15_log4shell.md new file mode 100644 index 0000000000..b567c21e74 --- /dev/null +++ b/docs/src/security-responses/2021-12-15_log4shell.md @@ -0,0 +1,90 @@ +# 2021-12 - log4shell + +Last updated: 2021-12-15 + +This page concerns ON-PREMISE (i.e. self-hosted) installations of wire-server as documented in and its possible vulnerability to “log4shell” / CVE-2021-44228 and CVE-2021-45046. + +## Introduction + +The “log4shell” vulnerability ([CVE-2021-44228](https://www.cve.org/CVERecord?id=CVE-2021-44228) and [CVE-2021-45046](https://www.cve.org/CVERecord?id=CVE-2021-45046)) concerns a logging library “log4j” used in Java or JVM software components. + +- Wire-server’s source code is not written in a JVM language (it's written mostly in Haskell), and as such, is not vulnerable. + +- Wire-server makes use of Cassandra, which is running on the JVM, however as of version 2.1 no longer makes use of log4j (it uses logback). Since the start of Wire’s on-premise product, we have used Cassandra versions > 3 (currently 3.11), which is not vulnerable. + +- Wire-server makes use of **Elasticsearch**, which **does use log4j. See the section below for details**. 
+

- All other components that Wire-server’s current and near-future on-premise product relies on are not based on the JVM, and as such are not vulnerable:

  > - Calling restund/SFT servers: written in C
  > - Minio: written in Go
  > - Redis: written in C
  > - Nginx: written in C
  > - Wire-Server: written in Haskell
  > - Wire-Frontend (webapp, team settings): written in Javascript / NodeJS
  > - Fake-aws components: based on localstack, written in python or, for SQS, in ruby
  > - fake-aws-dynamodb: this component is JVM based and was used in the past on on-premise installations, but should not be in use anymore these days. If it is still in use in your environment, please stop using it: all recent versions of wire-server since June 2021 will not make use of that component anymore. Even if still in use, it does not store or log any user-provided data nor is it internet-facing, and as such should pose little to no risk.
  > - Upcoming releases may have wire-server-metrics: prometheus (Ruby), node-exporter (Golang) and Grafana (Golang)
  > - Upcoming releases may have: Logging/Kibana: fluent-bit (C), Kibana (JavaScript), ElasticSearch (covered in the section below)

## Elasticsearch

Wire uses Elasticsearch for storing indexes used when searching for users in Wire.

Elasticsearch clusters are not directly user-facing or internet-facing, and it is therefore not immediately possible to inject problematic exploit strings into elasticsearch’s own logging (i.e. elasticsearch stores user-provided data, but doesn’t itself log this data).

*Example: A Wire user display name will be stored inside elasticsearch, but not logged by elasticsearch (elasticsearch logs mostly contain information about connectivity to other elasticsearch processes)*

Hypothetically, the log4shell exploit could be combined with another exploit which would allow an attacker to get Elasticsearch to log some of the data stored inside its cluster.
As elasticsearch is not internet-facing, this doesn’t look easy to exploit.

In addition, as per Elastic’s [own information on the matter](https://discuss.elastic.co/t/apache-log4j2-remote-code-execution-rce-vulnerability-cve-2021-44228-esa-2021-31/291476):

> "Elasticsearch 6 and 7 are not susceptible to remote code execution with this vulnerability due to our use of the Java Security Manager. Investigation into Elasticsearch 5 is ongoing. Elasticsearch running on JDK8 or below is susceptible to an information leak via DNS which is fixable by the JVM property identified below. The JVM option identified below is effective for Elasticsearch versions 5.5+, 6.5+, and 7+"

The JVM property referred to is `-Dlog4j2.formatMsgNoLookups=true`

[Update 15th December about CVE-2021-45046 from Elastic](https://discuss.elastic.co/t/apache-log4j2-remote-code-execution-rce-vulnerability-cve-2021-44228-esa-2021-31/291476):

> "Update 15 December: A further vulnerability (CVE-2021-45046) was disclosed on December 14th after it was found that the fix to address CVE-2021-44228 in Apache Log4j 2.15.0 was incomplete in certain non-default configurations. Our guidance for Elasticsearch \[...\] are unchanged by this new vulnerability"

Wire on-premise installations contain a version of Elasticsearch between \[`6.6.0` and `6.8.18`\] at the time of writing.

**As such, while ElasticSearch is affected, it is A. only susceptible to an information leak, not to remote code execution and B.
not easily exploitable due to the way Wire uses ElasticSearch.**

Still, if you’d like to avoid even the potential information leak problem:

## Disable log4jLookups:

If you have followed our official documentation on [https://docs.wire.com](https://docs.wire.com), then Elasticsearch on premise was set up using [wire-server-deploy](https://github.com/wireapp/wire-server-deploy) using the `./ansible/elasticsearch.yml` playbook, which installs a vulnerable Log4J `2.11.1`:

```
find / | grep -i log4j
./etc/elasticsearch/HOSTNAME/log4j2.properties
./usr/share/elasticsearch/lib/log4j-core-2.11.1.jar
./usr/share/elasticsearch/lib/log4j-1.2-api-2.11.1.jar
./usr/share/elasticsearch/lib/log4j-api-2.11.1.jar
```

The BSI [recommends](https://www.bsi.bund.de/SharedDocs/Cybersicherheitswarnungen/DE/2021/2021-549032-10F2.pdf?__blob=publicationFile&v=3) mitigating this by setting `log4j2.formatMsgNoLookups` to `True` in the JVM options. Elastic [recommends](https://discuss.elastic.co/t/apache-log4j2-remote-code-execution-rce-vulnerability-cve-2021-44228-esa-2021-31/291476) the same mitigation.

You can do this in the concrete Wire on-premise case as follows.

First, SSH to all your elasticsearch machines and do the following:

```shell
find /etc/elasticsearch | grep jvm.options

# set this variable with the filepath found from above, usually something like
# /etc/elasticsearch//jvm.options
JVM_OPTIONS_FILE=

# run the following to add the mitigation log4j flag (command is idempotent)
grep "\-Dlog4j2.formatMsgNoLookups=True" "$JVM_OPTIONS_FILE" || echo "-Dlog4j2.formatMsgNoLookups=True" >> "$JVM_OPTIONS_FILE"
```

Next, restart your cluster using instructions provided in {ref}`restart-elasticsearch`.
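The comment above claims the `grep ... || echo ...` line is idempotent; a quick self-contained check (using a throwaway temp file instead of the real `jvm.options`) confirms that running it twice still leaves exactly one copy of the flag:

```shell
JVM_OPTIONS_FILE=$(mktemp)   # throwaway stand-in for the real jvm.options

# run the mitigation line twice; the grep guard prevents a duplicate entry
for run in 1 2; do
  grep -q -- "-Dlog4j2.formatMsgNoLookups=True" "$JVM_OPTIONS_FILE" \
    || echo "-Dlog4j2.formatMsgNoLookups=True" >> "$JVM_OPTIONS_FILE"
done

COUNT=$(grep -c -- "-Dlog4j2.formatMsgNoLookups=True" "$JVM_OPTIONS_FILE")
echo "$COUNT"
rm -f "$JVM_OPTIONS_FILE"
```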
+ +## Further information + +- A mitigation for this with fresh on-premise installations is introduced in [https://github.com/wireapp/wire-server-deploy/pull/526](https://github.com/wireapp/wire-server-deploy/pull/526) +- We have of course fully applied the above counter measures to our cloud offering. We have no evidence that this vulnerability was used to launch an attack before this. Any hypothetical undetected attack would have required additional security vulnerabilities to be successful. diff --git a/docs/src/security-responses/2021-12-15_log4shell.rst b/docs/src/security-responses/2021-12-15_log4shell.rst deleted file mode 100644 index 741d2622cc..0000000000 --- a/docs/src/security-responses/2021-12-15_log4shell.rst +++ /dev/null @@ -1,103 +0,0 @@ -2021-12 - log4shell --------------------- - -Last updated: 2021-12-15 - -This page concerns ON-PREMISE (i.e. self-hosted) installations of wire-server as documented in https://docs.wire.com and its possible vulnerability to “log4shell” / CVE-2021-44228 and CVE-2021-45046. - -Introduction -~~~~~~~~~~~~~ - -The “log4shell” vulnerability (`CVE-2021-44228 `__ and `CVE-2021-45046 `__) concerns a logging library “log4j” used in Java or JVM software components. - -* Wire-server’s source code is not written in a JVM language (it's written mostly in Haskell), and as such, is not vulnerable. - -* Wire-server makes use of Cassandra, which is running on the JVM, however as of version 2.1 no longer makes use of log4j (it uses logback). Since the start of Wire’s on-premise product, we have used Cassandra versions > 3 (currently 3.11), which is not vulnerable. - -* Wire-server makes use of **Elasticsearch**, which **does use log4j. See the section below for details**. 
- -* All other components Wire-server’s on-premise current and near-time-future product relies on are not based on the JVM and as such are not vulnerable: - - * Calling restund/SFT servers: written in C - - * Minio: written in Go - - * Redis: written in C - - * Nginx: written in C - - * Wire-Server: written in Haskell - - * Wire-Frontend (webapp, team settings): written in Javascript / NodeJS - - * Fake-aws components: based on localstack written in python or for SQS written in ruby - - * fake-aws-dynamodb: this component is JVM based and was used in the past on on-premise installations, but should not be in use anymore these days. If it is still in use in your environment, please stop using it: all recent versions of wire-server since June 2021 will not make use of that component anymore. Even if still in use, it does not store or log any user-provided data nor is it internet-facing and as such should pose little to no risk. - - * Upcoming releases may have wire-server-metrics: prometheus (Ruby), node-exporter (Golang) and Grafana (Golang) - - * Upcoming releases may have: Logging/Kibana: fluent-bit (C), Kibana (JavaScript), ElasticSearch (covered in section below) - -Elasticsearch -~~~~~~~~~~~~~ - -Wire uses Elasticsearch for for storing indexes used when searching for users in Wire. - -Elasticsearch clusters are not directly user-facing or internet-facing and it is therefore not immediately possible to inject problematic exploit strings into elasticsearch’s own logging (i.e. elasticsearch stores user-provided data, but doesn’t itself log this data). - -*Example: A Wire user display name will be stored inside elasticsearch, but not logged by elasticsearch (elasticsearch logs mostly contain information about connectivity to other elasticsearch processes)* - -Hypothetically, the log4shell exploit could be combined with another exploit which would allow an attacker to get Elasticsearch to log some of the data stored inside its cluster. 
As elasticsearch is not internet-facing, this doesn’t look easy to exploit. - -In addition as per Elastics’s `own information on the matter `__ - - "Elasticsearch 6 and 7 are not susceptible to remote code execution with this vulnerability due to our use of the Java Security Manager. Investigation into Elasticsearch 5 is ongoing. Elasticsearch running on JDK8 or below is susceptible to an information leak via DNS which is fixable by the JVM property identified below. The JVM option identified below is effective for Elasticsearch versions 5.5+, 6.5+, and 7+" - -The JVM property referred to is ``-Dlog4j2.formatMsgNoLookups=true`` - -`Update 15th December about CVE-2021-45046 from Elasitic `__: - - "Update 15 December: A further vulnerability (CVE-2021-45046) was disclosed on December 14th after it was found that the fix to address CVE-2021-44228 in Apache Log4j 2.15.0 was incomplete in certain non-default configurations. Our guidance for Elasticsearch [...] are unchanged by this new vulnerability" - -Wire on-premise installations contain a version of Elasticsearch between [``6.6.0`` and ``6.8.18``] at the time of writing. - -**As such, while ElasticSearch is affected, it is A. only susceptible to an information leak, not to remote code execution and B. 
not easily exploitable due to the way Wire uses ElasticSearch.** - -Still, if you’d like to avoid even the potential information leak problem: - -Disable log4jLookups: -~~~~~~~~~~~~~~~~~~~~~ - -If you have followed our official documentation on ``__, then Elasticsearch on premise was set up using `wire-server-deploy `__ using the ``./ansible/elasticsearch.yml`` playbook, which installs a vulnerable Log4J ``2.11.1``:: - - find / | grep -i log4j - ./etc/elasticsearch/HOSTNAME/log4j2.properties - ./usr/share/elasticsearch/lib/log4j-core-2.11.1.jar - ./usr/share/elasticsearch/lib/log4j-1.2-api-2.11.1.jar - ./usr/share/elasticsearch/lib/log4j-api-2.11.1.jar - -The BSI `recommends `__ to mitigate setting the ``log4j2.formatMsgNoLookups`` to True in the JVM options. Elastic `recommends `__ the same mitigation. - -You can do this in the concrete Wire on-premise case using: - -First, ssh to all your elasticsearch machines and do the following: - -.. code:: shell - - find /etc/elasticsearch | grep jvm.options - - # set this variable with the filepath found from above, usually something like - # /etc/elasticsearch//jvm.options - JVM_OPTIONS_FILE= - - # run the following to add the mitigation log4j flag (command is idempotent) - grep "\-Dlog4j2.formatMsgNoLookups=True" "$JVM_OPTIONS_FILE" || echo "-Dlog4j2.formatMsgNoLookups=True" >> "$JVM_OPTIONS_FILE" - -Next, restart your cluster using instructions provided in :ref:`restart-elasticsearch`. - -Further information -~~~~~~~~~~~~~~~~~~~ - -* A mitigation for this with fresh on-premise installations is introduced in ``__ - -* We have of course fully applied the above counter measures to our cloud offering. We have no evidence that this vulnerability was used to launch an attack before this. Any hypothetical undetected attack would have required additional security vulnerabilities to be successful. 
diff --git a/docs/src/security-responses/index.md b/docs/src/security-responses/index.md new file mode 100644 index 0000000000..a0c58f66ff --- /dev/null +++ b/docs/src/security-responses/index.md @@ -0,0 +1,14 @@ +(security-responses)= + +# Security responses + +% comment: The toctree directive below takes a list of the pages you want to appear in order, +% and '*' is used to include any other pages in the federation directory in alphabetical order + +```{toctree} +:glob: true +:maxdepth: 1 +:reversed: true + +* +``` diff --git a/docs/src/security-responses/index.rst b/docs/src/security-responses/index.rst deleted file mode 100644 index 1c1e3077c0..0000000000 --- a/docs/src/security-responses/index.rst +++ /dev/null @@ -1,16 +0,0 @@ -.. _security_responses: - -++++++++++++++++++ -Security responses -++++++++++++++++++ - -.. - comment: The toctree directive below takes a list of the pages you want to appear in order, - and '*' is used to include any other pages in the federation directory in alphabetical order - -.. toctree:: - :maxdepth: 1 - :glob: - :reversed: - - * diff --git a/docs/src/understand/api-client-perspective/authentication.md b/docs/src/understand/api-client-perspective/authentication.md new file mode 100644 index 0000000000..51cb738b85 --- /dev/null +++ b/docs/src/understand/api-client-perspective/authentication.md @@ -0,0 +1,435 @@ +# Authentication + +% useful vim replace commands when porting markdown -> restructured text: + +% :%s/.. raw:: html//g + +% :%s/ /.. _\1:/gc + +## Access Tokens + +The authentication protocol used by the API is loosely inspired by the +[OAuth2 protocol](http://oauth.net/2/). As such, API requests are +authorised through so-called [bearer +tokens](https://tools.ietf.org/html/rfc6750). For as long as a bearer +token is valid, it grants access to the API under the identity of the +user whose credentials have been used for the [login]. 
The current validity of access tokens is `15 minutes`; however, that may
change at any time without prior notice.

In order to obtain new access tokens without having to ask the user for
their credentials again, so-called "user tokens" are issued in the form of a `zuid` HTTP
[cookie](https://en.wikipedia.org/wiki/HTTP_cookie). These cookies
have a long lifetime (if {ref}`persistent ` typically
at least a few months) and their use is strictly limited to the
{ref}`/access ` endpoint used for token refresh.
{ref}`Persistent ` access cookies are regularly
refreshed as part of an {ref}`access token refresh `.

An access cookie is obtained either directly after registration or through a
subsequent {ref}`login `. A successful login provides both an access
cookie and an access token. Both access token and cookie must be stored safely
and kept confidential. User passwords should not be stored.

As of yet, there is no concept of authorising third-party applications to
perform operations on the API on behalf of a user (notable exceptions:
{ref}`sso`). Such functionality may be provided in the future through
standardised OAuth2 flows.

To authorise an API request, the access token must be provided via the
HTTP `Authorization` header with the `Bearer` scheme as follows:

```
Authorization: Bearer fmmLpDSjArpksFv57r5rDrzZZlj...
```

While the API currently also supports passing the access token in the
query string of a request, this approach is highly discouraged as it
unnecessarily exposes access tokens (e.g. in server logs) and thus might
be removed in the future.

(login)=

## Login - `POST /login`

A login is the process of authenticating a user either through a known secret in
a {ref}`password login ` or by proving ownership of a verified
phone number associated with an account in an {ref}`SMS login `.
The response to a successful login contains an access cookie in a `Set-Cookie`
header and an access token in the JSON response body.

(login-cookies)=

### Cookies

There is a hard limit for the number of session-scoped access cookies and the same
amount of persistent access cookies per user account. When this number is
reached, old cookies are removed as new ones are issued: the cookies
with the oldest expiration timestamp are removed first. The removal takes the
type of the cookie being issued into account, i.e. session cookies are replaced by
session cookies, and persistent cookies by persistent cookies.

To prevent performance issues and malicious usage of the API, there is a
throttling mechanism in place. When the maximum number of cookies of one type
has been issued, logins are checked to make sure they don't happen too
frequently (too quickly after one another).

In case of throttling, no cookie gets issued. The error response ([HTTP status
code 429](https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/429)) has
a `Retry-After` header which specifies the time to wait, in seconds, before the
next request will be accepted.

Being throttled is a clear indicator of incorrect API usage. There is no need to
log in many times in a row on the same device. Instead, the cookie should be
re-used.

The corresponding backend configuration settings are described in
{ref}`auth-cookie-config`.

(login-password)=

### Password Login

To perform a password login, send a `POST` request to the `/login`
endpoint, providing either a verified email address or phone number and
the corresponding password. For example:

```
POST /login HTTP/1.1
[headers omitted]

{
  "email": "me@wire.com",
  "password": "Quo2Booz"
}
```

If a phone number is used, the `phone` field is used instead of
`email`. If a @handle is used, the `handle` field is used instead of
`email` (note that the handle value should be sent *without* the `@`
symbol).
Assuming the credentials are correct, the API will respond with +a `200 OK` and an access token and cookie: + +``` +HTTP/1.1 200 OK +zuid=...; Expires=Fri, 02-Aug-2024 09:15:54 GMT; Domain=zinfra.io; Path=/access; HttpOnly; Secure +[other headers omitted] + +{ + "expires_in": 900, + "access_token": "fmmLpDSjArpksFv57r5rDrzZZlj...", + "token_type": "Bearer" +} +``` + +% + +> The `Domain` of the cookie will be different depending on the +> environment. + +The value of `expires_in` is the number of seconds that the +`access_token` is valid from the moment it was issued. + +As of yet, the `token_type` is always `Bearer`. + +(login-sms)= + +### SMS Login + +To perform an SMS login, first request an SMS code to be sent to a +verified phone number: + +``` +POST /login/send HTTP/1.1 +[headers omitted] + +{ + "phone": "+1234567890" +} +``` + +An SMS with a short-lived login code will be sent. Upon receiving the +SMS and extracting the code from it, the login can be performed using +the `phone` and `code` as follows: + +``` +POST /login HTTP/1.1 +[headers omitted] + +{ + "phone": "+1234567890", + "code": "123456" +} +``` + +A successful response is identical to that of a {ref}`password +login `. + +(login-persistent)= + +### Persistent Logins + +By default, access cookies are issued as [session +cookies](https://en.wikipedia.org/wiki/HTTP_cookie#Session_cookie) +with a validity of 1 week. Furthermore, these session cookies are not +refreshed as part of an {ref}`access token refresh `. To +request a `persistent` access cookie which does get refreshed, specify +the `persist=true` parameter during a login: + +``` +POST /login?persist=true HTTP/1.1 +[headers omitted] + +{ + "phone": "+1234567890", + "code": "123456" +} +``` + +All access cookies returned on registration are persistent. + +(token-refresh)= + +### FAQ: is my cookie a persistent cookie or a session cookie? 
+

When you log in **without** the `persist=true` query parameter, or
with `persist=false`, you get a `session cookie`, which means it has no
expiration date set, and will expire when you close the browser (on the
backend it has a validity of max 1 day or 1 week; configurable, see the
current config in [hegemony](https://github.com/zinfra/hegemony)).
Example **session cookie**:

```
POST /login?persist=false

Set-Cookie: zuid=(redacted); Path=/access; Domain=zinfra.io; HttpOnly; Secure
```

When you log in **with** `persist=true`, you get a persistent cookie,
which means it has *some* expiration date. In production this is
currently 56 days (again, configurable, check current config in
[hegemony](https://github.com/zinfra/hegemony)) and can be renewed
during token refresh. Example **persistent cookie**:

```
POST /login?persist=true

Set-Cookie: zuid=(redacted); Path=/access; Expires=Thu, 10-Jan-2019 10:43:28 GMT; Domain=zinfra.io; HttpOnly; Secure
```

## Token Refresh - `POST /access`

Since access tokens have a relatively short lifetime to limit the time
window of abuse for a captured token, they need to be regularly
refreshed. In order to refresh an access token, send a `POST` request
to `/access`, including the access cookie in the `Cookie` header and
the old (possibly expired) access token in the `Authorization` header:

```
POST /access HTTP/1.1
Authorization: Bearer fmmLpDSjArpksFv57r5rDrzZZlj...
Cookie: zuid=...
[other headers omitted]


```

Providing the old access token is not required but strongly recommended
as it will link the new access token to the old, enabling the API to see
the new access token as a continued session of the same client.

As part of an access token refresh, the response may also contain a new
`zuid` access cookie in the form of a `Set-Cookie` header. A client must
expect a new `zuid` cookie as part of any access token refresh and
replace the existing cookie appropriately.
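Since a refreshed token again carries an `expires_in` field, a client can schedule its next refresh from it. A small sketch (the JSON below is a stand-in for a real login or refresh response body):

```bash
# Stand-in for a real login/refresh response
RESPONSE='{"expires_in":900,"access_token":"fmmLpDSjArpksFv57r5rDrzZZlj","token_type":"Bearer"}'

EXPIRES_IN=$(echo "$RESPONSE" | jq -r .expires_in)

# Refresh one minute before the token actually expires
REFRESH_IN=$((EXPIRES_IN - 60))
REFRESH_AT=$(( $(date +%s) + REFRESH_IN ))
echo "next refresh in $REFRESH_IN seconds"
```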
+
+(cookies-1)=
+
+## Cookie Management
+
+(cookies-logout)=
+
+### Logout - `POST /access/logout`
+
+An explicit logout effectively deletes the cookie used to perform the
+operation:
+
+```
+POST /access/logout HTTP/1.1
+Authorization: Bearer fmmLpDSjArpksFv57r5rDrzZZlj...
+Cookie: zuid=...
+[other headers omitted]
+
+
+```
+
+Afterwards, the cookie that was sent as part of the `Cookie` header is
+no longer valid.
+
+If a client offers an explicit logout, it must perform this operation.
+An explicit logout is especially important for Web clients.
+
+(cookies-labels)=
+
+### Labels
+
+Cookies can be labeled by specifying a `label` during login or
+registration, e.g.:
+
+```
+POST /login?persist=true HTTP/1.1
+[headers omitted]
+
+{
+  "phone": "+1234567890",
+  "code": "123456",
+  "label": "Google Nexus 5"
+}
+```
+
+Specifying a label is recommended as it helps to identify the cookies in a
+user-friendly way and allows {ref}`selective revocation <cookies-revoke>` based
+on the labels.
+
+(cookies-list)=
+
+### Listing Cookies - `GET /cookies`
+
+To list the cookies currently associated with an account, send a `GET`
+request to `/cookies`. The response will contain a list of cookies,
+e.g.:
+
+```
+HTTP/1.1 200 OK
+[other headers omitted]
+
+{
+  "cookies": [
+    {
+      "time": "2015-06-04T14:29:23.000Z",
+      "id": 967153183,
+      "type": "session",
+      "label": null
+    },
+    {
+      "time": "2015-06-04T14:44:23.000Z",
+      "id": 942451749,
+      "type": "session",
+      "label": null
+    },
+    ...
+  ]
+}
+```
+
+Note that expired cookies are not removed immediately when they expire,
+but only as new cookies are issued.
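A client consuming this response might, for example, locate its oldest cookie (the cookies with the oldest timestamps are the first to be replaced once the per-account cookie limit is reached). A minimal Python sketch, assuming the response shape shown above:

```python
import json

# The GET /cookies response from the example above.
response_body = """
{
  "cookies": [
    {"time": "2015-06-04T14:29:23.000Z", "id": 967153183,
     "type": "session", "label": null},
    {"time": "2015-06-04T14:44:23.000Z", "id": 942451749,
     "type": "session", "label": null}
  ]
}
"""

cookies = json.loads(response_body)["cookies"]
# ISO-8601 timestamps in a uniform format sort correctly as plain strings,
# so the first entry after sorting is the oldest cookie.
oldest = sorted(cookies, key=lambda c: c["time"])[0]
print(oldest["id"])  # 967153183
```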
+
+(cookies-revoke)=
+
+### Revoking Cookies - `POST /cookies/remove`
+
+Cookies can be removed individually or in bulk, either by specifying the full
+cookie structure as returned by {ref}`GET /cookies <cookies-list>` or just
+their labels, in a `POST` request to `/cookies/remove`, along with the
+user's credentials:
+
+```
+POST /cookies/remove HTTP/1.1
+[headers omitted]
+
+{
+  "ids": [{}, {}, ...],
+  "labels": ["", "", ...],
+  "email": "me@wire.com",
+  "password": "secret"
+}
+```
+
+Cookie removal currently requires an account with an email address and
+password.
+
+(password-reset)=
+
+## Password Reset - `POST /password-reset`
+
+A password reset can be used to set a new password if the existing password
+associated with an account has been forgotten. This is not to be confused with
+the act of merely changing your password for the purpose of password rotation or
+because you suspect your current password to be compromised.
+
+### Initiate a Password Reset
+
+To initiate a password reset, send a `POST` request to
+`/password-reset`, specifying either a verified email address or phone
+number for the account in question:
+
+```
+POST /password-reset HTTP/1.1
+[headers omitted]
+
+{
+  "phone": "+1234567890"
+}
+```
+
+For an email address, the `email` field would be used instead. As a
+result of a successful request, either a password reset key and code are
+sent via email, or a password reset code is sent via SMS, depending on
+whether an email address or a phone number was provided. Password reset
+emails contain a link to the [wire.com](https://www.wire.com/)
+website, which guides the user through the completion of the password
+reset and performs the necessary requests on the user's behalf. A
+password reset initiated with a phone number has to be completed from
+the mobile client application itself.
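As a small illustration (a hypothetical client-side helper, not part of the API itself), picking the right field for the initiation request body:

```python
def password_reset_body(identifier: str) -> dict:
    """Build the JSON body for POST /password-reset.

    Sketch only: phone numbers are assumed to be given in E.164 form
    (starting with "+"); anything else is treated as an email address.
    """
    field = "phone" if identifier.startswith("+") else "email"
    return {field: identifier}

print(password_reset_body("+1234567890"))  # {'phone': '+1234567890'}
```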
+
+Once a password reset has been initiated for an email address or phone
+number, no further password reset can be initiated for the same email
+address or phone number before the prior reset is completed or times
+out. The current timeout for an initiated password reset is
+`10 minutes`.
+
+### Complete a Password Reset
+
+To complete a password reset, the password reset code, together with the
+new password and the `email` or `phone` used when initiating the
+reset (or the opaque `key` sent by mail), is sent to
+`/password-reset/complete` in a `POST` request:
+
+```
+POST /password-reset/complete HTTP/1.1
+[headers omitted]
+
+{
+  "phone": "+1234567890",
+  "code": "123456",
+  "password": "new-secret-password"
+}
+```
+
+There is a maximum of `3` attempts at completing a password reset,
+after which the password reset code becomes invalid and a new password
+reset must be initiated.
+
+A completed password reset results in all access cookies being revoked,
+requiring the user to {ref}`login <login>` again.
+
+## Related topics: SSO, Legalhold
+
+(sso)=
+
+### Single Sign-On
+
+For users that are part of a team for which a team admin has configured
+SSO (Single Sign-On), authentication can happen through SAML.
+
+More information:
+
+- {ref}`FAQ `
+- [setup howtos for various IdP vendors](https://docs.wire.com/how-to/single-sign-on/index.html)
+- [a few fragments that may help admins](https://github.com/wireapp/wire-server/blob/develop/docs/reference/spar-braindump.md)
+
+### LegalHold
+
+Users that are part of a team for which a team admin has configured "LegalHold" can add a so-called "LegalHold" device. The endpoints used to authenticate for a "LegalHold" device are the same as for regular users, but the access tokens they get can only use a restricted set of API endpoints.
See also [legalhold documentation on wire-server](https://github.com/wireapp/wire-server/blob/develop/docs/reference/team/legalhold.md) diff --git a/docs/src/understand/api-client-perspective/authentication.rst b/docs/src/understand/api-client-perspective/authentication.rst deleted file mode 100644 index 52630c58a6..0000000000 --- a/docs/src/understand/api-client-perspective/authentication.rst +++ /dev/null @@ -1,476 +0,0 @@ -Authentication -============== - -.. useful vim replace commands when porting markdown -> restructured text: -.. :%s/.. raw:: html//g -.. :%s/ /.. _\1:/gc - -Access Tokens -------------- - -The authentication protocol used by the API is loosely inspired by the -`OAuth2 protocol `__. As such, API requests are -authorised through so-called `bearer -tokens `__. For as long as a bearer -token is valid, it grants access to the API under the identity of the -user whose credentials have been used for the login_. The -current validity of access tokens is ``15 minutes``, however, that may -change at any time without prior notice. - -In order to obtain new access tokens without having to ask the user for -his credentials again, so-called "user tokens" are issued which are -issued in the form of a ``zuid`` HTTP -`cookie `__. These cookies -have a long lifetime (if `persistent <#login-persistent>`__, typically -at least a few months) and their use is strictly limited to the -`/access <#token-refresh>`__ endpoint used for token refresh. -`Persistent <#login-persistent>`__ access cookies are regularly -refreshed as part of an `access token refresh <#token-refresh>`__. - -An access cookie is obtained either directly after -`registration `__ or through a -subsequent `login <#login>`__. A successful login provides both an -access cookie and and access token. Both access token and cookie must be -stored safely and kept confidential. User passwords should not be -stored. 
- -As of yet, there is no concept of authorising third-party applications to -perform operations on the API on behalf of a user (Notable exceptions: -:ref:`sso`). Such functionality may be provided in the future through -standardised OAuth2 flows. - -To authorise an API request, the access token must be provided via the -HTTP ``Authorization`` header with the ``Bearer`` scheme as follows: - -:: - - Authorization: Bearer fmmLpDSjArpksFv57r5rDrzZZlj... - -While the API currently also supports passing the access token in the -query string of a request, this approach is highly discouraged as it -unnecessarily exposes access tokens (e.g. in server logs) and thus might -be removed in the future. - -.. _login: - -Login - ``POST /login`` ------------------------ - -A login is the process of authenticating a user either through a known -secret in a `password login <#login-password>`__ or by proving ownership -of a verified phone number associated with an account in an `SMS -login <#login-sms>`__. The response to a successful login contains an -access cookie in a ``Set-Cookie`` header and an access token in the JSON -response body. - -.. _login-cookies: - -Cookies -~~~~~~~ - -There is a hard limit for the number of session-scoped access cookies and the same -amount of persistent access cookies per user account. When this number is -reached, old cookies are removed when new ones are issued. Thereby, the cookies -with the oldest expiration timestamp are removed first. The removal takes the -type of the cookie to issue into account. I.e. session cookies are replaced by -session cookies, persistent cookies are replaced by persistent cookies. - -To prevent performance issues and malicious usages of the API, there is a -throttling mechanism in place. When the maximum number of cookies of one type -are issued, it's checked that login calls don't happen too frequently (too -quickly after one another.) - -In case of throttling no cookie gets issued. 
The error response (`HTTP status -code 429 `_) has -a ``Retry-After`` header which specifies the time to wait before accepting the -next request in Seconds. - -Being throttled is a clear indicator of incorrect API usage. There is no need to -login many times in a row on the same device. Instead, the cookie should be -re-used. - -The corresponding backend configuration settings are described in: -:ref:`auth-cookie-config` . - -.. _login-password: - -Password Login -~~~~~~~~~~~~~~ - -To perform a password login, send a ``POST`` request to the ``/login`` -endpoint, providing either a verified email address or phone number and -the corresponding password. For example: - -:: - - POST /login HTTP/1.1 - [headers omitted] - - { - "email": "me@wire.com", - "password": "Quo2Booz" - } - -If a phone number is used, the ``phone`` field is used instead of -``email``. If a @handle is used, the ``handle`` field is used instead of -``email`` (note that the handle value should be sent *without* the ``@`` -symbol). Assuming the credentials are correct, the API will respond with -a ``200 OK`` and an access token and cookie: - -:: - - HTTP/1.1 200 OK - zuid=...; Expires=Fri, 02-Aug-2024 09:15:54 GMT; Domain=zinfra.io; Path=/access; HttpOnly; Secure - [other headers omitted] - - { - "expires_in": 900, - "access_token": "fmmLpDSjArpksFv57r5rDrzZZlj...", - "token_type": "Bearer" - } - -.. - - The ``Domain`` of the cookie will be different depending on the - environment. - -The value of ``expires_in`` is the number of seconds that the -``access_token`` is valid from the moment it was issued. - -As of yet, the ``token_type`` is always ``Bearer``. - - - -.. _login-sms: - -SMS Login -~~~~~~~~~ - -To perform an SMS login, first request an SMS code to be sent to a -verified phone number: - -:: - - POST /login/send HTTP/1.1 - [headers omitted] - - { - "phone": "+1234567890" - } - -An SMS with a short-lived login code will be sent. 
Upon receiving the -SMS and extracting the code from it, the login can be performed using -the ``phone`` and ``code`` as follows: - -:: - - POST /login HTTP/1.1 - [headers omitted] - - { - "phone": "+1234567890", - "code": "123456" - } - -A successful response is identical to that of a `password -login <#login-password>`__. - - - -.. _login-persistent: - -Persistent Logins -~~~~~~~~~~~~~~~~~ - -By default, access cookies are issued as `session -cookies `__ -with a validity of 1 week. Furthermore, these session cookies are not -refreshed as part of an `access token refresh <#token-refresh>`__. To -request a ``persistent`` access cookie which does get refreshed, specify -the ``persist=true`` parameter during a login: - -:: - - POST /login?persist=true HTTP/1.1 - [headers omitted] - - { - "phone": "+1234567890", - "code": "123456" - } - -All access cookies returned on registration are persistent. - - - -.. _token-refresh: - -FAQ: is my cookie a persistent cookie or a session cookie? -~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ - -When you log in **without** the ``persist=true`` query parameter, or -with persist=false, you get a ``session cookie``, which means it has no -expiration date set, and will expire when you close the browser (and on -the backend has a validity of max 1 day or 1 week (configurable, see -current config in `hegemony `__). -Example **session cookie**: - -:: - - POST /login?persist=false - - Set-Cookie: zuid=(redacted); Path=/access; Domain=zinfra.io; HttpOnly; Secure - -When you log in **with** ``persist=true``, you get a persistent cookie, -which means it has *some* expiration date. In production this is -currently 56 days (again, configurable, check current config in -`hegemony `__) and can be renewed -during token refresh. 
Example **persistent cookie**: - -:: - - POST /login?persist=true - - Set-Cookie: zuid=(redacted); Path=/access; Expires=Thu, 10-Jan-2019 10:43:28 GMT; Domain=zinfra.io; HttpOnly; Secure - -Token Refresh - ``POST /access`` --------------------------------- - -Since access tokens have a relatively short lifetime to limit the time -window of abuse for a captured token, they need to be regularly -refreshed. In order to refresh an access token, send a ``POST`` reques -to ``/access``, including the access cookie in the ``Cookie`` header and -the old (possibly expired) access token in the ``Authorization`` header: - -:: - - POST /access HTTP/1.1 - Authorization: Bearer fmmLpDSjArpksFv57r5rDrzZZlj... - Cookie: zuid=... - [other headers omitted] - - - -Providing the old access token is not required but strongly recommended -as it will link the new access token to the old, enabling the API to see -the new access token as a continued session of the same client. - -As part of an access token refresh, the response may also contain a new -``zuid`` access cookie in form of a ``Set-Cookie`` header. A client must -expect a new ``zuid`` cookie as part of any access token refresh and -replace the existing cookie appropriately. - - - -.. _cookies: - -Cookie Management ------------------ - - - -.. _cookies-logout: - -Logout - ``POST /access/logout`` -~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ - -An explicit logout effectively deletes the cookie used to perform the -operation: - -:: - - POST /access/logout HTTP/1.1 - Authorization: Bearer fmmLpDSjArpksFv57r5rDrzZZlj... - Cookie: zuid=... - [other headers omitted] - - - -Afterwards, the cookie that was sent as part of the ``Cookie`` header is -no longer valid. - -If a client offers an explicit logout, this operation must be performed. -An explicit logout is especially important for Web clients. - - - -.. 
_cookies-labels: - -Labels -~~~~~~ - -Cookies can be labeled by specifying a ``label`` during login or -registration, e.g.: - -:: - - POST /login?persist=true HTTP/1.1 - [headers omitted] - - { - "phone": "+1234567890", - "code": "123456", - "label": "Google Nexus 5" - } - -Specifying a label is recommended as it helps to identify the cookies in -a user-friendly way and allows `selective -revocation <#cookies-revoke>`__ based on the labels. - - - -.. _cookies-list: - -Listing Cookies - ``GET /cookies`` -~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ - -To list the cookies currently associated with an account, send a ``GET`` -request to ``/cookies``. The response will contain a list of cookies, -e.g.: - -:: - - HTTP/1.1 200 OK - [other headers omitted] - - { - "cookies": [ - { - "time": "2015-06-04T14:29:23.000Z", - "id": 967153183, - "type": "session", - "label": null - }, - { - "time": "2015-06-04T14:44:23.000Z", - "id": 942451749, - "type": "session", - "label": null - }, - ... - ] - } - -Note that expired cookies are not automatically removed when they -expire, only as new cookies are issued. - - - -.. _cookies-revoke: - -Revoking Cookies - ``POST /cookies/remove`` -~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ - -Cookies can be removed individually or in bulk either by specifying the -full cookie structure as it is returned by `GET -/cookies <#cookies-list>`__ or only by their labels in a ``POST`` -request to ``/cookies/remove``, alongside with the user's credentials: - -:: - - POST /cookies/remove HTTP/1.1 - [headers omitted] - - { - "ids": [{}, {}, ...], - "labels": ["", "", ...] - "email": "me@wire.com", - "password": "secret" - } - -Cookie removal currently requires an account with an email address and -password. - - - -.. _password-reset: - -Password Reset - ``POST /password-reset`` ------------------------------------------ - -A password reset can be used to set a new password if the existing -password associated with an account has been forgotten. 
This is not to -be confused with the act of merely `changing your -password `__ for the purpose of password -rotation or if you suspect your current password to be compromised. - -Initiate a Password Reset -~~~~~~~~~~~~~~~~~~~~~~~~~ - -To initiate a password reset, send a ``POST`` request to -``/password-reset``, specifying either a verified email address or phone -number for the account in question: - -:: - - POST /password-reset HTTP/1.1 - [headers omitted] - - { - "phone": "+1234567890" - } - -For a phone number, the ``phone`` field would be used instead. As a -result of a successful request, either a password reset key and code is -sent via email or a password reset code is sent via SMS, depending on -whether an email address or a phone number was provided. Password reset -emails will contain a link to the `wire.com `__ -website which will guide the user through the completion of the password -reset, which means that the website will perform the necessary requests -to complete the password reset. To complete a password reset initiated -with a phone number, the completion of the password reset has to happen -from the mobile client application itself. - -Once a password reset has been initiated for an email address or phone -number, no further password reset can be initiated for the same email -address or phone number before the prior reset is completed or times -out. The current timeout for an initiated password reset is -``10 minutes``. 
- -Complete a Password Reset -~~~~~~~~~~~~~~~~~~~~~~~~~ - -To complete a password reset, the password reset code, together with the -new password and the ``email`` or ``phone`` used when initiating the -reset (or the opaque ``key`` sent by mail) are sent to -``/password-reset/complete`` in a ``POST`` request: - -:: - - POST /password-reset/complete HTTP/1.1 - [headers omitted] - - { - "phone": "+1234567890", - "code": "123456", - "password": "new-secret-password" - } - -There is a maximum of ``3`` attempts at completing a password reset, -after which the password reset code becomes invalid and a new password -reset must be initiated. - -A completed password reset results in all access cookies to be revoked, -requiring the user to `login <#login>`__. - -Related topics: SSO, Legalhold -------------------------------- - -.. _sso: - -Single Sign-On -~~~~~~~~~~~~~~~~~~ - -Users that are part of a team, for which a team admin has configured SSO (Single Sign On), authentication can happen through SAML. - -More information: - -* :ref:`FAQ ` -* `setup howtos for various IdP vendors `__ -* `a few fragments that may help admins `__ - - -LegalHold -~~~~~~~~~~ - -Users that are part of a team, for which a team admin has configured "LegalHold", can add a so-called "LegalHold" device. The endpoints in use to authenticate for a "LegalHold" Device are the same as for regular users, but the access tokens they get can only use a restricted set of API endpoints. See also `legalhold documentation on wire-server `__ diff --git a/docs/src/understand/api-client-perspective/index.md b/docs/src/understand/api-client-perspective/index.md new file mode 100644 index 0000000000..8bf19e4290 --- /dev/null +++ b/docs/src/understand/api-client-perspective/index.md @@ -0,0 +1,15 @@ +# Wire-server API documentation + +The following documentation provides information for, and takes the perspective of a Wire client developer. (wire-desktop, wire-android and wire-ios are examples of Wire Clients). 
This means only publicly accessible endpoints are mentioned. + +```{warning} +This section of the documentation is very incomplete at the time of writing (summer 2020) - more pages on the client API will follow in the future. +``` + +```{toctree} +:glob: true +:maxdepth: 2 +:titlesonly: true + +* +``` diff --git a/docs/src/understand/api-client-perspective/index.rst b/docs/src/understand/api-client-perspective/index.rst deleted file mode 100644 index d419508892..0000000000 --- a/docs/src/understand/api-client-perspective/index.rst +++ /dev/null @@ -1,14 +0,0 @@ -Wire-server API documentation -============================= - -The following documentation provides information for, and takes the perspective of a Wire client developer. (wire-desktop, wire-android and wire-ios are examples of Wire Clients). This means only publicly accessible endpoints are mentioned. - -.. warning:: - This section of the documentation is very incomplete at the time of writing (summer 2020) - more pages on the client API will follow in the future. - -.. toctree:: - :maxdepth: 2 - :glob: - :titlesonly: - - * diff --git a/docs/src/understand/api-client-perspective/swagger.rst b/docs/src/understand/api-client-perspective/swagger.md similarity index 67% rename from docs/src/understand/api-client-perspective/swagger.rst rename to docs/src/understand/api-client-perspective/swagger.md index 5dd3d29e36..057d5beb2a 100644 --- a/docs/src/understand/api-client-perspective/swagger.rst +++ b/docs/src/understand/api-client-perspective/swagger.md @@ -1,5 +1,4 @@ -Swagger API documentation (all public endpoints) -================================================ +# Swagger API documentation (all public endpoints) Our staging system provides swagger documentation of our public rest API. @@ -11,21 +10,19 @@ documentation still has some endpoints, but the new one is getting more and more Please check the new docs first, and if you can't find what you're looking for, double-check the old. 
-New docs
---------
+## New docs
 
 These docs show swagger 2.0:
 
-`new staging swagger page <https://staging-nginz-https.zinfra.io/api/swagger-ui/>`_
+[new staging swagger page](https://staging-nginz-https.zinfra.io/api/swagger-ui/)
 
-
-Old docs
---------
+## Old docs
 
 Some endpoints are only shown using swagger 1.2. At the time of writing, both swagger version 1.2 and version 2.0 are in use.
 
 If you are an employee of Wire, you can log in here and try out requests in the browser; if not, you can make use of the "List Operations" button on both 1.2 and 2.0 pages to see the possible API requests.
 
-Browse to our `old staging swagger page <https://staging-nginz-https.zinfra.io/swagger-ui/>`_ to see rendered swagger documentation for the remaining endpoints.
+Browse to our [old staging swagger page](https://staging-nginz-https.zinfra.io/swagger-ui/) to see rendered swagger documentation for the remaining endpoints.
 
-.. image:: img/swagger.png
+```{image} img/swagger.png
+```
diff --git a/docs/src/understand/federation/api.md b/docs/src/understand/federation/api.md
new file mode 100644
index 0000000000..e48e642294
--- /dev/null
+++ b/docs/src/understand/federation/api.md
@@ -0,0 +1,338 @@
+(federation-api)=
+
+# Federation API
+
+(qualified-identifiers-and-names)=
+## Qualified Identifiers and Names
+
+The federated architecture is reflected in the structure of the various
+identifiers and names used in the API. Identifiers, such as user ids, are unique
+within the context of a backend. They are made unique within the context of all
+federating backends by combining them with the {ref}`backend domain
+`.
+
+For example, a user with user id `d389b370-5f7d-4efd-9f9a-8d525540ad93` on
+backend `b.example.com` has the *qualified user id*
+`d389b370-5f7d-4efd-9f9a-8d525540ad93@b.example.com`. In API request bodies
+qualified identities are encoded as objects, e.g.
+
+```
+{
+  "user": {
+    "id": "d389b370-5f7d-4efd-9f9a-8d525540ad93",
+    "domain": "b.example.com"
+  }
+  ...
+}
+
+```
+In API path segments qualified identities are encoded with the domain first, e.g.
+```
+POST /connections/b.example.com/d389b370-5f7d-4efd-9f9a-8d525540ad93
+```
+to send a connection request to a user.
+
+Any identifier on a backend can be qualified:
+
+- conversation ids
+- team ids
+- client ids
+- user ids
+- user handles, e.g. local handle `@alice` is displayed as `@alice@b.example.com` in federating users' devices
+
+User profile names (e.g. "Alice") are not unique on the user\'s backend,
+can be changed by the user at any time, and are not qualified.
+
+(api-between-federators)=
+
+## Federated requests
+
+Every federated API request is made by a service component (e.g. brig, galley,
+cargohold) in one backend and responded to by a service component in the other
+backend. The *Federators* of the backends relay the request between the
+components across backends. The components talk to each other via the
+*Federator* in the originating domain and *Federator Ingress* in the receiving
+domain (for details see {ref}`backend-to-backend-communication`).
+
+
+```{figure} ./img/federation-apis-flow.png
+---
+width: 100%
+---
+Federators relaying a request between components. See {ref}`federation-back2back-example` for the discovery, authentication and authorization steps that are omitted from this figure.
+```
+
+(api-from-components-to-federator)=
+
+### API From Components to Federator
+
+
+When making the call to the *Federator*, the components use HTTP2. They call the
+Federator's `Outward` service, which accepts `POST` requests with path
+`/rpc/:domain/:component/:rpc`. Such a request will be forwarded to the remote
+Federator with the given {ref}`backend domain`, and converted
+to the appropriate request of its `Inward` service.
+
+### API between Federators
+
+The layer between *Federators* acts as an envelope for communication
+between the other components of wire-server. The *Inward* service of
+*Federator* is an HTTP2 server which is responsible for accepting
+external requests coming from other backends, and forwarding them to the
+appropriate component (currently Brig or Galley).
+
+*Federator* inspects the headers of an incoming request, performs
+discovery and authentication, as described in
+{ref}`Backend to backend communication
+<backend-to-backend-communication>`, then
+forwards the request as-is by repackaging its body into an HTTP request
+for the target component.
+
+The *Inward* service accepts only `POST` requests with a path of the
+form `/federation/:component/:rpc`, where `:component` is the lowercase
+name of the target component (i.e. `brig` or `galley`), and `:rpc` is
+the name of the federation RPC to invoke. The arguments of the RPC are
+contained in the body, which is assumed to be of content type
+`application/json`.
+
+See {ref}`api-from-federator-to-components` for more details on RPCs and their paths.
+
+(api-from-federator-to-components)=
+
+### API From Federator to Components
+
+The components expose a REST API over HTTP to be consumed by the
+*Federator*. All the paths start with `/federation`. When a *Federator*
+receives a `POST` request to `/federation/brig/get-user-by-handle`, it
+connects to a local Brig and forwards the request to it after changing
+its path to `/federation/get-user-by-handle`.
+
+The `/federation` prefix is kept in the path to allow the component to
+distinguish federated requests from requests by clients or other local
+components.
+
+If this request succeeds, the response is directly used as a response
+for the original call to the `Inward` service. Otherwise, a response
+with a `5xx` status code is returned, with a body containing a
+description of the error that has occurred.
+
+Note that the name of the RPC (`get-user-by-handle` in the above
+example) is required to be a single path segment consisting of only
+ASCII characters within a restricted set. This prevents path-traversal
+attacks such as attempting to access `/federation/../users/by-handle`.
+
+(api-endpoints)=
+
+## List of Federation APIs exposed by Components
+
+Each component of the backend provides an API towards the *Federator*
+for access by other backends.
+
+```{note}
+This reflects the status of the API endpoints as of 2023-01-10. For the
+latest APIs, please refer to the corresponding source code linked in
+each section.
+```
+
+(brig)=
+
+### Brig
+
+In its current state, the primary purpose of the Brig API is to allow
+users of remote backends to create conversations with the local users of
+the backend.
+
+- `get-user-by-handle`: Given a handle, return the user profile
+  corresponding to that handle.
+- `get-users-by-ids`: Given a list of user ids, return the list of
+  corresponding user profiles.
+- `claim-prekey`: Given a user id and a client id, return a Proteus
+  pre-key belonging to that user.
+- `claim-prekey-bundle`: Given a user id, return a prekey for each of
+  the user\'s clients.
+- `claim-multi-prekey-bundle`: Given a list of user ids, return
+  prekeys of their respective clients.
+- `search-users`: Given a term, search the user database for matches
+  w.r.t. that term.
+- `get-user-clients`: Given a list of user ids, return the list of
+  clients of each of the users, with public information
+- `send-connection-action`: Make and also respond to user connection requests
+- `on-user-deleted-connections`: Notify users that are connected to a remote user about that user's deletion
+- `get-mls-clients`: Request all [MLS](../../how-to/install/mls)-capable clients for a given user
+- `claim-key-packages`: Claim a previously-uploaded KeyPackage of a remote user. Used for adding users to MLS conversations.
+
+See [the brig source
+code](https://github.com/wireapp/wire-server/blob/master/libs/wire-api-federation/src/Wire/API/Federation/API/Brig.hs)
+for the current list of federated endpoints of *Brig*, as well as
+their precise inputs and outputs.
+
+(galley)=
+
+### Galley
+
+Each backend keeps a record of the conversations that each of its
+members is a part of. The purpose of the Galley API is to allow backends
+to synchronize the state of the conversations of their members.
+
+- `get-conversations`: Given a qualified user id and a list of
+  conversation ids, return the details of the conversations. This
+  allows a remote backend to query conversation metadata of their
+  local user from this backend. To avoid metadata leaks, the backend
+  will check that the domain of the given user corresponds to the
+  domain of the backend sending the request.
+- `get-sub-conversation`: Get an MLS subconversation
+- `leave-conversation`: Given a remote user and a conversation id,
+  remove the remote user from the (local) conversation.
+- `mls-welcome`: Send an MLS welcome message to a new user owned by the called backend
+- `on-client-removed`: Inform the called backend that a client of a user has been deleted
+- `on-conversation-created`: Given a name and a list of conversation
+  members, create a conversation locally. This is used to inform
+  another backend of a new conversation that involves their local
+  user(s).
+- `on-conversation-updated`: Given a qualified user id and a qualified
+  conversation id, update the conversation details locally with the
+  other data provided. This is used to alert a remote backend of updates
+  to the conversation metadata of conversations in which at least one
+  of their local users is involved.
+- `on-message-sent`: Given a remote message and a conversation id,
+  propagate a message to local users. This is used whenever there is a
+  remote user in a conversation (see end-to-end flows).
+- `on-mls-message-sent`: Receive an MLS message that originates in the calling backend
+- `on-new-remote-conversation`: Inform the called backend about a conversation that exists on the calling backend. This request is made before the first time the backend might learn about this conversation, e.g. when its first user is added to the conversation.
+- `on-typing-indicator-updated`: Used by the calling backend (that owns a conversation) to inform the called backend about a change of the typing indicator status of a remote user
+- `on-user-deleted-conversations`: When a user on the calling backend is deleted, this request is made for all conversations on the called backend that the user was part of
+- `query-group-info`: Query the MLS public group state
+- `send-message`: Given a sender and a raw message request, send a
+  message to a conversation owned by another backend. This is used
+  when the user sending a message is not on the same backend as the
+  conversation the message is sent in.
+- `send-mls-commit-bundle`: Send an MLS commit bundle to the backend that owns the conversation
+- `send-mls-message`: Send an MLS message to the backend that owns the conversation
+- `update-conversation`: The calling backend requests a conversation action on the called backend, which owns the conversation
+
+See [the galley source
+code](https://github.com/wireapp/wire-server/blob/master/libs/wire-api-federation/src/Wire/API/Federation/API/Galley.hs)
+for the current list of federated endpoints of *Galley*, as well as
+their precise inputs and outputs.
+
+(cargohold)=
+
+### Cargohold
+- `get-asset`: Check if an asset owned by the called backend is available to the calling backend
+- `stream-asset`: Stream an asset owned by the called backend
+
+See [the cargohold source
+code](https://github.com/wireapp/wire-server/blob/master/libs/wire-api-federation/src/Wire/API/Federation/API/Cargohold.hs)
+for the current list of federated endpoints of *Cargohold*, as well as
+their precise inputs and outputs.
+
+(end-to-end-flows)=
+ 

## Example End-to-End Flows

In the following, the interactions between the *Federator* and *Federation
Ingress* components of the backends involved are omitted for simplicity.

We also assume that the backend domain and the infrastructure domain of
each backend involved are the same, and that each domain identifies
a distinct backend.

(user-discovery)=

### User Discovery

In this flow, the user *Alice* at *a.example.com* tries to search for user
*Bob* at *b.example.com*.

1. User *Alice* enters the qualified user name of the target
   user *Bob*: `@bob@b.example.com` into the search field of their Wire client.
2. The client issues a query to `/search/contacts` of the Brig,
   searching for *Bob* at *b.example.com*.
3. The Brig in *Alice*'s backend asks its local *Federator* to query the
   `search-users` endpoint in *Bob*'s backend.
4. *Alice*'s *Federator* queries *Bob*'s Brig via *Bob*'s *Federation
   Ingress* and *Federator* as requested.
5. *Bob*'s Brig replies with *Bob*'s user name and qualified handle; the
   response goes through *Bob*'s *Federator* and *Federation Ingress*,
   as well as *Alice*'s *Federator*, before it reaches *Alice*'s Brig.
6. *Alice*'s Brig forwards that information to *Alice*'s client.

(conversation-establishment)=

### Conversation Establishment

After having discovered user *Bob* at *b.example.com*, user *Alice* at
*a.example.com* wants to establish a conversation with *Bob*.

1. From the search results of a
   {ref}`user discovery <user-discovery>`
   process, *Alice* chooses to create a conversation with *Bob*.
2. *Alice*'s client issues a `/users/b.example.com/<user-id>/prekeys` query to
   *Alice*'s Brig, where `<user-id>` is *Bob*'s user id.
3. *Alice*'s Brig asks its *Federator* to query the `claim-prekey-bundle`
   endpoint of *Bob*'s backend using *Bob*'s user id.
4. 
*Bob*'s *Federation Ingress* forwards the query to the *Federator*,
+   which in turn forwards it to the local Brig.
+5. *Bob*'s Brig replies with a prekey bundle for each of *Bob*'s clients,
+   which is forwarded to *Alice*'s Brig via *Bob*'s *Federator* and
+   *Federation Ingress*, as well as *Alice*'s *Federator*.
+6. *Alice*'s Brig forwards that information to *Alice*'s client.
+7. *Alice*'s client queries the `/conversations` endpoint of its Galley
+   using *Bob*'s user id.
+8. *Alice*'s Galley creates the conversation locally and queries the
+   `on-conversation-created` endpoint of *Bob*'s Galley (again via its
+   local *Federator*, as well as *Bob*'s *Federation Ingress* and
+   *Federator*) to inform it about the new conversation, including the
+   conversation metadata in the request.
+9. *Bob*'s Galley registers the conversation locally and confirms the
+   query.
+10. *Bob*'s Galley notifies *Bob*'s client of the creation of the
+    conversation.
+
+(message-sending-a)=
+
+### Message Sending
+
+Having established a conversation with user *Bob* at *b.example.com*, user
+*Alice* at *a.example.com* wants to send a message to user *Bob*.
+
+1. In a conversation *conv-1@a.example.com* on *Alice*'s backend with
+   users *Alice* and *Bob*, *Alice* sends a message
+   by using the `/conversations/a.example.com/conv-1/proteus/messages`
+   endpoint on *Alice*'s Galley.
+2. *Alice*'s Galley checks whether *Alice* included all necessary user devices in
+   their request. For that it makes a `get-user-clients` request to
+   *Bob*'s Galley. *Alice*'s Galley checks that the returned list of
+   clients matches the list of clients the message was encrypted for.
+3. *Alice*'s Galley sends the message to all clients in the conversation
+   that are part of *Alice*'s backend.
+4. *Alice*'s Galley queries the `on-message-sent` endpoint on *Bob*'s
+   Galley via its *Federator* and *Bob*'s *Federation Ingress* and
+   *Federator*.
+5. 
*Bob*'s Galley will propagate the message to all local clients
+   involved in the conversation.
+
+## Ownership
+
+Wire uses the concept of **ownership** as a guiding principle in the design of
+Federation. Every resource (e.g. a user, conversation or asset) is **owned** by the
+backend on which it was *created*.
+
+A backend that owns a resource is the source of truth for it. For example, for
+users this means that information about user *Alice*, which is owned by backend
+*A*, is stored only on backend *A*. If any federating backend needs information
+about the user *Alice*, e.g. the profile information, it needs to request that
+information from *A*.
+
+In some cases backends locally store partial information about resources they don't
+own. For example, a backend stores a reference to any remotely-owned conversation
+any of its users is participating in. However, to get the full list of all
+participants of a remote conversation, the owning backend needs to be queried.
+
+Ownership is reflected in the naming convention of federation RPCs. Any RPC
+whose name carries the prefix `on-` is always invoked by the backend that owns
+the resource to inform federating backends. For example, if a user leaves a
+remote conversation, its backend calls the `leave-conversation` RPC on the
+backend that owns the conversation. That backend removes the user and informs
+all other federating backends that participate in the conversation of this
+change by calling their `on-conversation-updated` RPC.
diff --git a/docs/src/understand/federation/api.rst b/docs/src/understand/federation/api.rst
deleted file mode 100644
index 55e8d2aa77..0000000000
--- a/docs/src/understand/federation/api.rst
+++ /dev/null
@@ -1,283 +0,0 @@
-.. _federation-api:
-
-API
-====
-
-The Federation API consists of two *layers*:
- 1. Between two backends (i.e. between a `Federator` and a `Federation
-   Ingress`)
- 2. Between backend-internal components
-
-.. 
_qualified-identifiers-and-names: - -Qualified Identifiers and Names -------------------------------- - -The federated (and consequently distributed) architecture is reflected in the -structure of the various identifiers and names used in the API. Before -federation, identifiers were only unique in the context of a single backend; for -federation, they are made globally unique by combining them with the federation -domain of their backend. We call these combined identifiers *qualified* -identifiers. While other parts of some identifiers or names may change, the -domain name (i.e. the qualifying part) is static. - -In particular, we use the following identifiers throughout the API: - -* :ref:`Qualified User ID ` (QUID): `user_uuid@backend-domain.com` -* :ref:`Qualified User Name ` (QUN): `user_name@backend-domain.com` -* :ref:`Qualified Client ID ` (QDID) attached to a QUID: `client_uuid.user_uuid@backend-domain.com` -* :ref:`Qualified Conversation `/:ref:`Group ID ` (QCID/QGID): `backend-domain.com/groups/group_uuid` -* :ref:`Qualified Team ID ` (QTID): `backend-domain.com/teams/team_uuid` - -While the canonical representation for purposes of visualization is as displayed -above, the API often decomposes the qualified identifiers into an (unqualified) -id and a domain name. In the code and API documentation, we sometimes call a -username a "handle" and a qualified username a "qualified handle". - -Besides the above names and identifiers, there are also user :ref:`display names -` (sometimes also referred to as "profile names"), which are not -unique on the user's backend, can be changed by the user at any time and are not -qualified. - - -API between Federators ------------------------ - -The layer between `Federator` acts as an envelope for communication between -other components of wire server. 
The `Inward` service of `Federator` is an -HTTP2 server which is responsible for accepting external requests coming from -other backends, and forwarding them to the appropriate component (currently -Brig or Galley). - - -`Federator` inspects the header of an incoming requests, performs discovery and -authentication, as described in :ref:`Backend to backend communication -`, then forwards the request as-is by -repackaging its body into an HTTP request for the target component. - -The `Inward` service accepts only ``POST`` requests with a path of the form -``/federation/:component/:rpc``, where `:component` is the lowercase name of -the target component (i.e. ``brig`` or ``galley``), and ``:rpc`` is the name of -the federation RPC to invoke. The arguments of the RPC are contained the body, -which is assumed to be of content type ``application/json``. - -See :ref:`below ` for more details on RPCs -and their paths. - -API From Components to Federator --------------------------------- - -Between two federated backends, the components talk to each other via the -`Federator` in the originating domain and `Ingress` in the receiving domain. -When making the call to the `Federator`, the components use HTTP2. They call -the ``Outward`` service, which accepts ``POST`` requests with path -``/rpc/:domain/:component/:rpc``. Such a request will be forwarded to a remote -federator with the given :ref:`Backend domain `, and converted -to the appropriate request for its ``Inward`` service. - -.. _api-from-federator-to-components: - -API From Federator to Components --------------------------------- - -The components expose a REST API over HTTP to be consumed by the `Federator`. -All the paths start with ``/federation``. When a `Federator` receives a -``POST`` request to ``/federation/brig/get-user-by-handle``, it connects to a -local Brig and forwards the request to it after changing its path to -``/federation/get-user-by-handle``. 
- -The ``/federation`` prefix is kept in the path to allow the component to -distinguish federated requests from requests by clients or other local -components. - -If this request succeeds, the response is directly used as a response for the -original call to the ``Inward`` service. Otherwise, a response with a ``5xx`` -status code is returned, with a body containing a description of the error that -has occurred. - -Note that the name of the RPC (``get-user-by-handle`` in the above example) is -required to be a single path segment consisting of only ASCII characters within -a restricted set. This prevents path-traversal attacks such as attempting to -access ``/federation/../users/by-handle``. - -.. _api-endpoints: - -List of Federation APIs exposed by Components ---------------------------------------------- - -Each component of the backend provides an API towards the `Federator` for access -by other backends. For example on how these APIs are used, see the section on -:ref:`end-to-end flows`. - -.. note:: This reflects status of API endpoints as of 2022-01-28. For latest - APIs please refer to the corresponding source code linked in the - individual section. - -.. comment: The endpoints and objects are written manually. FUTUREWORK: Automate - this. - -Brig -^^^^ - -In its current state, the primary purpose of the Brig API is to -allow users of remote backends to create conversations with the local users of -the backend. - -* ``get-user-by-handle``: Given a handle, return the user profile - corresponding to that handle. -* ``get-users-by-ids``: Given a list of user ids, return the list of - corresponding user profiles. -* ``claim-prekey``: Given a user id and a client id, return a Proteus pre-key - belonging to that user. -* ``claim-prekey-bundle``: Given a user id, return a prekey for each of the - user's clients. -* ``claim-multi-prekey-bundle``: Given a list of user ids, return prekeys of - their respective clients. 
-* ``search-users``: Given a term, search the user database for matches w.r.t. - that term. -* ``get-user-clients``: Given a list of user ids, return the lists of clients of - each of the users. - -See `the brig source code -`_ -for the current list of federated endpoints of the `Brig`, as well as their -precise inputs and outputs. - -Galley -^^^^^^ - -Each backend keeps a record of the conversations that each of its members is a -part of. The purpose of the Galley API is to allow backends to synchronize the -state of the conversations of their members. - -* ``on-conversation-created``: Given a name and a list of conversation members, - create a conversation locally. This is used to inform another backend of a new - conversation that involves their local user(s). -* ``get-conversations``: Given a qualified user id and a list of conversation - ids, return the details of the conversations. This allows a remote backend to - query conversation metadata of their local user from this backend. To avoid - metadata leaks, the backend will check that the domain of the given user - corresponds to the domain of the backend sending the request. -* ``on-conversation-updated``: Given a qualified user id and a qualified - conversation id, update the conversation details locally with the other data - provided. This is used to alert remote backend of updates in the conversation - metadata of conversations in which at least one of their local users is involved. -* ``leave-conversation``: Given a remote user and a conversation id, remove the - the remote user from the (local) conversation. -* ``on-message-sent``: Given a remote message and a conversation id, propagate a message to local users. - This is used whenever there is a remote user in a conversation (see end-to-end flows). -* ``send-message``: Given a sender and a raw message request, send a message to - a conversation owned by another backend. 
This is used when the user sending a - message is not on the same backend as the conversation the message is sent in. - -See `the galley source code -`_ -for the current list of federated endpoints of the `Galley`, as well as their -precise inputs and outputs. - -.. _end-to-end-flows: - -End-to-End Flows ----------------- - -In the following end-to-end flows, we focus on the interaction between the Brigs -and Galleys of federated backends. While the interactions are facilitated by the -`Federator` and `Federation Ingress` components of the backends involved, which -handle the necessary discovery, authentication and authorization steps, we won't -mention these steps explicitly each time to keep the flows simple. - -Additionally we assume that the backend domain and the infra domain of the -respective backends involved are the same and each domain identifies a distinct -backend. - -.. _user-discovery: - -User Discovery -^^^^^^^^^^^^^^ - -In this flow, the user `A` at `backend-a.com` tries to search for user `B` at -`backend-b.com`. - -#. User `A@backend-a.com` enters the qualified user name of the target user - `B@backend-b.com` into the search field of their Wire client. -#. The client issues a query to ``/search/contacts`` of the Brig searching for - `B` at `backend-b.com`. -#. The Brig in `A`'s backend asks its local `Federator` to query the - ``search-users`` endpoint of B's backend for `B`. -#. `A`'s `Federator` queries `B`'s Brig via `B`'s `Federation Ingress` and - `Federator` as requested. -#. `B`'s Brig replies with `B`'s user name and qualified handle, the - response goes through `B`'s `Federator` and `Federation Ingress`, as well as - `A`'s `Federator` before it reaches `A`'s Brig. -#. `A`'s Brig forwards that information to `A`'s client. - -Conversation Establishment -^^^^^^^^^^^^^^^^^^^^^^^^^^ - -After having discovered user `B` at `backend-b.com`, user `A` at `backend-a.com` -wants to establish a conversation with `B`. - -#. 
From the search results of a :ref:`user discovery` process, - `A` chooses to create a conversation with `B`. -#. `A`'s client issues a ``/users/backend-b.com/B/prekeys`` query to `A`'s - Brig. -#. `A`'s Brig asks its `Federator` to query the ``claim-prekey-bundle`` endpoint - of `B`'s backend using `B`'s user id. -#. `B`'s `Federation Ingress` forwards the query to the `Federator`, who in turn forwards it to - the local Brig. -#. `B`'s Brig replies with a prekey bundle for each of `B`'s clients, which is - forwarded to `A`'s Brig via `B`'s `Federator` and `Federation Ingress`, as well as `A`'s - `Federator`. -#. `A`'s Brig forwards that information to `A`'s client. -#. `A`'s client queries the ``/conversations`` endpoint of its Galley - using `B`'s user id. -#. `A`'s Galley creates the conversation locally and queries the - ``on-conversation-created`` endpoint of `B`'s Galley (again via its local - `Federator`, as well as `B`'s `Federation Ingress` and `Federator`) to inform it about the new - conversation, including the conversation metadata in the request. -#. `B`'s Galley registers the conversation locally and confirms the query. -#. `B`'s Galley notifies `B`'s client of the creation of the conversation. - -Message Sending (A) -^^^^^^^^^^^^^^^^^^^ - -Having established a conversation with user `B` at `backend-b.com`, user `A` at -`backend-a.com` wants to send a message to user `B`. - -#. In a conversation `conv-1@backend-a.com` on `A`'s backend with users - `A@backend-a.com` and `B@backend-b.com`, `A` sends a message by using the - ``/conversations/backend-a.com/conv-1/proteus/messages`` endpoint - on `A`'s Galley. -#. `A`'s Galley checks if `A` included all necessary user devices in their - request. For that it makes a ``get-user-clients`` request to `B`'s Galley. - `A`'s Galley checks that the returned list of clients matches the list of - clients the message was encrypted for. -#. 
`A`'s Galley sends the message to all clients in the conversation that are - part of `A`'s backend. -#. `A`'s Galley queries the ``on-message-sent`` endpoint on `B`'s Galley via its - `Federator` and `B`'s `Federation Ingress` and `Federator`. -#. `B`'s Galley will propagate the message to all local clients involved in the - conversation. - -Message Sending (B) -^^^^^^^^^^^^^^^^^^^ - -Having received a message from user `A` at `backend-a.com`, user `B` at -`backend-b.com` wants send a reply. - -#. In a conversation `conv-1@backend-a.com` on `A`'s backend with users - `A@backend-a.com` and `B@backend-b.com`, `B` sends a message by using the - ``/conversations/backend-a.com/conv-1/proteus/messages`` endpoint - on `B`'s backend. -#. `B`'s Galley queries the ``send-message`` endpoint on `A`'s backend. - *Steps 3-6 below are essentially the same as steps 2-5 in Message Sending (A)* -#. `A`'s Galley checks if `A` included all necessary user devices in their - request. For that it makes a ``get-user-clients`` request to `B`'s Galley. - `A`'s Galley checks that the returned list of clients matches the list of - clients the message was encrypted for. -#. `A`'s Galley sends the message to all clients in the conversation that are - part of `A`'s backend. -#. `A`'s Galley queries the ``on-message-sent`` endpoint on `B`'s Galley via its - `Federator` and `B`'s `Federation Ingress` and `Federator`. -#. `B`'s Galley will propagate the message to all local clients involved in the - conversation. 
diff --git a/docs/src/understand/federation/architecture.md b/docs/src/understand/federation/architecture.md
new file mode 100644
index 0000000000..6bb1e782bc
--- /dev/null
+++ b/docs/src/understand/federation/architecture.md
@@ -0,0 +1,122 @@
+(federation-architecture)=
+# Federation Architecture
+
+(glossary_backend)=
+
+## Backends
+
+In the following, we call a **backend** the set of servers, databases and DNS
+configurations that together form one single Wire Server entity as seen from the
+outside. It can also be called a Wire "instance" or "server" or "Wire
+installation". Every resource (e.g. users, conversations, assets and teams)
+exists on and is *owned* by a single backend, which we can refer to as that
+resource's backend.
+
+The communication between federated backends is facilitated by two components in
+each backend: {ref}`federation_ingress` and {ref}`federator`. The *Federation
+Ingress* is, as the name suggests, the ingress point for incoming connections
+from other backends, which are then forwarded to the *Federator*. The
+*Federator* forwards requests to internal components. It also acts as an *egress*
+point for requests from internal backend components to other, remote backends.
+
+![image](img/federated-backend-architecture.png)
+
+(backend-domains)=
+
+(glossary_infra_domain)=
+(glossary_backend_domain)=
+
+## Backend domains
+
+Each backend has two domains: an {ref}`infrastructure domain <glossary_infra_domain>` and a
+{ref}`backend domain <glossary_backend_domain>`.
+
+The **infrastructure domain** is the domain name under which the backend
+is actually reachable via the network. It is also the domain name that
+each backend uses in authenticating itself to other backends.
+
+Similarly, there is the **backend domain**, which is used to
+{ref}`qualify <qualified-identifiers-and-names>` the
+names and identifiers of users local to an individual backend in the
+context of federation.
+
+The distinction between the two domains allows the owner of a backend
+domain, e.g. 
`example.com`, to host their Wire backend under a
+different infrastructure domain, e.g. `wire.infra.example.com`.
+
+(federation_ingress)=
+
+## Federation Ingress
+
+The *Federation Ingress* is a [Kubernetes
+ingress](https://kubernetes.io/docs/concepts/services-networking/ingress/)
+and uses [nginx](https://nginx.org/en/) as its underlying software.
+
+It is configured with a set of X.509 certificates, which acts as the root of
+trust for the authentication of the infrastructure domain of remote backends, as
+well as with a certificate, which it uses to authenticate itself towards
+other backends.
+
+Its functions are:
+
+- to terminate TLS connections
+- to perform mutual {ref}`authentication` as part of the TLS connection establishment
+- to forward requests to the local {ref}`federator` instance, along with the
+  remote backend's client certificate
+
+(federator)=
+
+## Federator
+
+The *Federator* performs additional authorization checks after receiving
+federated requests from the *Federation Ingress* and acts as the egress
+point for other backend components. It can be configured to use an
+{ref}`allow list <allow-list>` to authorize incoming and
+outgoing connections, and it keeps an X.509 client certificate for the
+backend's infrastructure domain to authenticate itself towards other backends.
+Additionally, it requires a connection to a DNS resolver to
+{ref}`discover <discovery>` other backends.
+
+When receiving a request from an internal component, the *Federator*
+will:
+
+1. If enabled, ensure the target domain is in the allow list,
+2. Discover the other backend,
+3. Establish a {ref}`mutually authenticated channel <authentication>` to the other backend using its client certificate,
+4. Send the request to the other backend, and
+5. Forward the response back to the originating component (and
+   eventually to the originating Wire client).
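The first two outgoing steps can be sketched as follows: the target backend domain is checked against the (optional) allow list and then resolved to an infrastructure domain via the backend's `_wire-server-federator._tcp` DNS SRV record. This is a minimal illustration only; the function names, the allow list and the stand-in resolver are assumptions, and the real *Federator* additionally establishes a mutually authenticated TLS channel before forwarding the request.

```python
# Illustrative sketch of the Federator's outgoing allow-list check and
# SRV-based discovery; names are assumptions, not wire-server code.

ALLOW_LIST = {"b.example.com"}  # an empty set would mean "allow everyone"

def srv_name(backend_domain: str) -> str:
    # Every federating backend announces its infrastructure domain under
    # this SRV record (service "wire-server-federator", protocol "tcp").
    return f"_wire-server-federator._tcp.{backend_domain}"

def discover(backend_domain: str, resolve_srv) -> tuple[str, int]:
    """Map a backend domain to its (infrastructure domain, port)."""
    if ALLOW_LIST and backend_domain not in ALLOW_LIST:
        raise PermissionError(f"{backend_domain} is not in the allow list")
    return resolve_srv(srv_name(backend_domain))

# A stand-in resolver; a real one would issue a DNS SRV query:
fake_resolver = {
    "_wire-server-federator._tcp.b.example.com":
        ("federator.wire.b.example.com", 443),
}
target, port = discover("b.example.com", fake_resolver.__getitem__)
```

Note how the separation of backend domain and infrastructure domain shows up here: the caller only ever knows `b.example.com`, while the SRV lookup yields the host actually contacted.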
+ 
+The *Federator* also implements the authorization logic for incoming
+requests and acts as an intermediary between the *Federation Ingress* and
+the internal components. For incoming requests from remote backends
+(forwarded via the local
+{ref}`Federation Ingress <federation_ingress>`), the *Federator* will:
+
+1. Discover the mapping
+   between the backend domain claimed by the remote backend and its
+   infrastructure domain,
+2. Verify that the discovered infrastructure domain matches the domain in the
+   remote backend's client certificate,
+3. If enabled, ensure that the backend domain of the other backend is
+   in the allow list, and
+4. Forward the request to other wire-server components.
+
+(other-wire-server)=
+
+## Service components
+
+Components such as Brig, Galley and Cargohold are responsible
+for the actual business logic and for interfacing with databases and
+non-federation-related external services. See the [source code
+documentation](https://github.com/wireapp/wire-server). In the context
+of federation, their functions include:
+
+- handling incoming requests from other backends, including
+  {ref}`per-request authorization`
+- sending outgoing requests to other backends, always via a local
+  *Federator* instance.
+
+For more information on the functionality provided to remote backends
+through their *Federator*, see the
+{ref}`federated API documentation`.
diff --git a/docs/src/understand/federation/architecture.rst b/docs/src/understand/federation/architecture.rst
deleted file mode 100644
index c3d252011f..0000000000
--- a/docs/src/understand/federation/architecture.rst
+++ /dev/null
@@ -1,326 +0,0 @@
-Architecture and Network
-=========================
-
-.. _federation-architecture:
-
-Architecture
--------------
-
-To facilitate connections between federated backends, two new components are
-added to each backend: :ref:`Federation Ingress ` and
-:ref:`Federator `. 
The `Federation Ingress` is, as the name suggests -the ingress point for incoming connections from other backends, which are then -forwarded to the `Federator`. The `Federator` then further processes the -requests. In addition, the `Federator` also acts as *egress* point for requests -from internal backend components to other, remote backends. - -.. image:: img/federated-backend-architecture.png - :width: 100% - -.. _backend-domains: - -Backend domains -^^^^^^^^^^^^^^^ - -Each backend has two domain strings: an `infrastructure domain` and a -`backend domain`. - -The `infrastructure domain` is the domain name under which the backend is -actually reachable via the network. It is also the domain name that each -backend uses in authenticating itself to other backends. - -Similarly, there is the `backend domain`, which is used to qualify the names and -identifiers of users local to an individual backend in the context of -federation. For example, a user with (unqualified) user name `jane_doe` at a -backend with backend domain `company-a.com` has the qualified user name -`jane_doe@company-a.com`, which is visible to users of other backends in the -context of federation. - -See :ref:`Qualified Identifiers and Names ` for -more information on qualified names and identifiers. - -The distinction between the two domains allows the owner of a (backend) domain -(e.g. `company-a.com`) to host their Wire backend under a different (infra) -domain (e.g. `wire.infra.company-a.com`). - - -Backend components -^^^^^^^^^^^^^^^^^^ - -In addition to the regular components of a Wire backend, two additional -components are added to enable federation with other backends: The `Federation -Ingress` and the `Federator`. Other Wire components use these two components to -contact other backends and respond to queries originating from remote backends. - -The following subsections briefly introduce the individual components, their -state and their functionality. 
The semantics of backend-to-backend communication -will be explained in more detail in the Section on :ref:`Federation API -`. - -.. _federation_ingress: - -Federation Ingress -~~~~~~~~~~~~~~~~~~ - -The `Federation Ingress` is a `kubernetes ingress -`_ and uses -`nginx `_ as its underlying software. - -It is configured with a set of X.509 certificates, which acts as root of trust -for the authentication of the infra domain of remote backends, as well as with a -certificate, which it uses to authenticate itself toward other backends. - -Its functions are: - -* terminate TLS connections - - - perform mutual :ref:`authentication` as part of the TLS connection - establishment - -* forward requests to the local :ref:`Federator ` instance, along - with the remote backend's client certificate - - -.. _federator: - -Federator -~~~~~~~~~ - -The `Federator` performs additional authorization checks after receiving -federated requests from the `Federation Ingress` and acts as egress point for -other backend components. It can be configured to use an :ref:`allow list -` to authorize incoming and outgoing connections, and it keeps an -X.509 client certificate for the backend's infra domain to authenticate itself -towards other backends. Additionally, it requires a connection to a DNS resolver -to :ref:`discover` other backends. - -When receiving a request from an internal component, the `Federator` will: - -#. If enabled, ensure the target domain is in the :ref:`allow list ` -#. :ref:`discover ` the other backend, -#. establish a :ref:`mutually authenticated channel ` to the - other backend using its client certificate, -#. send the request to the other backend and -#. forward the response back to the originating component (and eventually to the - originating Wire client). - -The `Federator` also implements the authorization logic for incoming requests and -acts as intermediary between the `Federation Ingress` and the internal -components. 
The `Federator` will, for incoming requests from remote backends -(forwarded via the local :ref:`Federation Ingress `): - -#. :ref:`Discover ` the mapping between backend domain claimed by the - remote backend and its infra domain, -#. verify that the discovered infra domain matches the domain in the remote - backend's client certificate, -#. if enabled, ensure that the backend domain of the other backend is in the - :ref:`allow list `, -#. forward requests to other wire-server components. - -.. _other-wire-server: - -Other wire-server components -~~~~~~~~~~~~~~~~~~~~~~~~~~~~ - -Components such as 'brig', 'galley', or 'gundeck' are responsible for actual -business logic and interfacing with databases and non-federation related -external services. See `source code documentation -`_. In the context of federation, their -functions include: - -* For incoming requests from other backends: :ref:`per-request authorization` -* Outgoing requests to other backends are always sent via a local :ref:`Federator` instance. - -For more information of the functionalities provided to remote backends through -their `Federator`, see the :ref:`federated API documentation`. - -.. _backend-to-backend-communication: - -Backend to backend communication --------------------------------------------- - -We require communication between the `Federator` of one (sending) backend and -the ingress of another (receiving) backend to be both mutually authenticated and -authorized. More specifically, both backends need to ensure the following: - -:Authentication: Determine the identity (infra domain name) of the other - backend. -:Discovery: Ensure that the other backend is authorized to represent the backend - domain claimed by the other backend. -:Authorization: Ensure that this backend is authorized to federate with the - other backend. - - -.. _authentication: - -Authentication -^^^^^^^^^^^^^^ - -.. 
warning:: As of January 2022, the implementation of mutual backend-to-backend - authentication is still subject to change. The behaviour described - in this section should be considered a draft specification only. - -Authentication between Wire backends is achieved using the mutual authentication -feature of TLS as defined in `RFC 8556 `_. - -In particular, this means that the ingress of each backend needs to be -provisioned with one or more certificates which the ingress trusts to -authenticate certificates provided by other backends when accepting incoming -connections. - -Conversely, every `Federator` needs to be provisioned with a (client) -certificate which it uses to authenticate itself towards other backends. - -Note that the client certificate is expected to be issued with the backend's -infra domain as one of the subject alternative names (SAN), which is defined in -`RFC 5280 `_. - -If a receiving backend fails to authenticate the client certificate, it should -reply with an :ref:`authentication error `. - - -.. _discovery: - -Discovery -^^^^^^^^^ - -The discovery process allows a backend to determine the infra domain of a given -backend domain. - -This step is necessary in two scenarios: - -* A backend would like to establish a connection to another backend that it only - knows the backend domain of. This is the case, for example, when a user of a - local backend searches for a :ref:`qualified username `, - which only includes that user's backend's backend domain. -* When receiving a message from another backend that authenticates with a given - infra domain and claims to represent a given backend domain, a backend would - like to ensure the backend domain owner authorized the owner of the infra - domain to run their Wire backend. 
- -To make discovery possible, any party hosting a Wire backend has to announce the -infra domain via a DNS `SRV` record as defined in `RFC 2782 -`_ with `service = wire-server-federator, -proto = tcp` and with `name` pointing to the backend's domain and `target` to -the backend's infra domain. - -For example, Company A with backend domain `company-a.com` and infra domain -`wire.company-a.com` could publish - -.. code-block:: bash - - _wire-server-federator._tcp.company-a.com. 600 IN SRV 10 5 443 federator.wire.company-a.com. - -A backend can then be discovered, given its domain, by issuing a DNS query for -the SRV record specifying the `wire-server-federator` service. - -DNS Scope -~~~~~~~~~ - -The network scope of the SRV record (as well as that of the DNS records for -backend and infra domain), depends on the desired federation topology in the -same way as other parameters such as the availability of the CA certificate that -allows authentication of the `Federation Ingress`' server certificate or the -`Federator`'s client certificate. The general rule is that the SRV entry should -be "visible" from the point of view of the desired federation partners. The -exact scope strongly depends on the network architecture of the backends -involved. - -SRV TTL and Caching -~~~~~~~~~~~~~~~~~~~ - -After retrieving the SRV record for a given domain, the local backend caches the -`backend domain <--> infra domain` mapping for the duration indicated in the TTL -field of the record. - -Due to this caching behaviour, the TTL value of the SRV record dictates at which -intervals remote backends will refresh their mapping of the local backend's -backend domain to infra domain. As a consequence a value in the order of -magnitude of 24 hours will reduce the amount of overhead for remote backends. 
- -On the other hand in the setup phase of a backend, or when a change of infra -domain is required, a TTL value in the magnitude of a few minutes allows remote -backends to recover more quickly from a change of infra domain. - -.. _authorization: - -Authorization -^^^^^^^^^^^^^ - -After an incoming connection is authenticated, a second step is required to -ensure that the sending backend is authorized to connect to the receiving -backend. As the backend authenticates using its infra domain, but the allow list -contains backend domains (which is not necessarily the same) the sending backend -also needs to provide its backend domain. - -To make this possible, requests to remote backends are required to contain a -`Wire-Origin-Domain` header, which contains the remote backend's domain. - -While the receiving backend has authenticated the sending backend as the infra -domain, it is not clear that the sending backend is indeed authorized by the -owner of the backend domain to host the Wire backend of that particular domain. - -To perform this extra authorization step, the receiving backend follows the -process described in :ref:`discovery` and checks that the discovered infra -domain for the backend domain indicated in the `Wire-Domain` header is one of -the Subject Alternative Names contained in the sending backend's client -certificate. If this is not the case, the receiving backend replies with a -:ref:`discovery error `. - -Finally, the receiving backend checks if the domain of the sending backend is in -the :ref:`allow-list` and replies with an :ref:`authorization error -` if it is not. - -.. _allow-list: - -Domain Allow List -~~~~~~~~~~~~~~~~~ - -Federation can happen between any backends on a network (e.g. the open -internet); or it can be restricted via server configuration to happen between a -specified set of domains on an 'allow list'. 
If an allow list is configured, -then: - -* outgoing requests will only happen if the requested domain is contained in the allow list. -* incoming requests: if the domain of the sending backend is not in the allow - list, any request originating from that domain is replied to with an - :ref:`authorization error ` - - -.. _per-request-authorization: - -Per-request authorization -~~~~~~~~~~~~~~~~~~~~~~~~~ - -In addition to the general authorization step that is performed by the federator -when a new, mutually authenticated TLS connection is established, the component -processing the request performs an additional, per-request authorization step. - -How this step is performed depends on the API endpoint, the contents of the -request and the context in which it is made. - -See the documentation of the individual :ref:`API endpoints ` for -details. - - -Example -^^^^^^^ - -The following is an example for the message and information flow between a -backend with backend domain `a.com` and infra domain `infra.a.com` and another -backend with backend domain `b.com` and infra domain `infra.b.com`. - -The content and format of the message is meant to be representative. For the -definitions of the actual payloads, please see the :ref:`federation -API` section. - -The scenario is that the brig at `infra.a.com` has received a user search -request from `Alice`, one of its clients. - -.. image:: img/federation-flow.png - :width: 100% - - - -.. - paths to images are currently listed at the end of the file. If you prefer to specify them directly in the paragraph they are used, that is also fine. 
diff --git a/docs/src/understand/federation/backend-communication.md b/docs/src/understand/federation/backend-communication.md
new file mode 100644
index 0000000000..a71c6e158b
--- /dev/null
+++ b/docs/src/understand/federation/backend-communication.md
@@ -0,0 +1,155 @@
+(backend-to-backend-communication)=
+
+# Backend to backend communication
+
+We require communication between the {ref}`federator` of one (sending)
+backend and the {ref}`federation_ingress` of another (receiving) backend to be both
+mutually authenticated and authorized. More specifically, both backends
+need to ensure the following:
+
+- **Authentication**
+
+  Determine the identity (infrastructure domain name) of the other backend.
+
+- **Discovery**
+
+  Ensure that the other backend is authorized to represent the backend
+  domain claimed by the other backend.
+
+- **Authorization**
+
+  Ensure that this backend is authorized to federate with the other backend.
+
+(authentication)=
+
+## Authentication
+
+Authentication between Wire backends is achieved using the mutual
+authentication feature of TLS as defined in [RFC
+8446](https://tools.ietf.org/html/rfc8446).
+
+In particular, this means that the ingress of each backend needs to be
+provisioned with one or more trusted root certificates to authenticate
+certificates provided by other backends when accepting incoming connections.
+
+Conversely, every *Federator* needs to be provisioned with a client
+certificate which it uses to authenticate itself towards other backends.
+
+Note that the client certificate is required to be issued with the backend's
+infrastructure domain as one of the subject alternative names (SAN), as defined in
+[RFC 5280](https://tools.ietf.org/html/rfc5280).
+
+See {ref}`federation-certificate-setup` for technical instructions.
+
+If a receiving backend fails to authenticate the client certificate, it fails the request
+with an `AuthenticationFailure` error.
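The SAN requirement above can be illustrated with a short sketch. This is not wire-server code; the certificate dictionary below mimics the shape returned by Python's `ssl.SSLSocket.getpeercert()`, with example values for Company A's Federator client certificate:

```python
# Parsed-certificate shape as returned by ssl.SSLSocket.getpeercert()
# (illustrative values; a real certificate carries many more fields).
peer_cert = {
    "subject": ((("commonName", "federator.wire.company-a.com"),),),
    "subjectAltName": (
        ("DNS", "federator.wire.company-a.com"),
        ("DNS", "wire.company-a.com"),  # the infrastructure domain
    ),
}

def san_dns_names(cert):
    """Extract the DNS-type subject alternative names from a parsed certificate."""
    return [value for kind, value in cert.get("subjectAltName", ()) if kind == "DNS"]

# The infrastructure domain must appear among the certificate's SANs.
assert "wire.company-a.com" in san_dns_names(peer_cert)
```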
+
+(discovery)=
+
+## Discovery
+
+The discovery process allows a backend to determine the infrastructure domain of
+a given backend domain.
+
+This step is necessary in two scenarios:
+
+- A backend would like to establish a connection to another backend
+  that it only knows the backend domain of. This is the case, for
+  example, when a user of a local backend searches for a
+  {ref}`qualified username `, which only includes the backend domain of that user's backend.
+- When receiving a message from another backend that authenticates
+  with a given infrastructure domain and claims to represent a given backend
+  domain, a backend would like to ensure the backend domain owner
+  authorized the owner of the infrastructure domain to run their Wire backend.
+
+To make discovery possible, any party hosting a Wire backend has to
+announce the infrastructure domain via a DNS *SRV* record as defined in [RFC
+2782](https://tools.ietf.org/html/rfc2782) with
+`service = wire-server-federator, proto = tcp` and with `name` pointing
+to the backend's domain and *target* to the backend's infrastructure domain.
+
+For example, Company A with backend domain *company-a.com* and infrastructure domain *wire.company-a.com* could publish
+
+``` bash
+_wire-server-federator._tcp.company-a.com. 600 IN SRV 10 5 443 federator.wire.company-a.com.
+```
+
+A backend can then be discovered, given its domain, by issuing a DNS
+query for the SRV record specifying the *wire-server-federator* service.
+
+If this process fails, the Federator fails the request with a `DiscoveryFailure` error.
+
+(dns-scope)=
+
+
+(srv-ttl-and-caching)=
+
+### SRV TTL and Caching
+
+After retrieving the SRV record for a given domain, the local backend
+caches the *backend domain \<\--\> infrastructure domain* mapping for the
+duration indicated in the TTL field of the record.
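This caching step can be sketched as follows. The record is the Company A example from the Discovery section; the cache structure and helper names are invented for illustration, not wire-server's actual implementation:

```python
import time

# The example SRV record: name, TTL, class, type, priority, weight, port, target.
record = ("_wire-server-federator._tcp.company-a.com. 600 IN SRV "
          "10 5 443 federator.wire.company-a.com.")
name, ttl, _cls, _rtype, _prio, _weight, port, target = record.split()

cache = {}  # backend domain -> (SRV target, expiry timestamp)

def remember(backend_domain, srv_target, ttl_seconds, now=time.time):
    """Cache the discovered mapping for the TTL of the record."""
    cache[backend_domain] = (srv_target, now() + ttl_seconds)

def lookup(backend_domain, now=time.time):
    """Return the cached target, or None if unknown/expired (re-run discovery)."""
    entry = cache.get(backend_domain)
    if entry is None or now() > entry[1]:
        return None
    return entry[0]

remember("company-a.com", target.rstrip("."), int(ttl))
assert lookup("company-a.com") == "federator.wire.company-a.com"
```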
+
+Due to this caching behavior, the TTL value of the SRV record dictates
+at which intervals remote backends will refresh their mapping of the
+local backend's backend domain to its infrastructure domain. As a consequence, a
+TTL on the order of 24 hours will reduce the overhead for remote backends.
+
+On the other hand, in the setup phase of a backend, or when a change of the
+infrastructure domain is required, a TTL of a few minutes allows remote
+backends to recover more quickly from such a change.
+
+(authorization)=
+
+(allow-list)=
+
+## Authorization
+
+After an incoming connection is authenticated, the backend authorizes the
+request. It does so by verifying that the backend domain of the sender is
+contained in the {ref}`domain allow list `.
+
+Since the request is authenticated only by the infrastructure domain, the sending backend
+is required to add its backend domain as a `Wire-Origin-Domain` header to the
+request. The receiving backend follows the process described in {ref}`discovery`
+and verifies that the discovered infrastructure domain for the backend domain indicated
+in the `Wire-Origin-Domain` header is one of the Subject Alternative Names
+contained in the client certificate used to authenticate the request. If this is not the
+case, the receiving backend fails the request with a `ValidationError`.
+
+(per-request-authorization)=
+
+### Per-request authorization
+
+In addition to the general authorization step that is performed by the
+federator when a new, mutually authenticated TLS connection is
+established, the component processing the request performs an
+additional, per-request authorization step.
+
+How this step is performed depends on the API endpoint, the contents of
+the request and the context in which it is made.
+
+See the documentation of the individual {ref}`API endpoints ` for
+details.
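The checks described in this section can be summarized in a sketch. This is illustrative pseudologic only; the function, the allow-list shape, and the `NotAllowed` result name are assumptions, not the actual wire-server (Haskell) implementation:

```python
ALLOW_LIST = {"a.com", "b.com"}  # example allow-list configuration

def authorize_request(origin_domain, discovered_infra_domain, cert_sans,
                      allow_list=ALLOW_LIST):
    """origin_domain: value of the Wire-Origin-Domain header.
    discovered_infra_domain: result of SRV discovery for origin_domain.
    cert_sans: DNS SANs of the authenticated client certificate."""
    if origin_domain not in allow_list:
        return "NotAllowed"        # assumed name for an allow-list rejection
    if discovered_infra_domain not in cert_sans:
        return "ValidationError"   # discovered infra domain not among the SANs
    return "ok"

assert authorize_request("a.com", "infra.a.com", ["infra.a.com"]) == "ok"
assert authorize_request("a.com", "infra.a.com", ["other.example"]) == "ValidationError"
assert authorize_request("evil.example", "infra.evil.example",
                         ["infra.evil.example"]) == "NotAllowed"
```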
+ +(federation-back2back-example)= + +## Example + +The following is an example for the message and information flow between +a backend with backend domain `a.com` and infrastructure domain `infra.a.com` and +another backend with backend domain `b.com` and infrastructure domain +`infra.b.com`. + +The content and format of the message is meant to be representative. For +the definitions of the actual payloads, please see the {ref}`federation +API` section. + +The scenario is that the brig at `infra.a.com` has received a user +search request from *Alice*, one of its clients. + +```{image} img/federation-flow.png +:width: 100% +:align: center +``` diff --git a/docs/src/understand/federation/errors.rst b/docs/src/understand/federation/errors.rst deleted file mode 100644 index 9a44798e58..0000000000 --- a/docs/src/understand/federation/errors.rst +++ /dev/null @@ -1,23 +0,0 @@ -Error Codes -========================= - -This page describes the errors that can occur during federation. - -.. _authentication-errors: - -Authentication Errors ---------------------- - -TODO for now, we only describe the errors here. Later, we should add exact error codes. 
- -TODO we might want to merge one or more of these errors - -* _`authentication error`: occurs when a backend queries another backend and - provides either no client certificate, or a client certificate that the - receiving backend cannot authenticate -* _`authorization error`: occurs when a sending backend authenticates successfully, - but is not on the allow list of the receiving backend -* _`discovery error`: occurs when a sending backend authenticates - successfully, but the `SRV` record published for the claimed domain of the - sending backend doesn't match the SAN of the sending backend's client - certificate diff --git a/docs/src/understand/federation/faq.rst b/docs/src/understand/federation/faq.rst deleted file mode 100644 index 26420e4417..0000000000 --- a/docs/src/understand/federation/faq.rst +++ /dev/null @@ -1,4 +0,0 @@ -.. federation-faq: - -Federation FAQ -=============== diff --git a/docs/src/understand/federation/glossary.rst b/docs/src/understand/federation/glossary.rst deleted file mode 100644 index db099a1ecf..0000000000 --- a/docs/src/understand/federation/glossary.rst +++ /dev/null @@ -1,111 +0,0 @@ -.. _glossary: - -Federation Glossary -===================== - - -.. - note to documentation authors: - until https://github.com/rst2pdf/rst2pdf/issues/898 is fixed we should not use the glossary:: directive and not refer to items with the :term:`text to appear ` syntax. Instead, we can use explicit section labels and refer to them with :ref:`text to appear ` - -.. _glossary_backend: - -Backend - - A set of servers, databases and DNS configurations together forming one single Wire Server entity as seen from outside. This set of servers can be owned and administrated by different legal entities in different countries. - - Sometimes also called a Wire "instance" or "server" or "Wire installation". - Every resource (e.g. 
users, conversations, assets and teams) exists and is owned by one specific backend, which we can refer to as that resource's backend - -.. _glossary_backend_domain: - -Backend Domain - - The domain of a backend, which is used to qualify the names and identifiers of - resources (users, clients, groups, etc) that are local to a given backend. - See also the :ref:`Consequences of choosing a backend domain ` - -.. _glossary_infra_domain: - -Infrastructure Domain or Infra Domain - - The domain under which the :ref:`Federator ` of a given - backend is reachable (via that backend's :ref:`Ingress `) - for other, remote backends. - -.. _glossary_federation_ingress: - -Federation Ingress - - Federation Ingress is the first point of contact of a given :ref:`backend - ` for other, remote backends. It also deals with the - :ref:`authentication` of incoming requests. See :ref:`here ` for - more information. - -.. _glossary_federator: - -Federator - - The `Federator` is the local point of contact for :ref:`other backend - components ` that want to make calls to remote backends. - It is also the component that deals with the :ref:`authorization` of incoming - requests from other backends after they have passed the :ref:`Federation Ingress - `. See :ref:`here ` for more information. - -.. _glossary_asset: - -Asset - - Any file or image sent via Wire (uploaded to and downloaded from a backend). - -.. _glossary_qualified-user-id: - -Qualified User Identifier (QUID) - - A combination of a UUID (unique on the user's backend) and a domain. - -.. _glossary_qualified-user-name: - -Qualified User Name (QUN) - - A combination of a name that is unique on the user's backend and a domain. The - name is a string consisting of 2-256 characters which are either lower case - alphanumeric, dashes, underscores or dots. See `here - `_ - for the code defining the rules for user names. 
Note that in the wire-server - source code, user names are called 'Handle' and qualified user names - 'Qualified Handle'. - -.. _glossary_qualified-client-id: - -Qualified Client Identifier (QDID) - - A combination of a client identifier (a hash of the public key generated for a - user's client) concatenated with a dot and the QUID of the associated user. - -.. _glossary_qualified-group-id: - -Qualified Group Identifier (QGID) - - The string `backend-domain.com/groups/` concatenated with a UUID that is - unique on a given backend. - -.. _glossary_qualified-conversation-id: - -Qualified Conversation Identifier (QCID) - - The same as a :ref:`QGID `. - -.. _glossary_qualified-team-id: - -Qualified Team Identifier (QTID) - - The string `backend-domain.com/teams/` concatenated with a UUID that is - unique on a given backend. - -.. _glossary_display-name: - -(User) Profile/Display Name - - The profile/display name of a user is a UTF-8 encoded string with 1-128 - characters. diff --git a/docs/src/understand/federation/img/federation-apis-flow.png b/docs/src/understand/federation/img/federation-apis-flow.png new file mode 100644 index 0000000000..faf81be988 Binary files /dev/null and b/docs/src/understand/federation/img/federation-apis-flow.png differ diff --git a/docs/src/understand/federation/img/federation-apis-flow.txt b/docs/src/understand/federation/img/federation-apis-flow.txt new file mode 100644 index 0000000000..5771f1cb4b --- /dev/null +++ b/docs/src/understand/federation/img/federation-apis-flow.txt @@ -0,0 +1,32 @@ +title: Federated request from galley to remote brig + +Galley@a.com -> Federator@a.com: request + +note: +- API: From component to Federator +- `/rpc/b.com/brig/get-user-by-handle` + +Federator@a.com -> Federator@b.com: federated request + + +note: +- API: Federation API +- `Wire-Origin-Domain: a.com` +- `/federation/brig/get-user-by-handle` + +//group: TLS-secured backend-internal channel + + +Federator@b.com -> Brig@b.com: request + +note: +- 
API: Federator to component +- `Wire-Origin-Domain: a.com` +- `/federation/get-user-by-handle` + + +Brig@b.com -> Federator@b.com: response + +Federator@b.com -> Federator@a.com: response + +Federator@a.com -> Galley@a.com: response diff --git a/docs/src/understand/federation/img/federation-flow.png b/docs/src/understand/federation/img/federation-flow.png index f6558c63df..25a0014e24 100644 Binary files a/docs/src/understand/federation/img/federation-flow.png and b/docs/src/understand/federation/img/federation-flow.png differ diff --git a/docs/src/understand/federation/img/federation-flow.txt b/docs/src/understand/federation/img/federation-flow.txt index e4723c11cb..c9c4be4bd2 100644 --- a/docs/src/understand/federation/img/federation-flow.txt +++ b/docs/src/understand/federation/img/federation-flow.txt @@ -1,10 +1,19 @@ title: Federator to Ingress/Federator Flow -Brig @infra.a.com -> Federator @infra.a.com: (domain="b.com", component="brig", handle="alice") +Brig @infra.a.com -> Federator @infra.a.com: federated request + +note: +- `/rpc/b.com/brig/get-user-by-handle` +- `{"handle": "alice"}` + + +Federator @infra.a.com -> DNS Resolver: DNS lookup + +note: +`SRV _wire-server-federator._tcp.b.com` -Federator @infra.a.com -> DNS Resolver: DNS query: (service: "wire-server-federator", proto: "tcp", name: "b.com") -DNS Resolver -> Federator @infra.a.com: DNS response: (target: "infra.b.com") +DNS Resolver -> Federator @infra.a.com: DNS response: `infra.b.com` Federator @infra.a.com -> Ingress @infra.b.com: mTLS session establishment @@ -15,36 +24,52 @@ Ingress @infra.b.com -> Federator @infra.a.com: mTLS session establishment respo note: The channel between infra.a.com and infra.b.com is now encrypted and mutually authenticated. 
-Federator @infra.a.com -> Ingress @infra.b.com: (originDomain="a.com", component="brig", path="get-user-by-handle", body="alice")
+Federator @infra.a.com -> Ingress @infra.b.com: request
+
+note:
+- `Wire-Origin-Domain: a.com`
+- `/federation/brig/get-user-by-handle`

 //group: TLS-secured backend-internal channel

-Ingress @infra.b.com -> Federator @infra.b.com: (domain= "a.com", client_cert="", component="brig", path="get-user-by-handle", body="alice")
+Ingress @infra.b.com -> Federator @infra.b.com: request + cert
+
+note:
+- `X-SSL-Certificate: `

 //end

-Federator @infra.b.com -> DNS Resolver: DNS query: (service: "wire-server-federator", proto: "tcp", name: "a.com")
+Federator @infra.b.com -> DNS Resolver: DNS query

-DNS Resolver -> Federator @infra.b.com: DNS response: (target: "infra.a.com")
+note:
+`SRV _wire-server-federator._tcp.a.com`

-//group: TLS-secured backend-internal channel
+DNS Resolver -> Federator @infra.b.com: DNS response: `infra.a.com`

-note: Check that the content of the _target_ field in the DNS response is one of the SANs in the client cert and that the content of the _domain_ field is on the allow list.
+//group: TLS-secured backend-internal channel

-Federator @infra.b.com -> Brig @infra.b.com: (originDomain= "a.com", component="brig", path="federation/get-user-by-handle" handle="alice")
+note:
+Check that
+- `infra.a.com` is listed as one of the SANs in the client cert
+- `a.com` is in the allow list

+Federator @infra.b.com -> Brig @infra.b.com: request

-note: Perform per-request authorization.
+note:
+- `Wire-Origin-Domain: a.com`
+- `/federation/get-user-by-handle`
+- `{"handle": "alice"}`

-Brig @infra.b.com -> Federator @infra.b.com: (UserProfile(Alice))
+note: Brig performs per-request authorization.
-
-Federator @infra.b.com -> Ingress @infra.b.com: (UserProfile(Alice))
+Brig @infra.b.com -> Federator @infra.b.com: response: alice's user profile
+Federator @infra.b.com -> Ingress @infra.b.com: response: alice's user profile

 //end

-Ingress @infra.b.com -> Federator @infra.a.com: (UserProfile(Alice))
+Ingress @infra.b.com -> Federator @infra.a.com: response: alice's user profile

 note: Via the encrypted, mutually authenticated channel.

-Federator @infra.a.com -> Brig @infra.a.com: (UserProfile(Alice))
+Federator @infra.a.com -> Brig @infra.a.com: response: alice's user profile
diff --git a/docs/src/understand/federation/index.md b/docs/src/understand/federation/index.md
new file mode 100644
index 0000000000..a1dc6b6cfa
--- /dev/null
+++ b/docs/src/understand/federation/index.md
@@ -0,0 +1,30 @@
+(federation-understand)=
+
+# Wire Federation
+
+Wire Federation aims to allow multiple Wire-server
+{ref}`backends ` to federate with each other: users on
+different backends are able to interact with each other as if they
+are on the same backend.
+
+Federated backends are able to identify, discover and authenticate
+one another using the domain names under which they are reachable via the
+network. To enable federation, administrators of a Wire backend can decide to
+either specifically list the backends that they want to federate with, or to
+allow federation with all Wire backends reachable from the network. See
+{ref}`configure-federation`.
+
+```{note}
+Federation development is work in progress.
+```
+
+```{toctree}
+---
+maxdepth: 2
+numbered: true
+glob: true
+---
+architecture
+backend-communication
+*
+```
diff --git a/docs/src/understand/federation/index.rst b/docs/src/understand/federation/index.rst
deleted file mode 100644
index ed25458ed3..0000000000
--- a/docs/src/understand/federation/index.rst
+++ /dev/null
@@ -1,26 +0,0 @@
-.. 
_federation-understand: - -+++++++++++++++++ -Wire federation -+++++++++++++++++ - -Wire Federation, once implemented, aims to allow multiple Wire-server :ref:`backends ` to federate with each other. That means that a user 1 registered on backend A and a user 2 registered on backend B should be able to interact with each other as if they belonged to the same backend. - -.. note:: - Federation is as of January 2022 still work in progress, since the implementation of federation is ongoing, and certain design decision are still subject to change. Where possible documentation will indicate the state of implementation. - - Some sections of the documentation are still incomplete (indicated with a 'TODO' comment). Check back later for updates. - -.. - comment: The toctree directive below takes a list of the pages you want to appear in order, - and '*' is used to include any other pages in the federation directory in alphabetical order - -.. toctree:: - :maxdepth: 2 - :numbered: - :glob: - - introduction - architecture - *roadmap - * diff --git a/docs/src/understand/federation/introduction.rst b/docs/src/understand/federation/introduction.rst deleted file mode 100644 index cb136ace9e..0000000000 --- a/docs/src/understand/federation/introduction.rst +++ /dev/null @@ -1,35 +0,0 @@ -Introduction -============ - -Federation is a feature that allows a collection of Wire backends to enable the -establishment of connections among their respective users. - -Goals ------ - -If two Wire backends A and B are *federated*, the goal is for users of backend A -to be able to communicate with users of backend B and vice-versa in the same way -as if they were both part of the same backend. - -Federated backends should be able to identify, discover and authenticate -one-another using the domain names under which they are reachable via the -network. 
- -To enable federation, administrators of a Wire backend can decide to either -specifically list the backends that they want to federate with, or to allow federation with all Wire backends reachable from the network. - -Federation is facilitated by two backend components: the *Federation Ingress*, -which, as the name suggests, acts as ingress point for federated traffic and the -*Federator*, which acts as egress point and processes all ingress requests from -the Federation Ingress after the authentication step. - -Non-Goals ---------- - -We aim to integrate federation into the Wire backend following a step-by-step -process as described in the :ref:`federation roadmap`. Early -versions are not meant to enable a completely open federation, but rather a -closed network of federated backends with a restricted set of features. - -The aim of federation is not to replace the existing organizational structures -for Wire users such as teams and groups, but rather to complement them. diff --git a/docs/src/understand/federation/roadmap.rst b/docs/src/understand/federation/roadmap.rst deleted file mode 100644 index 03c427a6b2..0000000000 --- a/docs/src/understand/federation/roadmap.rst +++ /dev/null @@ -1,85 +0,0 @@ -.. _federation-roadmap: - -Implementation Roadmap -======================= - -Internally at Wire, we have divided implemention of federation into multiple milestones. Only the milestone on which implementation has started will be shown here (as later milestones are subject to internal change and re-ordering) - -M1 federation with proteus MVP ------------------------------- - -The first milestone **M1** is a minimum-viable-product that allows users on different Wire backends to send textual messages to users on other backends. - -M1 included support for: - -* user search -* creating group conversations -* message sending -* visual UX for showing federation. 
-* a way for on-premise (self-hosted) installations of wire to try out this implementation of federation by explicitly enabling it via configuration flags. -* Android, Web and iOS will be supported -* server2server discovery and authentication -* a way to specify an allow list of backends to federate with - - -M2 federation with calling/conferencing and assets --------------------------------------------------- - -The second milestone **M2** focused on: - -* federated calling -* federated conferencing -* basic federated asset support. - -**M2** also incorporated a previous interim release which added the following in a federated environment: - -* likes -* mentions -* read receipts and delivery notifications -* pings -* edit and delete messages - -Caveats: - -* Message delivery guarantees are weak if any backends are temporarily unavailable. -* If any backends are unavailable, data inconsistencies may occur. -* Federation with the production cloud version of wire.com is not yet supported. -* Federated conferencing requires an SFT in each domain represented in the conversation. The caller's SFT is the "anchor" SFT, to which federated SFTs connect: - - * SFTs must have valid certificates suitable for mutual authentication with federated SFTs. - * Currently all video streams are exchanged between the anchor SFT and each federated SFT. The SFTs select the relevant streams for each client as today, but inter-SFT traffic could use substantially more bandwidth than an SFT to client stream. - * The administrator needs to open ports between their SFTs and federated SFTs for signalling and media. -* Assets will be stored on the backend of the sender and fetched via the sender's backend with every access (there is no caching on a federated domain). If federated domains have different policies for allowed asset types or sizes, a user may receive notification of an asset which it is not allowed to fetch or view. - -.. 
note:: - A rough (Backend) Implementation Status as of January 2022: - - Tested in M2 scope: - * Federator as Egress, and Ingress support to allow backend-backend communication - * Long-running test environments - * Backend Discovery via SRV records - * Backend allow list support - * User search via exact handle - * Get user profile, user clients, and prekeys for their clients - * Create conversation with remote users - * Send a message in a conversation with remote users - * Server2server authentication - * connections - * Assets - * Calling - * Conferencing - - Partially done: - * client-server API changes for federation - * Other conversation features (removing users, archived/muted, ...) - -Additional Milestones ---------------------- - -Some additional milestones planned include the following features: - -* support more features (guest users, bots, ...) -* support better message delivery guarantees -* federation API versioning strategy -* support for wire-server installations to federate with wire.com -* MLS support diff --git a/docs/src/understand/helm.md b/docs/src/understand/helm.md new file mode 100644 index 0000000000..9b27659a75 --- /dev/null +++ b/docs/src/understand/helm.md @@ -0,0 +1,61 @@ +(understand-helm)= + +# Understanding helm + +See also the official [helm documentation](https://docs.helm.sh/). This page is meant to explain a few concepts directly relevant when installing wire-server helm charts. + +(understand-helm-overrides)= + +## Overriding helm configuration settings + +### Default values + +Default values are under a specific chart's `values.yaml` file, e.g. for the chart named `cassandra-ephemeral`, this file: [charts/cassandra-ephemeral/values.yaml](https://github.com/wireapp/wire-server/blob/develop/charts/cassandra-ephemeral/values.yaml). When you install or upgrade a chart, with e.g.: + +``` +helm upgrade --install my-cassandra wire/cassandra-ephemeral +``` + +Then the default values from above are used. 
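Conceptually, override files and sub-chart values are layered over these defaults like a recursive map merge, where later values win key by key. The sketch below is a rough model of those assumed semantics, not helm's actual merge code:

```python
def merge_values(defaults, overrides):
    """Recursively merge two value maps; keys in `overrides` win."""
    merged = dict(defaults)
    for key, value in overrides.items():
        if isinstance(value, dict) and isinstance(merged.get(key), dict):
            merged[key] = merge_values(merged[key], value)
        else:
            merged[key] = value
    return merged

# child chart's values.yaml default:
child_defaults = {"foo": "bar"}
# parent chart's values.yaml, namespaced to the sub chart:
parent_values = {"child": {"foo": "baz"}}

effective = merge_values(child_defaults, parent_values["child"])
assert effective["foo"] == "baz"  # the parent's namespaced value wins
```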
+
+### Overriding
+
+Overriding parts of the yaml configuration can be achieved by passing `-f path/to/override-file.yaml` when installing or upgrading a helm chart, like this:
+
+Create a file `my-values.yaml`:
+
+```yaml
+cassandra-ephemeral:
+  resources:
+    requests:
+      cpu: "2"
+```
+
+Now you can install that chart with a custom value (using 2 cpu cores):
+
+```
+helm upgrade --install my-cassandra wire/cassandra-ephemeral -f my-values.yaml
+```
+
+### Sub charts
+
+If a chart uses sub charts, there can be overrides in the parent
+chart's `values.yaml` file, if namespaced to the sub chart.
+Example: if chart `parent` includes chart `child`, and
+`child`'s `values.yaml` has a default value `foo: bar`, and the
+`parent` chart's `values.yaml` has a value
+
+```yaml
+child:
+  foo: baz
+```
+
+then the value that will be used for `foo` by default is `baz` when you install the parent chart.
+
+Note that if you `helm install parent` but wish to override values for `child`, you need to pass them indented underneath `child:`, as above.
+
+### Multiple overrides
+
+If `-f ` is used multiple times, the last file wins in case keys exist
+multiple times (there is no merge performed between multiple files passed to `-f`).
+This can lead to unexpected results. If you use multiple files with `-f`, ensure they don't overlap.
diff --git a/docs/src/understand/helm.rst b/docs/src/understand/helm.rst
deleted file mode 100644
index 3899186182..0000000000
--- a/docs/src/understand/helm.rst
+++ /dev/null
@@ -1,64 +0,0 @@
-.. _understand-helm:
-
-Understanding helm
-===================
-
-See also the official `helm documentation `__. This page is meant to explain a few concepts directly relevant when installing wire-server helm charts.
-
-
-.. _understand-helm-overrides:
-
-Overriding helm configuration settings
-------------------------------------------
-
-Default values
-^^^^^^^^^^^^^^
-
-Default values are under a specific chart's ``values.yaml`` file, e.g. 
for the chart named ``cassandra-ephemeral``, this file: `charts/cassandra-ephemeral/values.yaml `__. When you install or upgrade a chart, with e.g.:: - - helm upgrade --install my-cassandra wire/cassandra-ephemeral - -Then the default values from above are used. - -Overriding -^^^^^^^^^^^ - -Overriding parts of the yaml configuration can be achieved by passing ``-f path/to/override-file.yaml`` when installing or upgrading a helm chart, like this: - -Create file my-file.yaml: - -.. code:: yaml - - cassandra-ephemeral: - resources: - requests: - cpu: "2" - -Now you can install that chart with a custom value (using 2 cpu cores):: - - helm upgrade --install my-cassandra wire/cassandra-ephemeral -f my-values.yaml - -Sub charts -^^^^^^^^^^^ - -If a chart uses sub charts, there can be overrides in the parent -chart's ``values.yaml`` file, if namespaced to the sub chart. -Example: if chart ``parent`` includes chart ``child``, and -``child``'s ``values.yaml`` has a default value ``foo: bar``, and the -``parent`` chart's ``values.yaml`` has a value - -.. code:: yaml - - child: - foo: baz - -then the value that will be used for ``foo`` by default is ``baz`` when you install the parent chart. - -Note that if you ``helm install parent`` but wish to override values for ``child``, you need to pass them as above, indented underneath ``child:`` as above. - -Multiple overrides -^^^^^^^^^^^^^^^^^^^^ - -If ``-f `` is used multiple times, the last file wins in case keys exist -multiple times (there is no merge performed between multiple files passed to `-f`). -This can lead to unexpected results. If you use multiple files with `-f`, ensure they don't overlap. diff --git a/docs/src/understand/index.md b/docs/src/understand/index.md new file mode 100644 index 0000000000..f7ca56369a --- /dev/null +++ b/docs/src/understand/index.md @@ -0,0 +1,17 @@ +(understand)= + +# Understanding wire-server components + +This section is almost empty, more documentation will come soon... 
+ +```{toctree} +:glob: true +:maxdepth: 1 + +Overview +Audio/video calling, restund servers (TURN/STUN) +Conference Calling 2.0 (SFT) +Minio +Helm +Federation +``` diff --git a/docs/src/understand/index.rst b/docs/src/understand/index.rst deleted file mode 100644 index 3cca9519a8..0000000000 --- a/docs/src/understand/index.rst +++ /dev/null @@ -1,17 +0,0 @@ -.. _understand: - -Understanding wire-server components -==================================== - -This section is almost empty, more documentation will come soon... - -.. toctree:: - :maxdepth: 1 - :glob: - - Overview - Audio/video calling, restund servers (TURN/STUN) - Conference Calling 2.0 (SFT) - Minio - Helm - Federation diff --git a/docs/src/understand/minio.rst b/docs/src/understand/minio.md similarity index 86% rename from docs/src/understand/minio.rst rename to docs/src/understand/minio.md index 0c8fb60c38..afd4e1cd27 100644 --- a/docs/src/understand/minio.rst +++ b/docs/src/understand/minio.md @@ -1,10 +1,8 @@ -Minio -====== +# Minio -Official minio documentation available: ``_ +Official minio documentation available: [https://docs.min.io/](https://docs.min.io/) -Minio philosophy ------------------ +## Minio philosophy Minio clusters are configured with a fixed size once, and cannot be resized afterwards. It is thus important to make a good conservative estimate about @@ -23,8 +21,7 @@ cluster is starting to get full, you will need to set up a parallel bigger cluster, mirror everything to the new cluster, swap the DNS entries to the new one, and then decommission the old one. -Hurdles from the trenches: disk usage statistics; directories vs. disks ------------------------------------------------------------------------ +## Hurdles from the trenches: disk usage statistics; directories vs. disks I have done some more go code reading and have solved more minio mysteries. tl;dr: if you want to be safe, run minio on disks, not @@ -35,7 +32,7 @@ to figure out the amount of available blocks. 
If it's not a mount directory, it will just call `du .` in a for loop and update some counter (which sounds like a bad strategy to me). -https://github.com/minio/minio/blob/e6d8e272ced8b54872c6df1ef2ad556092280224/cmd/posix.go#L320-L352 + so the answer is: if you use minio, e.g. with mountpoints, it will silently do the right thing and if you configure it to use two directories on the same diff --git a/docs/src/understand/notes/port-ranges.md b/docs/src/understand/notes/port-ranges.md new file mode 100644 index 0000000000..94191336da --- /dev/null +++ b/docs/src/understand/notes/port-ranges.md @@ -0,0 +1,36 @@ +--- +orphan: true +--- + +(port-ranges)= + +# Note on port ranges + +Some parts of Wire (SFT, Restund) related to conference calling and Audio/Video, establish outgoing connections in a range of UDP ports. Which ports are used is determined by the kernel using `/proc/sys/net/ipv4/ip_local_port_range`. + +The /proc/sys/net/ipv4/ip_local_port_range defines the local port range that is used by TCP and UDP traffic to choose the local port. + +You will see in the parameters of this file two numbers: The first number is the first local port allowed for TCP and UDP traffic on the server, the second is the last local port number. + +When setting up firewall rules, this entire range must be allowed for both UDP and TCP. + +This range is defined by the system, and is set by the `/proc/sys/net/ipv4/ip_local_port_range` parameter. 
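When automating firewall setup, the two bounds can be parsed from that file programmatically. A minimal sketch in Python — the helper below is illustrative only, not part of Wire's tooling:

```python
# Parse the kernel's local port range, e.g. in order to generate
# firewall allow rules covering the whole range for UDP and TCP.

def parse_port_range(contents: str) -> tuple[int, int]:
    """Parse the two whitespace-separated numbers found in
    /proc/sys/net/ipv4/ip_local_port_range."""
    low, high = (int(n) for n in contents.split())
    if not (0 < low <= high <= 65535):
        raise ValueError(f"implausible port range: {low} {high}")
    return low, high

# On a real host you would read the file itself:
#   contents = open("/proc/sys/net/ipv4/ip_local_port_range").read()
low, high = parse_port_range("32768\t61000\n")
print(low, high)  # → 32768 61000
```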
+ +You can read this range for your system by running the following command: + +```bash +cat /proc/sys/net/ipv4/ip_local_port_range +``` + +Or by finding the following line in your `/etc/sysctl.conf` file, if it exists: + +``` +# Allowed local port range +net.ipv4.ip_local_port_range = 32768 61000 +``` + +To change the range, edit the `net.ipv4.ip_local_port_range` line in `/etc/sysctl.conf` (persistent across reboots), or run the following command (takes effect immediately, but is lost on reboot): + +```bash +echo "32768 61001" > /proc/sys/net/ipv4/ip_local_port_range +``` diff --git a/docs/src/understand/notes/port-ranges.rst b/docs/src/understand/notes/port-ranges.rst deleted file mode 100644 index 0d2cc4e13b..0000000000 --- a/docs/src/understand/notes/port-ranges.rst +++ /dev/null @@ -1,36 +0,0 @@ -:orphan: - -.. _port-ranges: - -Note on port ranges -=================== - -Some parts of Wire (SFT, Restund) related to conference calling and Audio/Video, establish outgoing connections in a range of UDP ports. Which ports are used is determined by the kernel using ``/proc/sys/net/ipv4/ip_local_port_range``. - -The /proc/sys/net/ipv4/ip_local_port_range defines the local port range that is used by TCP and UDP traffic to choose the local port. - -You will see in the parameters of this file two numbers: The first number is the first local port allowed for TCP and UDP traffic on the server, the second is the last local port number. - -When setting up firewall rules, this entire range must be allowed for both UDP and TCP. - -This range is defined by the system, and is set by the ``/proc/sys/net/ipv4/ip_local_port_range`` parameter. - -You read this range for your system by running the following command: - -.. code-block:: bash - - cat /proc/sys/net/ipv4/ip_local_port_range - -Or by finding the following line in your ``/etc/sysctl.conf`` file, if it exists: - -.. code-block:: - - # Allowed local port range - net.ipv4.ip_local_port_range = 32768 61000 - -To change the range, edit the ``/etc/sysctl.conf`` file or run the following command: - -..
code-block:: bash - - echo "32768 61001" > /proc/sys/net/ipv4/ip_local_port_range - diff --git a/docs/src/understand/overview.md b/docs/src/understand/overview.md new file mode 100644 index 0000000000..56f203f707 --- /dev/null +++ b/docs/src/understand/overview.md @@ -0,0 +1,143 @@ +(overview)= + +# Overview + +## Introduction + +In a simplified way, the server components for Wire involve the following: + +```{image} img/architecture-server-simplified.png +``` + +The Wire clients (such as the Wire app on your phone) connect, either directly or via a load balancer, to the "Wire Server". By "Wire Server" we mean multiple API server components that connect to each other, and which also connect to a few databases. Both the API components and the databases each run in a "cluster", meaning copies of the same program code run multiple times. This allows any one component to fail without users noticing that there is a problem (also called +"high-availability"). + +## Architecture and networking + +Note that the webapp, account pages, and team-settings, while strictly speaking not part of the backend, +are installed together with the rest and are therefore included here. + +### Focus on internet protocols + +```{image} ./img/architecture-tls-on-prem-2020-09.png +``` + +### Focus on high-availability + +The following diagram shows a typical setup with multiple VMs (Virtual Machines): + +```{image} ../how-to/install/img/architecture-server-ha.png +``` + +Wire clients (such as the Wire app on your phone) connect to a load balancer. + +The load balancer forwards traffic to the ingress inside the kubernetes VMs. (Restund is special, see {ref}`understand-restund` for details on how Restund works.) + +The nginx ingress pods inside kubernetes look at incoming traffic and forward it to the right place, depending on the requested URL.
For example, if a request comes in for `https://example-https.example.com`, it is forwarded to a component called `nginz`, which is the main entry point for the [wire-server API](https://github.com/wireapp/wire-server). If, however, a request comes in for `https://webapp.example.com`, it is forwarded to a component called [webapp](https://github.com/wireapp/wire-webapp), which hosts the graphical browser Wire client (as found when you open [https://app.wire.com](https://app.wire.com)). + +Wire-server needs a range of databases. Their names are: cassandra, elasticsearch, minio, redis, etcd. + +All the server components on one physical machine can connect to all the databases (also those on a different physical machine). The databases each connect to each other, e.g. cassandra on machine 1 will connect to the cassandra VMs on machines 2 and 3. + +### Backend components startup + +The Wire server backend is designed to run on a kubernetes cluster. From a high-level perspective, the startup sequence from machine power-on to the Wire server being ready to receive requests is as follows: + +1. *Kubernetes node power on*. Systemd starts the kubelet service, which makes the worker node available to kubernetes. For more details about kubernetes startup refer to [the official kubernetes documentation](https://kubernetes.io/docs/reference/setup-tools/kubeadm/implementation-details/). For details about the installation and configuration of kubernetes and worker nodes for Wire server see {ref}`Installing kubernetes and databases on VMs with ansible ` +2. *Kubernetes workload startup*. Kubernetes will ensure that Wire server workloads installed via helm are scheduled on available worker nodes. For more details about workload scheduling refer to [the official kubernetes documentation](https://kubernetes.io/docs/concepts/scheduling-eviction/kube-scheduler/).
For details about how to install Wire server with helm refer to {ref}`Installing wire-server (production) components using Helm `. +3. *Stateful workload startup*. Systemd starts the stateful services (cassandra, elasticsearch and minio). See for instance the [ansible-cassandra role](https://github.com/wireapp/ansible-cassandra/blob/master/tasks/systemd.yml#L10) and other database installation instructions in {ref}`Installing kubernetes and databases on VMs with ansible ` +4. *Other services*. Systemd starts the restund docker container. See the [ansible-restund role](https://github.com/wireapp/ansible-restund/blob/9807313a7c72ffa40e74f69d239404fd87db65ab/templates/restund.service.j2#L12-L19). For details about docker container startup, [consult the official documentation](https://docs.docker.com/get-started/overview/#docker-architecture). + +```{note} +For more information about Virtual Machine startup or operating system level service startup, please consult your virtualisation and operating system documentation. +``` + +### Focus on pods + +The Wire backend runs in [a kubernetes cluster](https://kubernetes.io/), with different components running in different [pods](https://kubernetes.io/docs/concepts/workloads/pods/). + +This is a list of those pods as found in a typical installation. + +HTTPS Entry points: + +- `nginx-ingress-controller-controller`: [Ingress](https://kubernetes.github.io/ingress-nginx/) exposes HTTP and HTTPS routes from outside the cluster to services within the cluster. +- `nginx-ingress-controller-default-backend`: [The default backend](https://kubernetes.github.io/ingress-nginx/user-guide/default-backend/) is a service which handles all URL paths and hosts the nginx controller doesn't understand (i.e., all the requests that are not mapped with an Ingress), serving 404 pages for them. Part of `nginx-ingress`. + +Frontend pods: + +- `webapp`: The fully functioning Web client (like <https://app.wire.com>).
[This pod](https://github.com/wireapp/wire-docs/blob/master/src/how-to/install/helm.rst#what-will-be-installed) serves the web interface itself, which then interfaces with other services/pods, such as the APIs. +- `account-pages`: [This pod](https://github.com/wireapp/wire-docs/blob/master/src/how-to/install/helm.rst#what-will-be-installed) serves Web pages for user account management (a few pages relating to e.g. password reset). +- `team-settings`: Team management Web interface (like <https://teams.wire.com>). + +Pods with an HTTP API: + +- `brig`: [The user management API service](https://github.com/wireapp/wire-server/tree/develop/services/brig). Connects to `cassandra` and `elasticsearch` for user data storage, sends emails and SMS for account validation. +- `cannon`: [WebSockets API Service](https://github.com/wireapp/wire-server/blob/develop/services/cannon/). Holds WebSocket connections. +- `cargohold`: [Asset Storage API Service](https://docs.wire.com/how-to/install/aws-prod.html). Amazon-AWS-S3-style services are used by `cargohold` to store encrypted files that users share with each other, such as images, files, and other static content, which we call assets. All assets except profile pictures are symmetrically encrypted before storage (and the keys are only known to the participants of the conversation in which an asset was shared - servers have no knowledge of the keys). +- `galley`: [Conversations and Teams API Service](https://docs.wire.com/understand/api-client-perspective/index.html). Data is stored in cassandra. Uses `gundeck` to send notifications to users. +- `nginz`: Public API Reverse Proxy (Nginx with custom libzauth module). A modified copy of nginx, compiled with a specific set of upstream extra modules and one important additional module, zauth_nginx_module. Responsible for user authentication validation.
Forwards traffic to all other API services (except `federator`). +- `spar`: [Single Sign On (SSO)](https://en.wikipedia.org/wiki/Single_sign-on) and [SCIM](https://en.wikipedia.org/wiki/System_for_Cross-domain_Identity_Management). Stores data in cassandra. +- `gundeck`: Push Notification Hub (WebSocket/mobile push notifications). Uses redis as a temporary data store for websocket presences. Uses Amazon SNS and SQS. +- `federator`: [Connects different wire installations together](https://docs.wire.com/understand/federation/index.html). Wire Federation, once implemented, aims to allow multiple Wire-server backends to federate with each other. That means that a user 1 registered on backend A and a user 2 registered on backend B should be able to interact with each other as if they belonged to the same backend. + +Supporting pods and data storage: + +- `cassandra-ephemeral` (or `cassandra-external`): [NoSQL Database management system](https://github.com/wireapp/wire-server/tree/develop/charts/cassandra-ephemeral) (<https://en.wikipedia.org/wiki/Apache_Cassandra>). Everything stateful in wire-server (cassandra is used by `brig`, `galley`, `gundeck` and `spar`) is stored in cassandra. + - `cassandra-ephemeral` is for test clusters where losing the data (users, conversations, ...) does not matter; it shouldn't be used in production environments. + - `cassandra-external` is used to point to an external cassandra cluster which is installed outside of Kubernetes. +- `demo-smtp`: In "demo" installations, used to replace a proper external SMTP server for the sending of emails (for example verification codes). In production environments, an actual SMTP server is used directly instead of this pod. (<https://github.com/namshi/docker-smtp>) +- `fluent-bit`: A log processor and forwarder, allowing collection of data such as metrics and logs from different sources. Not typically deployed.
(<https://fluentbit.io/>) +- `elasticsearch-ephemeral` (or `elasticsearch-external`): [Distributed search and analytics engine; stores some user information (name, handle, userid, teamid)](https://github.com/wireapp/wire-server/tree/develop/charts/elasticsearch-external). Information is duplicated here from cassandra to allow searching for users. Information here can be re-populated from data in cassandra (albeit with some downtime for search functionality) (<https://www.elastic.co/what-is/elasticsearch>). + - `elasticsearch-ephemeral` is for test clusters where persisting the data doesn't matter. + - `elasticsearch-external` refers to elasticsearch IPs located outside kubernetes by specifying IPs manually. +- `fake-aws-s3`: Amazon-AWS-S3-compatible object storage using MinIO (<https://min.io/>), used by cargohold to store (encrypted) assets such as files, posted images, profile pics, etc. +- `fake-aws-s3-reaper`: Creates the default S3 bucket inside fake-aws-s3. +- `fake-aws-sns`: [Amazon Simple Notification Service (Amazon SNS)](https://docs.aws.amazon.com/AmazonS3/latest/userguide/NotificationHowTo.html), used to push messages to mobile devices or distributed services. SNS can publish a message once, and deliver it one or more times. +- `fake-aws-sqs`: [Amazon Simple Queue Service (Amazon SQS) queue](https://docs.aws.amazon.com/AmazonS3/latest/userguide/NotificationHowTo.html), used to transmit any volume of data without requiring other services to be always available. +- `redis-ephemeral`: Stores websocket connection assignments (part of the `gundeck` / `cannon` architecture). + +Short-running jobs that run during installation/upgrade (these should usually be in the status 'Completed' except immediately after installation/upgrade): + +- `cassandra-migrations`: Used to initialize or upgrade the database schema in cassandra (for example when the software is upgraded to a new version). +- `galley-migrate-data`: Used to upgrade data in `cassandra` when the data model changes (for example when the software is upgraded to a new version).
+ +- `brig-index-migrate-data`: Used to upgrade data in `cassandra` when the data model changes in brig (for example when the software is upgraded to a new version). +- `elasticsearch-index-create`: [Creates](https://github.com/wireapp/wire-server/blob/develop/charts/elasticsearch-index/templates/create-index.yaml#L29) an Elasticsearch index for brig. +- `spar-migrate-data`: [Used to update spar data](https://github.com/wireapp/wire-server/blob/develop/charts/cassandra-migrations/templates/spar-migrate-data.yaml) in cassandra when schema changes occur. + +As an example, this is the result of running the `kubectl get pods --namespace wire` command to obtain a list of all pods in a typical cluster: + +```shell +NAMESPACE NAME READY STATUS RESTARTS AGE +wire account-pages-54bfcb997f-hwxlf 1/1 Running 0 85d +wire brig-58bc7f844d-rp2mx 1/1 Running 0 3h54m +wire brig-index-migrate-data-s7lmf 0/1 Completed 0 3h33m +wire cannon-0 1/1 Running 0 3h53m +wire cargohold-779bff9fc6-7d9hm 1/1 Running 0 3h54m +wire cassandra-ephemeral-0 1/1 Running 0 176d +wire cassandra-migrations-66n8d 0/1 Completed 0 3h34m +wire demo-smtp-784ddf6989-7zvsk 1/1 Running 0 176d +wire elasticsearch-ephemeral-86f4b8ff6f-fkjlk 1/1 Running 0 176d +wire elasticsearch-index-create-l5zbr 0/1 Completed 0 3h34m +wire fake-aws-s3-77d9447b8f-9n4fj 1/1 Running 0 176d +wire fake-aws-s3-reaper-78d9f58dd4-kf582 1/1 Running 0 176d +wire fake-aws-sns-6c7c4b7479-nzfj2 2/2 Running 0 176d +wire fake-aws-sqs-59fbfbcbd4-ptcz6 2/2 Running 0 176d +wire federator-6d7b66f4d5-xgkst 1/1 Running 0 3h54m +wire galley-5b47f7ff96-m9zrs 1/1 Running 0 3h54m +wire galley-migrate-data-97gn8 0/1 Completed 0 3h33m +wire gundeck-76c4599845-4f4pd 1/1 Running 0 3h54m +wire nginx-ingress-controller-controller-2nbkq 1/1 Running 0 9d +wire nginx-ingress-controller-controller-8ggw2 1/1 Running 0 9d +wire nginx-ingress-controller-default-backend-dd5c45cf-jlmbl 1/1 Running 0 176d +wire nginz-77d7586bd9-vwlrh 2/2 Running 0 3h54m +wire
redis-ephemeral-master-0 1/1 Running 0 176d +wire spar-8576b6845c-npb92 1/1 Running 0 3h54m +wire spar-migrate-data-lz5ls 0/1 Completed 0 3h33m +wire team-settings-86747b988b-5rt45 1/1 Running 0 50d +wire webapp-54458f756c-r7l6x 1/1 Running 0 3h54m +``` + +```{note} +This list is not exhaustive, and your installation may have additional pods running depending on your configuration. +``` diff --git a/docs/src/understand/overview.rst b/docs/src/understand/overview.rst deleted file mode 100644 index 71d2f2a45d..0000000000 --- a/docs/src/understand/overview.rst +++ /dev/null @@ -1,148 +0,0 @@ -Overview -======== - -Introduction ------------- - -In a simplified way, the server components for Wire involve the following: - -|arch-simplified| - -The Wire clients (such as the Wire app on your phone) connect either directly (or via a load balancer) to the "Wire Server". By "Wire Server" we mean multiple API server components that connect to each other, and which also connect to a few databases. Both the API components and the databases are each in a "cluster", which means copies of the same program code runs multiple times. This allows any one component to fail without users noticing that there is a problem (also called -"high-availability"). - -Architecture and networking ----------------------------- - -Note that the webapp, account pages, and team-settings, while in a way not part of the backend, -are installed with the rest and therefore included. - -Focus on internet protocols -~~~~~~~~~~~~~~~~~~~~~~~~~~~ - -|arch-proto| - - -Focus on high-availability -~~~~~~~~~~~~~~~~~~~~~~~~~~ - -The following diagram shows a usual setup with multiple VMs (Virtual Machines): - -|arch-ha| - -Wire clients (such as the Wire app on your phone) connect to a load balancer. - -The load balancer forwards traffic to the ingress inside the kubernetes VMs. (Restund is special, see :ref:`understand-restund` for details on how Restund works.)
- -The nginx ingress pods inside kubernetes look at incoming traffic, and forward that traffic on to the right place, depending on what's inside the URL passed. For example, if a request comes in for ``https://example-https.example.com``, it is forwarded to a component called ``nginz``, which is the main entry point for the `wire-server API `__. If, however, a request comes in for ``https://webapp.example.com``, it is forwarded to a component called `webapp `__, which hosts the graphical browser Wire client (as found when you open ``__). - -Wire-server needs a range of databases. Their names are: cassandra, elasticsearch, minio, redis, etcd. - -All the server components on one physical machine can connect to all the databases (also those on a different physical machine). The databases each connect to each-other, e.g. cassandra on machine 1 will connect to the cassandra VMs on machines 2 and 3. - -Backend components startup -~~~~~~~~~~~~~~~~~~~~~~~~~~ - -The Wire server backend is designed to run on a kubernetes cluster. From a high level perspective the startup sequence from machine power-on to the Wire server being ready to receive requests is as follow: - -1. *Kubernetes node power on*. Systemd starts the kubelet service which makes the worker node available to kubernetes. For more details about kubernetes startup refer to `the official kubernetes documentation `__. For details about the installation and configuration of kubernetes and worker nodes for Wire server see :ref:`Installing kubernetes and databases on VMs with ansible ` -2. *Kubernetes workload startup*. Kubernetes will ensure that Wire server workloads installed via helm are scheduled on available worker nodes. For more details about workload scheduling refer to `the official kubernetes documentation `__. For details about how to install Wire server with helm refer to :ref:`Installing wire-server (production) components using Helm `. -3. *Stateful workload startup*. 
Systemd starts the stateful services (cassandra, elasticsearch and minio). See for instance `ansible-cassandra role `__ and other database installation instructions in :ref:`Installing kubernetes and databases on VMs with ansible ` -4. *Other services*. Systemd starts the restund docker container. See `ansible-restund role `__. For details about docker container startup `consult the official documentation `__ - -.. note:: - For more information about Virual Machine startup or operating system level service startup, please consult your virtualisation and operating system documentation. - -.. |arch-simplified| image:: img/architecture-server-simplified.png -.. |arch-proto| image:: ./img/architecture-tls-on-prem-2020-09.png -.. |arch-ha| image:: ../how-to/install/img/architecture-server-ha.png - -Focus on pods -~~~~~~~~~~~~~ - -The Wire backend runs in `a kubernetes cluster `__, with different components running in different `pods `__. - -This is a list of those pods as found in a typical installation. - -HTTPS Entry points: - -* ``nginx-ingress-controller-controller``: `Ingress `__ exposes HTTP and HTTPS routes from outside the cluster to services within the cluster. -* ``nginx-ingress-controller-default-backend``: `The default backend `__ is a service which handles all URL paths and hosts the nginx controller doesn't understand (i.e., all the requests that are not mapped with an Ingress), that is 404 pages. Part of ``nginx-ingress``. - -Frontend pods: - -* ``webapp``: The fully functioning Web client (like https://app.wire.com). `This pod `__ serves the web interface itself, which then interfaces with other services/pods, such as the APIs. -* ``account-pages``: `This pod `__ serves Web pages for user account management (a few pages relating to e.g. password reset). -* ``team-settings``: Team management Web interface (like https://teams.wire.com). - -Pods with an HTTP API: - -* ``brig``: `The user management API service `__. 
Connects to ``cassandra`` and ``elastisearch`` for user data storage, sends emails and SMS for account validation. -* ``cannon``: `WebSockets API Service `__. Holds WebSocket connections. -* ``cargohold``: `Asset Storage API Service `__. Amazon-AWS-S3-style services are used by ``cargohold`` to store encrypted files that users are sharing amongst each other, such as images, files, and other static content, which we call assets. All assets except profile pictures are symmetrically encrypted before storage (and the keys are only known to the participants of the conversation in which an assets was shared - servers have no knowledge of the keys). -* ``galley``: `Conversations and Teams API Service `__. Data is stored in cassandra. Uses ``gundeck`` to send notifications to users. -* ``nginz``: Public API Reverse Proxy (Nginx with custom libzauth module). A modified copy of nginx, compiled with a specific set of upstream extra modules, and one important additional module zauth_nginx_module. Responsible for user authentication validation. Forwards traffic to all other API services (except federator) -* ``spar``: `Single Sign On (SSO) `__ and `SCIM `__. Stores data in cassandra. -* ``gundeck``: Push Notification Hub (WebSocket/mobile push notifications). Uses redis as a temporary data store for websocket presences. Uses Amazon SNS and SQS. -* ``federator``: `Connects different wire installations together `__. Wire Federation, once implemented, aims to allow multiple Wire-server backends to federate with each other. That means that a user 1 registered on backend A and a user 2 registered on backend B should be able to interact with each other as if they belonged to the same backend. - -Supporting pods and data storage: - -* ``cassandra-ephemeral`` (or ``cassandra-external``): `NoSQL Database management system `__ (https://en.wikipedia.org/wiki/Apache_Cassandra). 
Everything stateful in wire-server (cassandra is used by ``brig``, ``galley``, ``gundeck`` and ``spar``) is stored in cassandra. - * ``cassandra-ephemeral`` is for test clusters where persisting the data (i.e. loose users, conversations,...) does not matter, but this shouldn't be used in production environments. - * ``cassandra-external`` is used to point to an external cassandra cluster which is installed outside of Kubernetes. -* ``demo-smtp``: In "demo" installations, used to replace a proper external SMTP server for the sending of emails (for example verification codes). In production environments, an actual SMTP server is used directly instead of this pod. (https://github.com/namshi/docker-smtp) -* ``fluent-bit``: A log processor and forwarder, allowing collection of data such as metrics and logs from different sources. Not typically deployed. (https://fluentbit.io/) -* ``elastisearch-ephemeral`` (or ``elastisearch-external``): `Distributed search and analytics engines, stores some user information (name, handle, userid, teamid) `__. Information is duplicated here from cassandra to allow searching for users. Information here can be re-populated from data in cassandra (albeit with some downtime for search functionality) (https://www.elastic.co/what-is/elasticsearch). - * ``elastisearch-ephemeral`` is for test clusters where persisting the data doesn't matter. - * ``elastisearch-external`` refers to elasticsearch IPs located outside kubernetes by specifying IPs manually. -* ``fake-aws-s3``: Amazon-AWS-S3-compatible object storage using MinIO (https://min.io/), used by cargohold to store (encrypted) assets such as files, posted images, profile pics, etc. -* ``fake-aws-s3-reaper``: Creates the default S3 bucket inside fake-aws-s3. -* ``fake-aws-sns``. `Amazon Simple Notification Service (Amazon SNS) `__, used to push messages to mobile devices or distributed services. SNS can publish a message once, and deliver it one or more times. 
-* ``fake-aws-sqs``: `Amazon Simple Queue Service (Amazon SQS) queue `__, used to transmit any volume of data without requiring other services to be always available. -* ``redis-ephemeral``: Stores websocket connection assignments (part of the ``gundeck`` / ``cannon`` architecture). - -Short running jobs that run during installation/upgrade (these should usually be in the status 'Completed' except immediately after installation/upgrade): - -* ``cassandra-migrations``: Used to initialize or upgrade the database schema in cassandra (for example when the software is upgraded to a new version). -* ``galley-migrate-data``: Used to upgrade data in ``cassandra`` when the data model changes (for example when the software is upgraded to a new version). -* ``brig-index-migrate-data``: Used to upgrade data in ``cassandra`` when the data model changes in brig (for example when the software is upgraded to a new version) -* ``elastisearch-index-create``: `Creates `__ an Elastisearch index for brig. -* ``spar-migrate-data``: `Used to update spar data `__ in cassandra when schema changes occur. - -As an example, this is the result of running the ``kubectl get pods --namespace wire`` command to obtain a list of all pods in a typical cluster: - -.. 
code:: shell - - NAMESPACE NAME READY STATUS RESTARTS AGE - wire account-pages-54bfcb997f-hwxlf 1/1 Running 0 85d - wire brig-58bc7f844d-rp2mx 1/1 Running 0 3h54m - wire brig-index-migrate-data-s7lmf 0/1 Completed 0 3h33m - wire cannon-0 1/1 Running 0 3h53m - wire cargohold-779bff9fc6-7d9hm 1/1 Running 0 3h54m - wire cassandra-ephemeral-0 1/1 Running 0 176d - wire cassandra-migrations-66n8d 0/1 Completed 0 3h34m - wire demo-smtp-784ddf6989-7zvsk 1/1 Running 0 176d - wire elasticsearch-ephemeral-86f4b8ff6f-fkjlk 1/1 Running 0 176d - wire elasticsearch-index-create-l5zbr 0/1 Completed 0 3h34m - wire fake-aws-s3-77d9447b8f-9n4fj 1/1 Running 0 176d - wire fake-aws-s3-reaper-78d9f58dd4-kf582 1/1 Running 0 176d - wire fake-aws-sns-6c7c4b7479-nzfj2 2/2 Running 0 176d - wire fake-aws-sqs-59fbfbcbd4-ptcz6 2/2 Running 0 176d - wire federator-6d7b66f4d5-xgkst 1/1 Running 0 3h54m - wire galley-5b47f7ff96-m9zrs 1/1 Running 0 3h54m - wire galley-migrate-data-97gn8 0/1 Completed 0 3h33m - wire gundeck-76c4599845-4f4pd 1/1 Running 0 3h54m - wire nginx-ingress-controller-controller-2nbkq 1/1 Running 0 9d - wire nginx-ingress-controller-controller-8ggw2 1/1 Running 0 9d - wire nginx-ingress-controller-default-backend-dd5c45cf-jlmbl 1/1 Running 0 176d - wire nginz-77d7586bd9-vwlrh 2/2 Running 0 3h54m - wire redis-ephemeral-master-0 1/1 Running 0 176d - wire spar-8576b6845c-npb92 1/1 Running 0 3h54m - wire spar-migrate-data-lz5ls 0/1 Completed 0 3h33m - wire team-settings-86747b988b-5rt45 1/1 Running 0 50d - wire webapp-54458f756c-r7l6x 1/1 Running 0 3h54m - 1/1 Running 0 3h54m -.. note:: - - This list is not exhaustive, and your installation may have additional pods running depending on your configuration. 
diff --git a/docs/src/understand/restund.rst b/docs/src/understand/restund.md similarity index 61% rename from docs/src/understand/restund.rst rename to docs/src/understand/restund.md index 35014c28bf..0cb8dd6f6d 100644 --- a/docs/src/understand/restund.rst +++ b/docs/src/understand/restund.md @@ -1,25 +1,22 @@ -.. _understand-restund: +(understand-restund)= -Restund (TURN) servers ======================== +# Restund (TURN) servers -Introduction -~~~~~~~~~~~~ +## Introduction Restund servers allow two users on different networks (for example Alice who is in an office connected to an office router and Bob who is at home connected to a home router) to have a Wire audio or video call. More precisely: - Restund is a modular and flexible - `STUN `__ and - `TURN `__ - Server, with IPv4 and IPv6 support. +> Restund is a modular and flexible +> [STUN](https://en.wikipedia.org/wiki/STUN) and +> [TURN](https://en.wikipedia.org/wiki/Traversal_Using_Relays_around_NAT) +> Server, with IPv4 and IPv6 support. -.. _architecture-restund: +(architecture-restund)= -Architecture -~~~~~~~~~~~~ +## Architecture Since the restund servers help establish a connection between two users, they need to be reachable by both of these users, which usually @@ -32,29 +29,28 @@ Restund instance may communicate with other Restund instances. You can either have restund servers directly exposed to the public internet: -|architecture-restund| +```{image} img/architecture-restund.png +``` Or you can have them reachable by fronting them with a firewall or load balancer machine that may have a different IP than the server where restund is installed: -|architecture-restund-lb| +```{image} img/architecture-restund-lb.png +``` -What is it used for -~~~~~~~~~~~~~~~~~~~ +## What is it used for Restund is used to assist in NAT traversal. Its goal is to connect two clients who are (possibly both) behind NAT directly in a peer-to-peer fashion, for optimal call quality and lowest latency. 
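Restund's STUN role (reporting back the translated source address and port it observes) can be sketched with the Python standard library. This is an illustration only, building an RFC 5389 Binding Request by hand and decoding the XOR-MAPPED-ADDRESS attribute; the server name is a placeholder.

```python
import os
import socket
import struct

MAGIC_COOKIE = 0x2112A442  # fixed value from RFC 5389

def build_binding_request() -> bytes:
    """Build a 20-byte STUN Binding Request (type 0x0001, empty body)."""
    return struct.pack("!HHI12s", 0x0001, 0, MAGIC_COOKIE, os.urandom(12))

def parse_xor_mapped_address(resp: bytes) -> tuple:
    """Return (ip, port) from the XOR-MAPPED-ADDRESS attribute (IPv4 only)."""
    pos = 20  # skip the STUN message header
    while pos + 4 <= len(resp):
        attr_type, attr_len = struct.unpack_from("!HH", resp, pos)
        if attr_type == 0x0020:  # XOR-MAPPED-ADDRESS
            port = struct.unpack_from("!H", resp, pos + 6)[0] ^ (MAGIC_COOKIE >> 16)
            addr = struct.unpack_from("!I", resp, pos + 8)[0] ^ MAGIC_COOKIE
            return socket.inet_ntoa(struct.pack("!I", addr)), port
        pos += 4 + attr_len + (-attr_len % 4)  # attributes are padded to 32 bits
    raise ValueError("no XOR-MAPPED-ADDRESS attribute found")

# Usage against a live server (placeholder host name):
# sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
# sock.sendto(build_binding_request(), ("restund.example.com", 3478))
# print(parse_xor_mapped_address(sock.recv(1024)))  # your address as seen by Restund
```

The address is XORed with the magic cookie precisely because some NATs rewrite literal IP addresses they spot inside packet payloads.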
- client A sends a UDP packet to Restund, which will get address-translated by the router. Restund then sends back to the client what the source IP and the source port were that Restund observed. If the client then communicates this to Client B, Client B will be able to send data to that IP/port pair over UDP if it does so quickly enough. Client A and B will then have a peer-to-peer leg. - This is not always possible (e.g. symmetric NAT makes this technique impossible, as the router will NAT a different source port for each connection). In that case clients fall back to TURN, which asks Restund to @@ -63,17 +59,16 @@ allocate a relay address which relays packets between nodes A and B. Restund servers need to have a wide range of ports open to allocate such relay addresses. -Network -~~~~~~~ +## Network As briefly mentioned above, a TURN server functions as a bridge between networks. Networks which don't have a direct route defined between them usually have distinct address blocks. Depending on the address block they are configured with - such a block is either considered to be *public* or *private* -(aka special-purpose addresses `[RFC 6890] `__) +(aka special-purpose addresses [\[RFC 6890\]](https://tools.ietf.org/html/rfc6890)) -- `IPv4 private blocks `__ -- `IPv6 private blocks `__ +- [IPv4 private blocks](https://www.iana.org/assignments/iana-ipv4-special-registry/iana-ipv4-special-registry.xhtml) +- [IPv6 private blocks](https://www.iana.org/assignments/iana-ipv6-special-registry/iana-ipv6-special-registry.xhtml) In cases where a machine that is hosting the TURN server also connects to a *private* network in which other services are running, chances are @@ -81,56 +76,51 @@ that these services are being indirectly exposed through that TURN server. To prevent this kind of exposure, a TURN server has to be configured with an inclusive or exclusive list of address blocks to prevent undesired connections from being -established [1]_. At the moment (Feb. 
2021), this functionality is not yet available +established [^footnote-1]. At the moment (Feb. 2021), this functionality is not yet available with *Restund* on the application-level. Instead, the system-level firewall capabilities -must be utilized. The `IP ranges `__ -mentioned in the article [1]_ should be blocked for egress and, depending on the scenario, -also for ingress traffic. Tools like ``iptables`` or ``ufw`` can be used to set this up. - -.. [1] `Details about CVE-2020-26262, bypass of Coturn's default access control protection `__ +must be utilized. The [IP ranges](https://www.rtcsec.com/post/2021/01/details-about-cve-2020-26262-bypass-of-coturns-default-access-control-protection/#further-concerns-what-else) +mentioned in the article [^footnote-1] should be blocked for egress and, depending on the scenario, +also for ingress traffic. Tools like `iptables` or `ufw` can be used to set this up. +[^footnote-1]: [Details about CVE-2020-26262, bypass of Coturn's default access control protection](https://www.rtcsec.com/post/2021/01/details-about-cve-2020-26262-bypass-of-coturns-default-access-control-protection/) -.. _understand-restund-protocal-and-ports: +(understand-restund-protocal-and-ports)= -Protocols and open ports -~~~~~~~~~~~~~~~~~~~~~~~~ +## Protocols and open ports Restund servers provide the best audio/video connections if end-user devices -can connect to them via UDP. +can connect to them via UDP. -In this case, a firewall (if any) needs to allow and/or forward the complete :ref:`default port range ` for incoming UDP traffic. +In this case, a firewall (if any) needs to allow and/or forward the complete {ref}`default port range ` for incoming UDP traffic. -Ports for allocations are allocated from the :ref:`default port range `, for more information on this port range, how to read and change it, and how to configure your firewall, see :ref:`this note `. 
+Ports for allocations are allocated from the {ref}`default port range `. For more information on this port range, how to read and change it, and how to configure your firewall, see {ref}`this note `. -In case e.g. office firewall rules disallow UDP traffic in this range, there is a possibility to use TCP instead, at the expense of call quality. +If, for example, office firewall rules disallow UDP traffic in this range, it is possible to use TCP instead, at the expense of call quality. -Port ``3478`` is the default control port, +Port `3478` is the default control port, however one UDP port per active connection is required, so a whole port range must be available and reachable from the outside. -If *Conference Calling 2.0* (:ref:`SFT `) is enabled, a Restund instance, -additionally, must be allowed to communicate with ::ref:`SFT instances ` +If *Conference Calling 2.0* ({ref}`SFT `) is enabled, a Restund instance +additionally must be allowed to communicate with {ref}`SFT instances ` on the same UDP ports mentioned above. In this scenario a Restund server becomes a sort of proxy for the client, if the client is not able to establish a media channel between itself and the SFT server. -*For more information, please refer to the source code of the Ansible role:* `restund `__. +*For more information, please refer to the source code of the Ansible role:* [restund](https://github.com/wireapp/ansible-restund/blob/master/tasks/firewall.yml). -Control ports -^^^^^^^^^^^^^ +### Control ports -Restund listens for control messages on port ``3478`` on both UDP and TCP. It -also can listen on port ``5349`` which uses TLS. One can reconfigure both ports. 
+For example, port `5349` can be reconfigured to be port `443`, so that TURN traffic cannot be distinguished from any other TLS traffic. This might help with overcoming certain firewall restrictions. You can instead use (if that's -easier with firewall rules) for example ports ``80`` and ``443`` (requires to +easier with firewall rules) for example ports `80` and `443` (requires running restund as root) or do a redirect from a load balancer (if using one) to -redirect ``443 -> 5349`` and ``80 -> 3478``. +redirect `443 -> 5349` and `80 -> 3478`. - -Amount of users and file descriptors -~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ +## Amount of users and file descriptors Each allocation (active connection by one participant) requires 1 or 2 file descriptors, so ensure you increase your file descriptor limits in @@ -140,33 +130,27 @@ Currently one restund server can have a maximum of 64000 allocations. If you have more users than that in an active call, you need to deploy more restund servers. -Load balancing and high-availability -~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ +## Load balancing and high-availability Load balancing is not possible, since STUN/TURN is a stateful protocol, -so UDP packets addressed to ``restund server 1``, if by means of a load -balancer were to end up at ``restund server 2``, would get dropped, as +so UDP packets addressed to `restund server 1`, if they were to end up at +`restund server 2` by way of a load balancer, would get dropped, as the second server doesn't know the source address. High availability is nevertheless ensured by having and advertising more than one restund server. Instead of relying on a load balancer, clients switch to another server if theirs fails. 
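The file descriptor sizing mentioned above (1 or 2 descriptors per allocation, up to 64000 allocations per server) can be applied directly as a limit. Below is a sketch for a systemd-managed restund; the unit name `restund.service` and the chosen limit are assumptions, adapt them to your deployment.

```shell
# Drop-in override raising the open-file limit for the restund unit
mkdir -p /etc/systemd/system/restund.service.d
cat > /etc/systemd/system/restund.service.d/limits.conf <<'EOF'
[Service]
# 64000 allocations x 2 file descriptors, plus headroom
LimitNOFILE=131072
EOF
systemctl daemon-reload
systemctl restart restund
```

A systemd drop-in is preferable to editing the packaged unit file, since it survives package upgrades.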
-Discovery and establishing a call -~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ +## Discovery and establishing a call A simplified flow of how restund servers, along with wire-server, are used to establish a call: -|flow-restund| +```{image} img/flow-restund.png +``` -DNS -~~~ +## DNS Usually DNS records are used which point to the public IPs of the restund servers (or of the respective firewall or load balancer machines). These DNS names are then used when configuring wire-server. - -.. |architecture-restund| image:: img/architecture-restund.png -.. |architecture-restund-lb| image:: img/architecture-restund-lb.png -.. |flow-restund| image:: img/flow-restund.png diff --git a/docs/src/understand/sft.rst b/docs/src/understand/sft.md similarity index 77% rename from docs/src/understand/sft.rst rename to docs/src/understand/sft.md index aec41fe742..28f2c432d6 100644 --- a/docs/src/understand/sft.rst +++ b/docs/src/understand/sft.md @@ -1,82 +1,76 @@ -.. _understand-sft: +(understand-sft)= -Conference Calling 2.0 (aka SFT) -================================ +# Conference Calling 2.0 (aka SFT) -Background ---------- +## Background Previously, Wire group calls were implemented as a mesh, where each participant was connected to every other participant in a peer-to-peer fashion. This meant that a client would have to upload its video and audio feeds separately for each participant. In practice this meant that the number of participants was limited by the upload bandwidth of the clients. -Wire now has a signalling-forwarding unit called `SFT `__ which allows clients to upload once and +Wire now has a signalling-forwarding unit called [SFT](https://github.com/wireapp/wire-avs-service) which allows clients to upload once and then the SFT fans it out to the other clients. Because connections are no longer end-to-end, the DTLS encryption offered by WebRTC is not sufficient, as the encryption is terminated at the server side. 
To prevent Wire from seeing the contents of calls, SFT utilises WebRTC Insertable Streams to encrypt the packets a second time with a group key that is not known to the server. With SFT it is thus possible to have conference calls with many participants without compromising end-to-end security. -.. note:: - We will describe conferencing first in a single domain in this section. - Conferencing in an environment with Federation is described in the - :ref:`federated conferencing` section. +```{note} +We will describe conferencing first in a single domain in this section. +Conferencing in an environment with Federation is described in the +{ref}`federated conferencing` section. +``` - -Architecture ------------- +## Architecture The following diagram is centered around SFT and its role within a calling setup. Restund is seen as a mere client proxy and its relation to and interaction with a client is explained -:ref:`here `. The diagram shows that a call resides on a single SFT instance +{ref}`here `. The diagram shows that a call resides on a single SFT instance and that the instance allocates at least one port for media transport per participant in the call. -.. figure:: img/architecture-sft.png - - SFT signaling, and media sending from the perspective of one caller +```{figure} img/architecture-sft.png +SFT signaling, and media sending from the perspective of one caller +``` - -Establishing a call ------------------- +## Establishing a call 1. *Client A* wants to initiate a call. It contacts all the known SFT servers via HTTPS. The SFT server that is quickest to respond is the one that will be used by the client. - (Request 1: ``CONFCONN``) + (Request 1: `CONFCONN`) 2. 
*Client A* gathers connection candidates (own public IP, public IP of the network the - client is in with the help of STUN, through TURN servers) [1]_ for the SFT server to + client is in with the help of STUN, through TURN servers) [^footnote-1] for the SFT server to establish a media connection to *Client A*. This information is then sent - from *Client A* to the chosen SFT server via HTTPS request. (Request 2: ``SETUP``) + from *Client A* to the chosen SFT server via an HTTPS request. (Request 2: `SETUP`) 3. The SFT server tests which of the connection candidates actually work, meaning it goes through all the candidates until one leads to a successful media connection between itself and *client A* -4. *Client A* sends an end-to-end encrypted message [2]_ ``CONFSTART`` to all members of chat, which contains +4. *Client A* sends an end-to-end encrypted message [^footnote-2] `CONFSTART` to all members of the chat, which contains the URL of the SFT server that is being used for the call. 5. Any other client that wants to join the call does 1. and 2., with the exception of **only** contacting one SFT server, i.e. the one that *client A* chose and told all other - potential participants about via ``CONFSTART`` message + potential participants about via the `CONFSTART` message At that point a media connection between *client A* and the SFT server has been established, and they continue talking to each other using the data channel, which uses the media connection (i.e. no more HTTPS at that point). There are just 2 HTTPS request/response sequences per participant. -.. [1] STUN & TURN are both part of a :ref:`Restund server ` -.. [2] This encrypted message is sent in the same conversation, hidden from user's view but - interpreted by user's clients. It is sent via backend servers and forwarded to other - conversation participants, not to or via SFT. 
+[^footnote-1]: STUN & TURN are both part of a {ref}`Restund server ` +[^footnote-2]: This encrypted message is sent in the same conversation, hidden from user's view but + interpreted by user's clients. It is sent via backend servers and forwarded to other + conversation participants, not to or via SFT. -Prerequisites -------------- +## Prerequisites For Conference Calling to function properly, clients need to be able to reach the HTTPS interface of the SFT server(s) - either directly or through a load balancer sitting in front of the servers. This is only needed for the call initiation/joining part. Additionally, for the media connection, clients and SFT servers should be able to reach each other -via UDP (see :ref:`Firewall rules `). +via UDP (see {ref}`Firewall rules `). If that is not possible, then at least SFT servers and Restund servers should be able to reach each other via UDP - and clients may connect via UDP and/or TCP to Restund servers -(see :ref:`Protocols and open ports `), which in +(see {ref}`Protocols and open ports `), which in turn will connect to the SFT server. In the unlikely scenario where no UDP is allowed whatsoever or SFT servers may not be able to reach the Restund servers that clients are using to make themselves reachable, an SFT server itself can @@ -90,19 +84,17 @@ Due to this `hostNetwork` limitation only one SFT instance can run per node so i As a rule of thumb you will need 1vCPU of compute per 50 participants. SFT will utilise multiple cores. You can use this rule of thumb to decide how many kubernetes nodes you need to provision. -For more information about capacity planning and networking please refer to the `technical documentation `__ +For more information about capacity planning and networking please refer to the [technical documentation](https://github.com/wireapp/wire-server/blob/eab0ce1ff335889bc5a187c51872dfd0e78cc22b/charts/sftd/README.md) -.. 
_federated-sft: +(federated-sft)= -Federated Conference Calling -============================ +# Federated Conference Calling -Conferencing in a federated environment assumes that each domain participating in a +Conferencing in a federated environment assumes that each domain participating in a conference will use an SFT in its own domain. The SFT in the caller's domain is called -the `anchor SFT`. +the `anchor SFT`. -Multi-SFT Architecture ----------------------- +## Multi-SFT Architecture With support for federation, each domain participating in a conference is responsible to make available an SFT for users in that domain. The SFT in the domain of the caller is @@ -116,7 +108,7 @@ initiates a call in a federated conversation which contains herself, Adam also i A, and Bob and Beth in domain B. Alice's client first creates a conference and is assigned a conference URL on SFT A2. Because the SFT is configured for federation, it assumes the role of anchor and also returns an IP address and port (the `anchor SFT tuple`) -which can be used by any federated SFTs which need to connect. (Alice sets up her media +which can be used by any federated SFTs which need to connect. (Alice sets up her media connection with SFT A2 as normal). Alice's client forwards the conference URL and the anchor SFT tuple to the other @@ -128,9 +120,9 @@ to the anchor SFT using the anchor SFT tuple and provides the SFT URL. (Bob's cl also sets up media with SFT B1 normally.) At this point all paths are established and the conference call can happen normally. -.. figure:: img/multi-sft-noturn.png - - Basic Multi-SFT conference initiated by Alice in domain A, with Bob in domain B +```{figure} img/multi-sft-noturn.png +Basic Multi-SFT conference initiated by Alice in domain A, with Bob in domain B +``` Because some customers do not wish to expose their SFTs directly to hosts on the public Internet, the SFTs can allocate a port on a TURN server. 
In this way, only the IP @@ -140,16 +132,16 @@ this scenario. In this configuration, SFT A2 requests an allocation from the fe TURN server in domain A before responding to Alice. The anchor SFT tuple is the address allocated on the federation TURN server in domain A. -.. figure:: img/multi-sft-turn.png - - Multi-SFT conference with TURN servers between federated SFTs +```{figure} img/multi-sft-turn.png +Multi-SFT conference with TURN servers between federated SFTs +``` Finally, for extremely restrictive firewall environments, the TURN servers used for federated SFT traffic can be further secured with a TURN to TURN mutually authenticated DTLS connection. The SFTs allocate a channel inside this DTLS connection per conference. The channel number is included along with the anchor SFT tuple returned to Alice, which Alice shares with the conversation, which Bob sends to SFT B1, -and which SFT B1 uses when forming its DTLS connection to SFT A2. This DTLS connection +and which SFT B1 uses when forming its DTLS connection to SFT A2. This DTLS connection runs on a dedicated port number which is not used for regular TURN traffic. Under this configuration, only that single IP address and port is exposed for each federated TURN server with all SFT traffic multiplexed over the connection. The diagram below shows @@ -157,7 +149,6 @@ this scenario. Note that this TURN DTLS multiplexing is only used for SFT to SF communication into federated group calls, and does not affect the connectivity requirements for normal one-on-one calls. -.. 
figure:: img/multi-sft-turn-dtls.png - - Multi-SFT conference with federated TURN servers with DTLS multiplexing - +```{figure} img/multi-sft-turn-dtls.png +Multi-SFT conference with federated TURN servers with DTLS multiplexing +``` diff --git a/docs/src/understand/single-sign-on/design.rst b/docs/src/understand/single-sign-on/design.rst deleted file mode 100644 index af2102e363..0000000000 --- a/docs/src/understand/single-sign-on/design.rst +++ /dev/null @@ -1,3 +0,0 @@ -:orphan: - -This page is gone. Please visit `this one <./main.html>`_ diff --git a/docs/src/understand/single-sign-on/main.rst b/docs/src/understand/single-sign-on/main.rst deleted file mode 100644 index 8603a8fd71..0000000000 --- a/docs/src/understand/single-sign-on/main.rst +++ /dev/null @@ -1,560 +0,0 @@ - -Single sign-on and user provisioning ------------------------------------- - -.. contents:: - -Introduction -~~~~~~~~~~~~ - -This page is intended as a manual for administrator users in need of setting up :term:`SSO` and provisionning users using :term:`SCIM` on their installation of Wire. - -Historically and by default, Wire's user authentication method is via phone or password. This has security implications and does not scale. - -Solution: :term:`SSO` with :term:`SAML`! `(Security Assertion Markup Language) `_ - -:term:`SSO` systems allow users to identify on multiple systems (including Wire once configured as such) using a single ID and password. - -You can find some of the advantages of :term:`SSO` over more traditional schemes `here `_. - -Also historically, wire has allowed team admins and owners to manage their users in the team management app. - -This does not scale as it requires a lot of manual labor for each user. - -The solution we offer to solve this issue is implementing :term:`SCIM` `(System for Cross-domain Identity Management) `_ - -:term:`SCIM` is an interface that allows both software (for example Active Directory) and custom scripts to manage Identities (users) in bulk. 
- -This page explains how to set up :term:`SCIM` and then use it. - -.. note:: - Note that it is recommended to use both :term:`SSO` and :term:`SCIM` (as opposed to just :term:`SSO` alone). - The reason is if you only use :term:`SSO`, but do not configure/implement :term:`SCIM`, you will experience reduced functionality. - In particular, without :term:`SCIM` all Wire users will be named according their e-mail address and won't have any rich profiles. - See below in the :term:`SCIM` section for a more detailled explanation. - - -Further reading -~~~~~~~~~~~~~~~ - -If you can't find the answers to your questions here, we have a few -more documents. Some of them are very technical, some may not be up -to date any more, and we are planning to move many of them into this -page. But for now they may be worth checking out. - -- :ref:`Trouble shooting & FAQ ` -- https://support.wire.com/hc/en-us/sections/360000580658-Authentication -- https://github.com/wireapp/wire-server/blob/1753b790e5cfb2d35e857648c88bcad3ac329f01/docs/reference/spar-braindump.md -- https://github.com/wireapp/wire-server/tree/1753b790e5cfb2d35e857648c88bcad3ac329f01/docs/reference/provisioning/ - - -Definitions -~~~~~~~~~~~ - -The following concepts need to be understood to use the present manual: - -.. glossary:: - - SCIM - System for Cross-domain Identity Management (:term:`SCIM`) is a standard for automating the exchange of user identity information between identity domains, or IT systems. - - One example might be that as a company onboards new employees and separates from existing employees, they are added and removed from the company's electronic employee directory. :term:`SCIM` could be used to automatically add/delete (or, provision/de-provision) accounts for those users in external systems such as G Suite, Office 365, or Salesforce.com. 
Then, a new user account would exist in the external systems for each new employee, and the user accounts for former employees might no longer exist in those systems. - - See: `System for Cross-domain Identity Management at Wikipedia `_ - - In the context of Wire, SCIM is the interface offered by the Wire service (in particular the spar service) that allows for single or mass automated addition/removal of user accounts. - - SSO - - Single sign-on (:term:`SSO`) is an authentication scheme that allows a user to log in with a single ID and password to any of several organizationally related, yet independent, software systems. - - True single sign-on allows the user to log in once and access different, independent services without re-entering authentication factors. - - See: `Single-Sign-On at Wikipedia `_ - - SAML - - Security Assertion Markup Language (:term:`SAML`, pronounced SAM-el, /'sæməl/) is an open standard for exchanging authentication and authorization data between parties, in particular, between an identity provider and a service provider. :term:`SAML` is an XML-based markup language for security assertions (statements that service providers use to make access-control decisions). :term:`SAML` is also: - - * A set of XML-based protocol messages - * A set of protocol message bindings - * A set of profiles (utilizing all of the above) - - An important use case that :term:`SAML` addresses is web-browser `single sign-on (SSO) `_ . Single sign-on is relatively easy to accomplish within a security domain (using cookies, for example) but extending :term:`SSO` across security domains is more difficult and resulted in the proliferation of non-interoperable proprietary technologies. The `SAML Web Browser SSO `_ profile was specified and standardized to promote interoperability. - - See: `SAML at Wikipedia `_ - - In the context of Wire, SAML is the standard/protocol used by the Wire services (in particular the spar service) to provide the Single Sign On feature. 
- - IdP - - In the context of Wire, an identity provider (abbreviated :term:`IdP`) is a service that provides SAML single sign-on (:term:`SSO`) credentials that give users access to Wire. - - Curl - - :term:`Curl` (pronounced ":term:`Curl`") is a command line tool used to download files over the HTTP (web) protocol. For example, `curl http://wire.com` will download the ``wire.com`` web page. - - In this manual, it is used to contact API (Application Programming Interface) endpoints manually, where those endpoints would normally be accessed by code or other software. - - This can be used either for illustrative purposes (to "show" how the endpoints can be used) or to allow the manual execution of some simple tasks. - - For example (not a real endpoint) `curl http://api.wire.com/delete_user/thomas` would (schematically) execute the :term:`Curl` command, which would contact the wire.com API and delete the user named "thomas". - - Running this command in a terminal would cause the :term:`Curl` command to access this URL, and the API at that URL would execute the requested action. - - See: `curl at Wikipedia `__ - - - Spar - - The Wire backend software stack is composed of different services, `running as pods <../overview.html#focus-on-pods>`__ in a kubernetes cluster. - - One of those pods is the "spar" service. That service/pod is dedicated to the providing :term:`SSO` (using :term:`SAML`) and :term:`SCIM` services. This page is the manual for this service. - - In the context of :term:`SCIM`, Wire's spar service is the `Service Provider `__ that Identity Management Software - (for example Azure, Okta, Ping Identity, SailPoint, Technology Nexus, etc.) uses for user account provisioning and deprovisioning. - -User login for the first time with SSO -~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ - -:term:`SSO` allows users to register and log into Wire with their company credentials that they use on other software in their workplace. -No need to remember another password. 
- -When a team is set up on Wire, the administrators can provide users a login code or link that they can use to go straight to their company's login page. - -Here is what this looks from a user's perspective: - -1. Download Wire. -2. Select and copy the code that your company gave you / the administrator generated -3. Open Wire. Wire may detect the code on your clipboard and open a pop-up window with a text field. - Wire will automatically put the code into the text field. - If so, click Log in and go to step 8. -4. If no pop-up: click Login on the first screen. -5. Click Enterprise Login. -6. A pop-up will appear. In the text field, paste or type the code your company gave you. -7. Click Log in. -8. Wire will load your company's login page: log in with your company credentials. - - -SAML/SSO -~~~~~~~~ - -Introduction -^^^^^^^^^^^^ - -SSO (Single Sign-On) is technology allowing users to sign into multiple services with a single identity provider/credential. - -SSO is about `authentication`, not `provisioning` (create, update, remove user accounts). To learn more about the latter, continue `below `_. - -For example, if a company already has SSO setup for some of their services, and they start using Wire, they can use Wire's SSO support to add Wire to the set of services their users will be able to sign into with their existing SSO credentials. - -Here is a blog post we like about how SAML works: https://duo.com/blog/the-beer-drinkers-guide-to-saml - -And here is a diagram that explains it in slightly more technical terms: - -.. image:: Wire_SAML_Flow.png - -Here is a critique of XML/DSig security (which SAML relies on): https://www.cs.auckland.ac.nz/~pgut001/pubs/xmlsec.txt - -Terminology and concepts -^^^^^^^^^^^^^^^^^^^^^^^^ - -* End User / Browser: The end user is generally a human, an Application (Wire Client) or a browser (agent) who accesses the Service Provider to get access to a service or a protected resource. 
- The browser carrries out all the redirections from the SP to the IdP and vice versa. -* Service Provider (SP): The entity (here Wire software) that provides its protected resource when an end user tries to access this resource. To accomplish the SAML based SSO authentication, the Service Provider - must have the Identity Provider's metadata. -* Identity Provider (IdP): Defines the entity that provides the user identities, including the ability to authenticate a user to get access to a protected resource / application from a Service Provider. To accomplish - the SAML based SSO authentication, the IdP must have the Service Provider's metadata. -* SAML Request: This is the authentication request generated by the Service Provider to request an authentication from the Identity Provider for verifying the user's identity. -* SAML Response: The SAML Response contains the cryptographically signed assertion of the authenticated user and is generated by the Identity Provider. - -(Definitons adapted from `collab.net `_) - -.. _Setting up SSO externally: - -Setting up SSO externally -^^^^^^^^^^^^^^^^^^^^^^^^^ - -To set up :term:`SSO` for a given Wire installation, the Team owner/administrator must enable it. - -The first step is to configure the Identity Provider: you'll need to register Wire as a service provider in your Identity Provider. - -We've put together guides for registering with different providers: - -.. 
toctree:: - :maxdepth: 1 - - Instructions for Okta <../../how-to/single-sign-on/okta/main.rst> - Instructions for Centrify <../../how-to/single-sign-on/centrify/main.rst> - Instructions for Azure <../../how-to/single-sign-on/azure/main.rst> - Some screenshots for ADFS <../../how-to/single-sign-on/adfs/main.rst> - Generic instructions (try this if none of the above are applicable) <../../how-to/single-sign-on/generic-setup.rst> - Trouble shooting & FAQ <../../how-to/single-sign-on/trouble-shooting.rst> - -As you do this, make sure you take note of your :term:`IdP` metadata, which you will need for the next step. - -Once you are finished with registering Wire to your :term:`IdP`, move on to the next step, setting up :term:`SSO` internally. - -Setting up SSO internally -^^^^^^^^^^^^^^^^^^^^^^^^^ - -Now that you've registered Wire with your identity provider (:term:`IdP`), you can enable :term:`SSO` for your team on Wire. - -On Desktop: - -* Click Settings and click "Manage Team"; or go directly to teams.wire.com, or if you have an on-premise install, go to teams..com -* Login with your account credentials. -* Click "Customization". Here you will see the section for :term:`SSO`. -* Click the blue down arrow. -* Click "Add :term:`SAML` Connection". -* Provide the :term:`IdP` metadata. To find out more about retrieving this for your provider, see the guides in the "Setting up :term:`SSO` externally" step just above. -* Click "Save". -* Wire will now validate the document to set up the :term:`SAML` connection. -* If the data is valid, you will return to the Settings page. -* The page shows the information you need to log in with :term:`SSO`. Copy the login code or URL and send it to your team members or partners. For more information see: Logging in with :term:`SSO`. 
- -What to expect after :term:`SSO` is enabled: - -Anyone with a login through your :term:`SAML` identity provider (:term:`IdP`) and with access to the Wire app will be able to register and log in to your team using the :term:`SSO` Login URL and/or Code. - -Take care to share the code only with members of your team. - -If you haven't set up :term:`SCIM` (`we recommend you do <#introduction>`_), your team members can create accounts on Wire using :term:`SSO` simply by logging in, and will appear on the People tab of the team management page. - -If team members already have Wire accounts, use :term:`SCIM` to associate them with the :term:`SAML` credentials. If you make a mistake here, you may end up with several accounts for the same person. - -.. _User provisioning: - -User provisioning (SCIM/LDAP) -~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ - -SCIM/LDAP is about `provisioning` (create, update, remove user accounts), not `authentication`. To learn more about the latter, continue `above `_. - -Wire supports the `SCIM `__ (`RFC 7643 `__) protocol to create, update and delete users. - -If your user data is stored in an LDAP data source like Active Directory or OpenLDAP, you can use our docker-based `ldap-scim-bridge `__ to connect it to Wire. - -Note that connecting a SCIM client to Wire also disables the functionality to create new users in the SSO login process. This functionality is disabled when a token is created (see below) and re-enabled when all tokens have been deleted. - -To set up the connection of your SCIM client (e.g. Azure Active Directory) you need to provide: - -1. The URL under which Wire's SCIM API is hosted: ``https://prod-nginz-https.wire.com/scim/v2``. - If you are hosting your own instance of Wire then the URL is ``https:///scim/v2``, where ```` is where you are serving Wire's public endpoints. Some SCIM clients append ``/v2`` to the URL you provide.
If this happens (check the URL mentioned in error messages of your SCIM client) then please provide the URL without the ``/v2`` suffix, i.e. ``https://prod-nginz-https.wire.com/scim`` or ``https:///scim``. - -2. A secret token which authorizes the use of the SCIM API. Use the `wire_scim_token.py `__ - script to generate a token. To run the script you need access to a user account with "admin" privileges that can log in via email and password. Note that the token is independent of the admin account that created it, i.e. the token remains valid if the admin account gets deleted or changed. - -You need to configure your SCIM client to use the following mandatory SCIM attributes: - -1. Set the ``userName`` attribute to the desired user handle (the handle is shown - with an @ prefix in apps). It must be unique across the entire Wire Cloud - (or unique on your own instance), and consist of the characters ``a-z0-9_.-`` - (no capital letters). - -2. Set the ``displayName`` attribute to the user's desired display name, e.g. "Jane Doe". - It must consist of 1-128 Unicode characters. It does not need to be unique. - -3. The ``externalId`` attribute: - - a. If you are using Wire's SAML SSO feature then set the ``externalId`` attribute to the same identifier used for ``NameID`` in your SAML configuration. - - b. If you are using email/password authentication then set the ``externalId`` - attribute to the user's email address. The user will receive an invitation email during provisioning. Also note that the account will be set to ``"active": false`` until the user has accepted the invitation and activated the account. - -You can optionally make use of Wire's ``urn:wire:scim:schemas:profile:1.0`` extension field to store arbitrary user profile data that is shown in the user's profile, e.g. department, role. See `docs `__ for details.
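When provisioning fails for individual users, a common cause is a ``userName`` that violates the handle constraints above. The following is a small local sanity check (a sketch, not part of Wire or of any SCIM client; the example handles are made up):

```shell
# Check a candidate SCIM userName against Wire's handle constraints:
# only the characters a-z, 0-9, '_', '.', '-' are allowed (no capital letters).
check_handle() {
  printf '%s' "$1" | grep -Eq '^[a-z0-9_.-]+$'
}

check_handle "jane.doe" && echo "jane.doe: ok"
check_handle "Jane Doe" || echo "Jane Doe: invalid (capitals and spaces are rejected)"
```

Running a check like this in your provisioning pipeline, before the SCIM request is sent, turns an opaque API error into an actionable one.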
- -SCIM management in Wire (in Team Management) -^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ - -SCIM security and authentication -'''''''''''''''''''''''''''''''' - -Wire uses a very basic variant of OAuth, where a *bearer token* is presented to the server in a header with all :term:`SCIM` requests. - -You can create such bearer tokens in team management and copy them from there into the dashboard of your SCIM data source. - -Generating a SCIM token -''''''''''''''''''''''' - -In order to be able to send SCIM requests to Wire, we first need to generate a SCIM token. This section explains how to do this. - -Once the token is generated, store it somewhere safe; it will be sent with all subsequent SCIM requests to authenticate them. - -These are the steps to generate a new :term:`SCIM` token, which you will need to provide to your identity provider (:term:`IdP`), along with the target API URL, to enable :term:`SCIM` provisioning. - -* Step 1: Go to https://teams.wire.com/settings (Here replace "wire.com" with your own domain if you have an on-premise installation of Wire). - -.. image:: token-step-01.png - :align: center - -* Step 2: In the left menu, go to "Customization". - -.. image:: token-step-02.png - :align: center - -* Step 3: Go to "Automated User Management (:term:`SCIM`)" and click the "down" arrow to expand - -.. image:: token-step-03.png - :align: center - -* Step 4: Click "Generate token". If your password is requested, enter it. - -.. image:: token-step-04.png - :align: center - -* Step 5: Once the token is generated, copy it into your clipboard and store it somewhere safe (e.g., in the dashboard of your SCIM data source). - -.. image:: token-step-05.png - :align: center - -* Step 6: You're done! You can now view token information, delete the token, or create more tokens should you need them. - -..
image:: token-step-06.png - :align: center - -Tokens are now listed in this :term:`SCIM`-related area of the screen; you can generate up to 8 such tokens. - -Using SCIM via Curl -^^^^^^^^^^^^^^^^^^^ - -You can use the :term:`Curl` command line HTTP tool to access the Wire backend (in particular the ``spar`` service) through the :term:`SCIM` API. - -This can be helpful to write your own tooling to interface with Wire. - -Creating a SCIM token -''''''''''''''''''''' - -Before we can send commands to the :term:`SCIM` API/Spar service, we need to be authenticated. This is done through the creation of a :term:`SCIM` token. - -First, we need a little shell environment. Run the following in your terminal/shell: - -.. code-block:: bash - :linenos: - - export WIRE_BACKEND=https://prod-nginz-https.wire.com - export WIRE_ADMIN=... - export WIRE_PASSWD=... - -Wire's SCIM API currently supports a variant of HTTP basic auth. - -In order to create a token in your team, you need to authenticate using your team admin credentials. - -Your browser or phone client does this behind the scenes; with curl, you do it in plain sight by first obtaining a Wire access token. - -First install the ``jq`` command (https://stedolan.github.io/jq/): - -.. code-block:: bash - - sudo apt install jq - -.. note:: - - If you don't want to install ``jq``, you can just call the ``curl`` command and copy the access token into the shell variable manually. - -Then run: - -.. code-block:: bash - :linenos: - - export BEARER=$(curl -X POST \ - --header 'Content-Type: application/json' \ - --header 'Accept: application/json' \ - -d '{"email":"'"$WIRE_ADMIN"'","password":"'"$WIRE_PASSWD"'"}' \ - $WIRE_BACKEND/login'?persist=false' | jq -r .access_token) - -This token will be good for 15 minutes; after that, just repeat the command above to get a new token. - -.. note:: - SCIM requests are authenticated with a SCIM token, see below. SCIM tokens and Wire tokens are different things.
- - A Wire token is necessary to get a SCIM token. SCIM tokens do not expire, but need to be deleted explicitly. - -You can test that you are logged in with the following command: - -.. code-block:: bash - - curl -X GET --header "Authorization: Bearer $BEARER" $WIRE_BACKEND/self - -Now you are ready to create a SCIM token: - -.. code-block:: bash - :linenos: - - export SCIM_TOKEN_FULL=$(curl -X POST \ - --header "Authorization: Bearer $BEARER" \ - --header 'Content-Type: application/json;charset=utf-8' \ - -d '{ "description": "test '"`date`"'", "password": "'"$WIRE_PASSWD"'" }' \ - $WIRE_BACKEND/scim/auth-tokens) - export SCIM_TOKEN=$(echo $SCIM_TOKEN_FULL | jq -r .token) - export SCIM_TOKEN_ID=$(echo $SCIM_TOKEN_FULL | jq -r .info.id) - -The SCIM token is now contained in the ``SCIM_TOKEN`` environment variable. - -You can look it up again with: - -.. code-block:: bash - :linenos: - - curl -X GET --header "Authorization: Bearer $BEARER" \ - $WIRE_BACKEND/scim/auth-tokens - -And you can delete it with: - -.. code-block:: bash - :linenos: - - curl -X DELETE --header "Authorization: Bearer $BEARER" \ - $WIRE_BACKEND/scim/auth-tokens?id=$SCIM_TOKEN_ID - -Using a SCIM token to Create Read Update and Delete (CRUD) users -'''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''' - -Now that you have your SCIM token, you can use it to talk to the SCIM API to manipulate (create, read, update, delete) users, either individually or in bulk. - -**JSON encoding of SCIM Users** - -In order to manipulate users using commands, you need to specify user data. - -A minimal definition of a user is written in JSON format and looks like this: - -.. code-block:: json - :linenos: - - { - "schemas" : ["urn:ietf:params:scim:schemas:core:2.0:User"], - "externalId" : "nick@example.com", - "userName" : "nick", - "displayName" : "The Nick" - } - -You can store it in a variable using this sort of command: - -.. 
code-block:: bash - :linenos: - - export SCIM_USER='{ - "schemas" : ["urn:ietf:params:scim:schemas:core:2.0:User"], - "externalId" : "nick@example.com", - "userName" : "nick", - "displayName" : "The Nick" - }' - -The ``externalId`` is used to construct a SAML identity. Two cases are -currently supported: - -1. ``externalId`` contains a valid email address. - The SAML ``NameID`` has the form ``me@example.com``. -2. ``externalId`` contains anything that is *not* an email address. - The SAML ``NameID`` has the form ``...``. - -.. note:: - - It is important to configure your SAML provider to use ``nameid-format:emailAddress`` or ``nameid-format:unspecified``. Other nameid formats are not supported at this moment. - - See `FAQ `_ - -We also support custom fields that are used in rich profiles in this form (see: https://github.com/wireapp/wire-server/blob/develop/docs/reference/user/rich-info.md): - -.. code-block:: bash - :linenos: - - export SCIM_USER='{ - "schemas" : ["urn:ietf:params:scim:schemas:core:2.0:User", "urn:wire:scim:schemas:profile:1.0"], - "externalId" : "rnick@example.com", - "userName" : "rnick", - "displayName" : "The Rich Nick", - "urn:wire:scim:schemas:profile:1.0": { - "richInfo": [ - { - "type": "Department", - "value": "Sales & Marketing" - }, - { - "type": "Favorite color", - "value": "Blue" - } - ] - } - }' - -**How to create a user** - -You can create a user using the following command: - -.. code-block:: bash - :linenos: - - export STORED_USER=$(curl -X POST \ - --header "Authorization: Bearer $SCIM_TOKEN" \ - --header 'Content-Type: application/json;charset=utf-8' \ - -d "$SCIM_USER" \ - $WIRE_BACKEND/scim/v2/Users) - export STORED_USER_ID=$(echo $STORED_USER | jq -r .id) - -Note that ``$SCIM_USER`` is in JSON format and is declared before running this command as described in the section above. - -**Get a specific user** - -..
code-block:: bash - :linenos: - - curl -X GET \ - --header "Authorization: Bearer $SCIM_TOKEN" \ - --header 'Content-Type: application/json;charset=utf-8' \ - $WIRE_BACKEND/scim/v2/Users/$STORED_USER_ID - -**Search a specific user** - -SCIM user search is quite flexible. Wire currently only supports lookup by Wire handle or email address. - -Email address (and/or SAML NameID, if /a): - -.. code-block:: bash - :linenos: - - curl -X GET \ - --header "Authorization: Bearer $SCIM_TOKEN" \ - --header 'Content-Type: application/json;charset=utf-8' \ - $WIRE_BACKEND/scim/v2/Users/'?filter=externalId%20eq%20%22me%40example.com%22' - -Wire handle: same request, just replace the query part with - -.. code-block:: bash - - '?filter=userName%20eq%20%22me%22' - -**Update a specific user** - -For each ``PUT`` request, you need to provide the full JSON object. All omitted fields will be set to ``null``. (If you do not have an up-to-date user present, just ``GET`` one right before the ``PUT``.) - -.. code-block:: bash - :linenos: - - export SCIM_USER='{ - "schemas" : ["urn:ietf:params:scim:schemas:core:2.0:User"], - "externalId" : "rnick@example.com", - "userName" : "newnick", - "displayName" : "The New Nick" - }' - -.. code-block:: bash - :linenos: - - curl -X PUT \ - --header "Authorization: Bearer $SCIM_TOKEN" \ - --header 'Content-Type: application/json;charset=utf-8' \ - -d "$SCIM_USER" \ - $WIRE_BACKEND/scim/v2/Users/$STORED_USER_ID - -**Deactivate user** - -It is possible to temporarily deactivate a user (and reactivate them later) by setting their ``active`` property to ``true``/``false`` without affecting their device history. (`active=false` changes the Wire user status to `suspended`.) - -**Delete user** - -..
code-block:: bash - :linenos: - - curl -X DELETE \ - --header "Authorization: Bearer $SCIM_TOKEN" \ - $WIRE_BACKEND/scim/v2/Users/$STORED_USER_ID diff --git a/hack/bin/oauth_test.sh b/hack/bin/oauth_test.sh new file mode 100755 index 0000000000..d2c891c245 --- /dev/null +++ b/hack/bin/oauth_test.sh @@ -0,0 +1,82 @@ +#!/usr/bin/env bash + +set -e + +USAGE="This script tests the OAuth2 flow by creating a client, requesting an authorization code, and +then requesting an access token. It then uses the access token to make a request to /self. + +Create a user first with './create_test_user.sh -n 1 -c'. Then use the user ID to call this script. + +USAGE: $0 + -u : User ID +" + +unset -v USER + +while getopts ":u:" opt; do + case ${opt} in + u) + USER="$OPTARG" + ;; + \?) + echo "$USAGE" 1>&2 + exit 1 + ;; + :) + echo "-$OPTARG" requires an argument 1>&2 + exit 1 + ;; + esac +done +shift $((OPTIND - 1)) + +if [ -z "$USER" ]; then + echo 'missing option -u ' 1>&2 + echo "$USAGE" 1>&2 + exit 1 +fi + +SCOPE="self:read" + +CLIENT=$( + curl -s -X POST localhost:8082/i/oauth/clients \ + -H "Content-Type: application/json" \ + -d '{ + "applicationName":"foobar", + "redirectUrl":"https://example.com" + }' +) + +CLIENT_ID=$(echo "$CLIENT" | jq -r '.clientId') +CLIENT_SECRET=$(echo "$CLIENT" | jq -r '.clientSecret') + +AUTH_CODE=$( + curl -i -s -X POST localhost:8082/oauth/authorization/codes \ + -H 'Z-User: '"$USER" \ + -H "Content-Type: application/json" \ + -d '{ + "clientId": "'"$CLIENT_ID"'", + "scope": "'"$SCOPE"'", + "responseType": "code", + "redirectUri": "https://example.com", + "state": "foobar" + }' | + awk -F ': ' '/^Location/ {print $2}' | awk -F'[=&]' '{print $2}' +) + +ACCESS_TOKEN=$( + curl -s -X POST localhost:8082/oauth/token \ + -H "Content-Type: application/x-www-form-urlencoded" \ + -d 'code='"$AUTH_CODE"'&client_id='"$CLIENT_ID"'&grant_type=authorization_code&redirect_uri=https://example.com&client_secret='"$CLIENT_SECRET" | + jq -r '.accessToken' +) + +echo 
"client id : $CLIENT_ID" +echo "client secret: $CLIENT_SECRET" +echo "scope : $SCOPE" +echo "auth code : $AUTH_CODE" +echo "access token : $ACCESS_TOKEN" + +echo "" +echo "making a request to /self..." +curl -s -H 'Z-OAUTH: Bearer '"$ACCESS_TOKEN" -H "Content-Type: application/json" localhost:8082/self | jq . diff --git a/hack/bin/upload-image.sh b/hack/bin/upload-image.sh index e49eaca08b..a070b8661b 100755 --- a/hack/bin/upload-image.sh +++ b/hack/bin/upload-image.sh @@ -35,29 +35,24 @@ function retry { local maxAttempts=$1 local secondsDelay=1 local attemptCount=1 - local output= shift 1 while [ $attemptCount -le "$maxAttempts" ]; do - output=$("$@") - local status=$? - - if [ $status -eq 0 ]; then + if "$@"; then break - fi - - if [ $attemptCount -lt "$maxAttempts" ]; then - echo "Command [$*] failed after attempt $attemptCount of $maxAttempts. Retrying in $secondsDelay second(s)." >&2 - sleep $secondsDelay - elif [ $attemptCount -eq "$maxAttempts" ]; then - echo "Command [$*] failed after $attemptCount attempt(s)" >&2 - return $status + else + local status=$? + if [ $attemptCount -lt "$maxAttempts" ]; then + echo "Command [$*] failed after attempt $attemptCount of $maxAttempts. Retrying in $secondsDelay second(s)." 
>&2 + sleep $secondsDelay + elif [ $attemptCount -eq "$maxAttempts" ]; then + echo "Command [$*] failed after $attemptCount attempt(s)" >&2 + return $status + fi fi attemptCount=$((attemptCount + 1)) secondsDelay=$((secondsDelay * 2)) done - - echo "$output" } tmp_link_store=$(mktemp -d) diff --git a/hack/helmfile-single.yaml b/hack/helmfile-single.yaml index 3d75ce0d8b..790412bf71 100644 --- a/hack/helmfile-single.yaml +++ b/hack/helmfile-single.yaml @@ -73,3 +73,5 @@ releases: value: {{ .Values.federationDomain }} - name: galley.config.settings.federationDomain value: {{ .Values.federationDomain }} + - name: cargohold.config.settings.federationDomain + value: {{ .Values.federationDomain }} diff --git a/libs/bilge/src/Bilge/Request.hs b/libs/bilge/src/Bilge/Request.hs index 1be626f8f1..aec0cf9c50 100644 --- a/libs/bilge/src/Bilge/Request.hs +++ b/libs/bilge/src/Bilge/Request.hs @@ -28,7 +28,11 @@ module Bilge.Request body, bytes, lbytes, + lbytesChunkedIO, + lbytesRefChunked, + lbytesRefPopper, json, + jsonChunkedIO, content, contentJson, contentProtobuf, @@ -79,7 +83,7 @@ import qualified Data.ByteString.Lazy.Char8 as LC import Data.CaseInsensitive (original) import Data.Id (RequestId (..)) import Imports hiding (intercalate) -import Network.HTTP.Client (Cookie, Request, RequestBody (..)) +import Network.HTTP.Client (Cookie, GivesPopper, Request, RequestBody (..)) import qualified Network.HTTP.Client as Rq import Network.HTTP.Client.Internal (CookieJar (..), brReadSome, throwHttp) import Network.HTTP.Types @@ -191,9 +195,61 @@ bytes = body . RequestBodyBS lbytes :: Lazy.ByteString -> Request -> Request lbytes = body . RequestBodyLBS +-- | Not suitable for @a@ which translates to very large JSON (more than a few megabytes) as the +-- bytestring produced by JSON will get computed and stored as it is in memory +-- in order to compute the @Content-Length@ header. 
For making a request with +-- big JSON objects, please use @lbytesRefChunked@ json :: ToJSON a => a -> Request -> Request json a = contentJson . lbytes (encode a) +-- | Like @lbytesChunkedIO@ but for sending a JSON body +jsonChunkedIO :: (ToJSON a, MonadIO m) => a -> m (Request -> Request) +jsonChunkedIO a = do + (contentJson .) <$> lbytesChunkedIO (encode a) + +-- | Makes requests with @Transfer-Encoding: chunked@ and no @Content-Length@ +-- header. Tries to ensure that the lazy bytestring is garbage collected as each +-- "chunk" of this bytestring is consumed. Note that it is not possible to +-- guarantee garbage collection as something else holding a reference to this +-- bytestring could stop that from happening. +-- +-- A more straightforward function like this will keep the reference to the +-- complete bytestring, which might be against the idea of using chunked +-- encoding: +-- +-- @ +-- lbytesChunked bs = body (RequestBodyStreamChunked $ lbytesPopper bs) +-- lbytesPopper bs needsPopper = do +-- ref <- newIORef $ LC.toChunks bs +-- lbytesRefPopper ref needsPopper +-- @ +-- +-- This is because the closure for @lbytesPopper@ keeps the reference to @bs@ +-- alive. To avoid this, this function allocates an @IORef@ and passes that to +-- @lbytesRefChunked@. +lbytesChunkedIO :: MonadIO m => Lazy.ByteString -> m (Request -> Request) +lbytesChunkedIO bs = do + chunksRef <- newIORef $ Lazy.toChunks bs + pure $ lbytesRefChunked chunksRef + +-- | Takes an @IORef@ to chunks of strict @ByteString@ (perhaps from a lazy +-- @Lazy.ByteString@); this helps the lazy bytestring get garbage collected as it +-- gets consumed. The request made will have @Transfer-Encoding: chunked@ and no +-- @Content-Length@ header. +-- +-- See @lbytesChunkedIO@ for reference usage.
+lbytesRefChunked :: IORef [ByteString] -> Request -> Request +lbytesRefChunked chunksRef = + body (RequestBodyStreamChunked $ lbytesRefPopper chunksRef) + +lbytesRefPopper :: IORef [ByteString] -> GivesPopper () +lbytesRefPopper chunksRef needsPopper = do + let popper = do + atomicModifyIORef chunksRef $ \case + [] -> ([], mempty) + (c : cs) -> (cs, c) + needsPopper popper + accept :: ByteString -> Request -> Request accept = header hAccept diff --git a/libs/extended/default.nix b/libs/extended/default.nix index d51ab0466c..86c634508d 100644 --- a/libs/extended/default.nix +++ b/libs/extended/default.nix @@ -19,6 +19,7 @@ , lib , metrics-wai , optparse-applicative +, resourcet , servant , servant-server , servant-swagger @@ -44,6 +45,7 @@ mkDerivation { imports metrics-wai optparse-applicative + resourcet servant servant-server servant-swagger diff --git a/libs/extended/extended.cabal b/libs/extended/extended.cabal index 9d64c15b6c..b58883b2c1 100644 --- a/libs/extended/extended.cabal +++ b/libs/extended/extended.cabal @@ -20,6 +20,7 @@ library exposed-modules: Options.Applicative.Extended Servant.API.Extended + Servant.API.Extended.RawM System.Logger.Extended other-modules: Paths_extended @@ -81,6 +82,7 @@ library , imports , metrics-wai , optparse-applicative + , resourcet , servant , servant-server , servant-swagger diff --git a/libs/extended/src/Servant/API/Extended/RawM.hs b/libs/extended/src/Servant/API/Extended/RawM.hs new file mode 100644 index 0000000000..9f1e1a6395 --- /dev/null +++ b/libs/extended/src/Servant/API/Extended/RawM.hs @@ -0,0 +1,58 @@ +-- | copy of https://github.com/haskell-servant/servant/pull/1551 while we're waiting for this +-- to be released. this was needed in https://github.com/wireapp/wire-server/pull/2848/, but +-- then in the end it wasn't. we keep it here in the hope that whoever needs it next will +-- have an easier time putting it to work. 
+module Servant.API.Extended.RawM where + +import Control.Monad.Trans.Resource +import Data.Metrics.Servant +import Data.Proxy +import Imports +import Network.Wai +import Servant.API (Raw) +import Servant.Server hiding (respond) +import Servant.Server.Internal.Delayed +import Servant.Server.Internal.RouteResult +import Servant.Server.Internal.Router +import Servant.Swagger + +type ApplicationM m = Request -> (Response -> IO ResponseReceived) -> m ResponseReceived + +-- | Variant of 'Raw' that lets you access the underlying monadic context to process the request. +data RawM deriving (Typeable) + +-- | Just pass the request to the underlying application and serve its response. +-- +-- Example: +-- +-- > type MyApi = "images" :> RawM +-- > +-- > server :: Server MyApi +-- > server = serveDirectory "/var/www/images" +instance HasServer RawM context where + type ServerT RawM m = Request -> (Response -> IO ResponseReceived) -> m ResponseReceived + + route :: + Proxy RawM -> + Context context -> + Delayed env (Request -> (Response -> IO ResponseReceived) -> Handler ResponseReceived) -> + Router env + route _ _ handleDelayed = RawRouter $ \env request respond -> runResourceT $ do + routeResult <- runDelayed handleDelayed env request + let respond' = liftIO . respond + liftIO $ case routeResult of + Route handler -> + runHandler (handler request (respond . Route)) + >>= \case + Left e -> respond' $ FailFatal e + Right a -> pure a + Fail e -> respond' $ Fail e + FailFatal e -> respond' $ FailFatal e + + hoistServerWithContext _ _ f srvM req respond = f (srvM req respond) + +instance HasSwagger RawM where + toSwagger _ = toSwagger (Proxy @Raw) + +instance RoutesToPaths RawM where + getRoutes = [] diff --git a/libs/types-common-aws/default.nix b/libs/types-common-aws/default.nix index 647dd6884d..a296d9c9d8 100644 --- a/libs/types-common-aws/default.nix +++ b/libs/types-common-aws/default.nix @@ -4,6 +4,7 @@ # dependencies are added or removed. 
{ mkDerivation , amazonka +, amazonka-core , amazonka-sqs , base , base64-bytestring @@ -27,6 +28,7 @@ mkDerivation { src = gitignoreSource ./.; libraryHaskellDepends = [ amazonka + amazonka-core amazonka-sqs base base64-bytestring diff --git a/libs/types-common-aws/src/AWS/Util.hs b/libs/types-common-aws/src/AWS/Util.hs index 1eff3fe67e..a2a2a0055c 100644 --- a/libs/types-common-aws/src/AWS/Util.hs +++ b/libs/types-common-aws/src/AWS/Util.hs @@ -18,15 +18,16 @@ module AWS.Util where import qualified Amazonka as AWS +import qualified Amazonka.Data.Time as AWS import Data.Time import Imports readAuthExpiration :: AWS.Env -> IO (Maybe NominalDiffTime) readAuthExpiration env = do authEnv <- - case runIdentity (AWS.envAuth env) of + case runIdentity (AWS.auth env) of AWS.Auth authEnv -> pure authEnv AWS.Ref _ ref -> do readIORef ref now <- getCurrentTime - pure $ (`diffUTCTime` now) . AWS.fromTime <$> AWS._authExpiration authEnv + pure $ (`diffUTCTime` now) . AWS.fromTime <$> AWS.expiration authEnv diff --git a/libs/types-common-aws/types-common-aws.cabal b/libs/types-common-aws/types-common-aws.cabal index 7d8813a2b7..120d78603f 100644 --- a/libs/types-common-aws/types-common-aws.cabal +++ b/libs/types-common-aws/types-common-aws.cabal @@ -75,6 +75,7 @@ library ghc-prof-options: -fprof-auto-exported build-depends: amazonka + , amazonka-core , amazonka-sqs , base >=4 && <5 , base64-bytestring >=1.0 diff --git a/libs/wire-api-federation/src/Wire/API/Federation/API.hs b/libs/wire-api-federation/src/Wire/API/Federation/API.hs index 7d55f99152..7fc6e981b0 100644 --- a/libs/wire-api-federation/src/Wire/API/Federation/API.hs +++ b/libs/wire-api-federation/src/Wire/API/Federation/API.hs @@ -18,8 +18,11 @@ module Wire.API.Federation.API ( FedApi, HasFedEndpoint, + HasUnsafeFedEndpoint, fedClient, fedClientIn, + unsafeFedClientIn, + module Wire.API.MakesFederatedCall, -- * Re-exports Component (..), @@ -35,7 +38,7 @@ import Wire.API.Federation.API.Brig import 
Wire.API.Federation.API.Cargohold import Wire.API.Federation.API.Galley import Wire.API.Federation.Client -import Wire.API.Federation.Component +import Wire.API.MakesFederatedCall import Wire.API.Routes.Named -- Note: this type family being injective means that in most cases there is no need @@ -48,12 +51,17 @@ type instance FedApi 'Brig = BrigApi type instance FedApi 'Cargohold = CargoholdApi -type HasFedEndpoint comp api name = ('Just api ~ LookupEndpoint (FedApi comp) name) +type HasFedEndpoint comp api name = (HasUnsafeFedEndpoint comp api name, CallsFed comp name) + +-- | Like 'HasFedEndpoint', but doesn't propagate a 'CallsFed' constraint. +-- Useful for tests, but unsafe in the sense that incorrect usage will allow +-- you to forget about some federated calls. +type HasUnsafeFedEndpoint comp api name = 'Just api ~ LookupEndpoint (FedApi comp) name -- | Return a client for a named endpoint. fedClient :: forall (comp :: Component) (name :: Symbol) m api. - (HasFedEndpoint comp api name, HasClient m api, m ~ FederatorClient comp) => + (CallsFed comp name, HasFedEndpoint comp api name, HasClient m api, m ~ FederatorClient comp) => Client m api fedClient = clientIn (Proxy @api) (Proxy @m) @@ -62,3 +70,11 @@ fedClientIn :: (HasFedEndpoint comp api name, HasClient m api) => Client m api fedClientIn = clientIn (Proxy @api) (Proxy @m) + +-- | Like 'fedClientIn', but doesn't propagate a 'CallsFed' constraint. Intended +-- to be used in test situations only. +unsafeFedClientIn :: + forall (comp :: Component) (name :: Symbol) m api.
+ (HasUnsafeFedEndpoint comp api name, HasClient m api) => + Client m api +unsafeFedClientIn = clientIn (Proxy @api) (Proxy @m) diff --git a/libs/wire-api-federation/src/Wire/API/Federation/API/Galley.hs b/libs/wire-api-federation/src/Wire/API/Federation/API/Galley.hs index 90fc7de3e3..fb32aa2451 100644 --- a/libs/wire-api-federation/src/Wire/API/Federation/API/Galley.hs +++ b/libs/wire-api-federation/src/Wire/API/Federation/API/Galley.hs @@ -36,6 +36,7 @@ import Wire.API.Error.Galley import Wire.API.Federation.API.Common import Wire.API.Federation.Endpoint import Wire.API.MLS.SubConversation +import Wire.API.MakesFederatedCall import Wire.API.Message import Wire.API.Routes.Public.Galley.Messaging import Wire.API.Util.Aeson (CustomEncoded (..)) @@ -59,21 +60,72 @@ type GalleyApi = -- used by the backend that owns a conversation to inform this backend of -- changes to the conversation :<|> FedEndpoint "on-conversation-updated" ConversationUpdate () - :<|> FedEndpoint "leave-conversation" LeaveConversationRequest LeaveConversationResponse + :<|> FedEndpointWithMods + '[ MakesFederatedCall 'Galley "on-conversation-updated", + MakesFederatedCall 'Galley "on-mls-message-sent", + MakesFederatedCall 'Galley "on-new-remote-conversation" + ] + "leave-conversation" + LeaveConversationRequest + LeaveConversationResponse -- used to notify this backend that a new message has been posted to a -- remote conversation :<|> FedEndpoint "on-message-sent" (RemoteMessage ConvId) () -- used by a remote backend to send a message to a conversation owned by -- this backend - :<|> FedEndpoint "send-message" ProteusMessageSendRequest MessageSendResponse - :<|> FedEndpoint "on-user-deleted-conversations" UserDeletedConversationsNotification EmptyResponse - :<|> FedEndpoint "update-conversation" ConversationUpdateRequest ConversationUpdateResponse + :<|> FedEndpointWithMods + '[ MakesFederatedCall 'Galley "on-message-sent", + MakesFederatedCall 'Brig "get-user-clients" + ] + "send-message" + 
ProteusMessageSendRequest + MessageSendResponse + :<|> FedEndpointWithMods + '[ MakesFederatedCall 'Galley "on-mls-message-sent", + MakesFederatedCall 'Galley "on-conversation-updated", + MakesFederatedCall 'Galley "on-new-remote-conversation" + ] + "on-user-deleted-conversations" + UserDeletedConversationsNotification + EmptyResponse + :<|> FedEndpointWithMods + '[ MakesFederatedCall 'Galley "on-conversation-updated", + MakesFederatedCall 'Galley "on-mls-message-sent", + MakesFederatedCall 'Galley "on-new-remote-conversation" + ] + "update-conversation" + ConversationUpdateRequest + ConversationUpdateResponse :<|> FedEndpoint "mls-welcome" MLSWelcomeRequest MLSWelcomeResponse :<|> FedEndpoint "on-mls-message-sent" RemoteMLSMessage RemoteMLSMessageResponse - :<|> FedEndpoint "send-mls-message" MLSMessageSendRequest MLSMessageResponse - :<|> FedEndpoint "send-mls-commit-bundle" MLSMessageSendRequest MLSMessageResponse + :<|> FedEndpointWithMods + '[ MakesFederatedCall 'Galley "on-conversation-updated", + MakesFederatedCall 'Galley "on-mls-message-sent", + MakesFederatedCall 'Galley "on-new-remote-conversation", + MakesFederatedCall 'Galley "send-mls-message", + MakesFederatedCall 'Brig "get-mls-clients" + ] + "send-mls-message" + MLSMessageSendRequest + MLSMessageResponse + :<|> FedEndpointWithMods + '[ MakesFederatedCall 'Galley "mls-welcome", + MakesFederatedCall 'Galley "on-conversation-updated", + MakesFederatedCall 'Galley "on-mls-message-sent", + MakesFederatedCall 'Galley "on-new-remote-conversation", + MakesFederatedCall 'Galley "send-mls-commit-bundle", + MakesFederatedCall 'Brig "get-mls-clients" + ] + "send-mls-commit-bundle" + MLSMessageSendRequest + MLSMessageResponse :<|> FedEndpoint "query-group-info" GetGroupInfoRequest GetGroupInfoResponse - :<|> FedEndpoint "on-client-removed" ClientRemovedRequest EmptyResponse + :<|> FedEndpointWithMods + '[ MakesFederatedCall 'Galley "on-mls-message-sent" + ] + "on-client-removed" + ClientRemovedRequest + 
EmptyResponse :<|> FedEndpoint "on-typing-indicator-updated" TypingDataUpdateRequest EmptyResponse data TypingDataUpdateRequest = TypingDataUpdateRequest diff --git a/libs/wire-api-federation/src/Wire/API/Federation/Component.hs b/libs/wire-api-federation/src/Wire/API/Federation/Component.hs index 908f3b01c4..73595904f7 100644 --- a/libs/wire-api-federation/src/Wire/API/Federation/Component.hs +++ b/libs/wire-api-federation/src/Wire/API/Federation/Component.hs @@ -15,18 +15,14 @@ -- You should have received a copy of the GNU Affero General Public License along -- with this program. If not, see . -module Wire.API.Federation.Component where +module Wire.API.Federation.Component + ( module Wire.API.Federation.Component, + Component (..), + ) +where import Imports -import Test.QuickCheck (Arbitrary) -import Wire.Arbitrary (GenericUniform (..)) - -data Component - = Brig - | Galley - | Cargohold - deriving (Show, Eq, Generic) - deriving (Arbitrary) via (GenericUniform Component) +import Wire.API.MakesFederatedCall (Component (..)) parseComponent :: Text -> Maybe Component parseComponent "brig" = Just Brig diff --git a/libs/wire-api-federation/src/Wire/API/Federation/Endpoint.hs b/libs/wire-api-federation/src/Wire/API/Federation/Endpoint.hs index cada1b4872..8c6367f249 100644 --- a/libs/wire-api-federation/src/Wire/API/Federation/Endpoint.hs +++ b/libs/wire-api-federation/src/Wire/API/Federation/Endpoint.hs @@ -15,16 +15,17 @@ -- You should have received a copy of the GNU Affero General Public License along -- with this program. If not, see . 
-module Wire.API.Federation.Endpoint where +module Wire.API.Federation.Endpoint + ( ApplyMods, + module Wire.API.Federation.Endpoint, + ) +where import Servant.API +import Wire.API.ApplyMods import Wire.API.Federation.Domain import Wire.API.Routes.Named -type family ApplyMods (mods :: [*]) api where - ApplyMods '[] api = api - ApplyMods (x ': xs) api = x :> ApplyMods xs api - type FedEndpointWithMods (mods :: [*]) name input output = Named name diff --git a/services/galley/src/Galley/Effects/RemoteConversationListStore.hs b/libs/wire-api/src/Wire/API/ApplyMods.hs similarity index 52% rename from services/galley/src/Galley/Effects/RemoteConversationListStore.hs rename to libs/wire-api/src/Wire/API/ApplyMods.hs index 54a076818a..ad65fdb28e 100644 --- a/services/galley/src/Galley/Effects/RemoteConversationListStore.hs +++ b/libs/wire-api/src/Wire/API/ApplyMods.hs @@ -1,5 +1,3 @@ -{-# LANGUAGE TemplateHaskell #-} - -- This file is part of the Wire Server implementation. -- -- Copyright (C) 2022 Wire Swiss GmbH @@ -17,29 +15,10 @@ -- You should have received a copy of the GNU Affero General Public License along -- with this program. If not, see . 
-module Galley.Effects.RemoteConversationListStore - ( RemoteConversationListStore (..), - listRemoteConversations, - getRemoteConversationStatus, - ) -where - -import Data.Id -import Data.Qualified -import Galley.Types.Conversations.Members -import Imports -import Polysemy -import Wire.Sem.Paging +module Wire.API.ApplyMods where -data RemoteConversationListStore p m a where - ListRemoteConversations :: - UserId -> - Maybe (PagingState p (Remote ConvId)) -> - Int32 -> - RemoteConversationListStore p m (Page p (Remote ConvId)) - GetRemoteConversationStatus :: - UserId -> - [Remote ConvId] -> - RemoteConversationListStore p m (Map (Remote ConvId) MemberStatus) +import Servant.API -makeSem ''RemoteConversationListStore +type family ApplyMods (mods :: [*]) api where + ApplyMods '[] api = api + ApplyMods (x ': xs) api = x :> ApplyMods xs api diff --git a/libs/wire-api/src/Wire/API/Conversation.hs b/libs/wire-api/src/Wire/API/Conversation.hs index e89ae46107..08982bc472 100644 --- a/libs/wire-api/src/Wire/API/Conversation.hs +++ b/libs/wire-api/src/Wire/API/Conversation.hs @@ -25,6 +25,7 @@ module Wire.API.Conversation ConversationMetadata (..), defConversationMetadata, Conversation (..), + conversationSchema, cnvType, cnvCreator, cnvAccess, @@ -162,6 +163,10 @@ defConversationMetadata creator = cnvmReceiptMode = Nothing } +accessRolesVersionedSchema :: Version -> ObjectSchema SwaggerDoc (Set AccessRole) +accessRolesVersionedSchema v = + if v > V2 then accessRolesSchema else accessRolesSchemaV2 + accessRolesSchema :: ObjectSchema SwaggerDoc (Set AccessRole) accessRolesSchema = field "access_role" (set schema) @@ -269,15 +274,15 @@ cnvReceiptMode :: Conversation -> Maybe ReceiptMode cnvReceiptMode = cnvmReceiptMode . 
cnvMetadata instance ToSchema Conversation where - schema = conversationSchema accessRolesSchema + schema = conversationSchema V3 instance ToSchema (Versioned 'V2 Conversation) where - schema = Versioned <$> unVersioned .= conversationSchema accessRolesSchemaV2 + schema = Versioned <$> unVersioned .= conversationSchema V2 conversationSchema :: - ObjectSchema SwaggerDoc (Set AccessRole) -> + Version -> ValueSchema NamedSwaggerDoc Conversation -conversationSchema sch = +conversationSchema v = objectWithDocModifier "Conversation" (description ?~ "A conversation object as returned from the server") @@ -285,7 +290,7 @@ conversationSchema sch = <$> cnvQualifiedId .= field "qualified_id" schema <* (qUnqualified . cnvQualifiedId) .= optional (field "id" (deprecatedSchema "qualified_id" schema)) - <*> cnvMetadata .= conversationMetadataObjectSchema sch + <*> cnvMetadata .= conversationMetadataObjectSchema (accessRolesVersionedSchema v) <*> cnvMembers .= field "members" schema <*> cnvProtocol .= protocolSchema @@ -371,7 +376,7 @@ instance ToSchema (Versioned 'V2 (ConversationList Conversation)) where schema = Versioned <$> unVersioned - .= conversationListSchema (conversationSchema accessRolesSchemaV2) + .= conversationListSchema (conversationSchema V2) conversationListSchema :: forall a. @@ -433,24 +438,24 @@ data ConversationsResponse = ConversationsResponse deriving (FromJSON, ToJSON, S.ToSchema) via Schema ConversationsResponse conversationsResponseSchema :: - ObjectSchema SwaggerDoc (Set AccessRole) -> + Version -> ValueSchema NamedSwaggerDoc ConversationsResponse -conversationsResponseSchema sch = +conversationsResponseSchema v = let notFoundDoc = description ?~ "These conversations either don't exist or are deleted." 
failedDoc = description ?~ "The server failed to fetch these conversations, most likely due to network issues while contacting a remote server" in objectWithDocModifier "ConversationsResponse" (description ?~ "Response object for getting metadata of a list of conversations") $ ConversationsResponse - <$> crFound .= field "found" (array (conversationSchema sch)) + <$> crFound .= field "found" (array (conversationSchema v)) <*> crNotFound .= fieldWithDocModifier "not_found" notFoundDoc (array schema) <*> crFailed .= fieldWithDocModifier "failed" failedDoc (array schema) instance ToSchema ConversationsResponse where - schema = conversationsResponseSchema accessRolesSchema + schema = conversationsResponseSchema V3 instance ToSchema (Versioned 'V2 ConversationsResponse) where - schema = Versioned <$> unVersioned .= conversationsResponseSchema accessRolesSchemaV2 + schema = Versioned <$> unVersioned .= conversationsResponseSchema V2 -------------------------------------------------------------------------------- -- Conversation properties @@ -889,14 +894,11 @@ conversationAccessDataSchema v = object ("ConversationAccessData" <> suffix) $ ConversationAccessData <$> cupAccess .= field "access" (set schema) - <*> cupAccessRoles .= sch + <*> cupAccessRoles .= accessRolesVersionedSchema v where suffix | v == maxBound = "" | otherwise = toUrlPiece v - sch = case v of - V2 -> accessRolesSchemaV2 - _ -> accessRolesSchema instance ToSchema ConversationAccessData where schema = conversationAccessDataSchema V3 diff --git a/libs/wire-api/src/Wire/API/Event/Conversation.hs b/libs/wire-api/src/Wire/API/Event/Conversation.hs index 81ff6df00b..326fce09cf 100644 --- a/libs/wire-api/src/Wire/API/Event/Conversation.hs +++ b/libs/wire-api/src/Wire/API/Event/Conversation.hs @@ -385,7 +385,7 @@ taggedEventDataSchema = (unnamed (conversationAccessDataSchema V2)) ConvCodeUpdate -> tag _EdConvCodeUpdate (unnamed schema) ConvConnect -> tag _EdConnect (unnamed schema) - ConvCreate -> tag 
_EdConversation (unnamed schema) + ConvCreate -> tag _EdConversation (unnamed (conversationSchema V2)) ConvMessageTimerUpdate -> tag _EdConvMessageTimerUpdate (unnamed schema) ConvReceiptModeUpdate -> tag _EdConvReceiptModeUpdate (unnamed schema) OtrMessageAdd -> tag _EdOtrMessage (unnamed schema) diff --git a/libs/wire-api/src/Wire/API/MakesFederatedCall.hs b/libs/wire-api/src/Wire/API/MakesFederatedCall.hs new file mode 100644 index 0000000000..a6abb32dc0 --- /dev/null +++ b/libs/wire-api/src/Wire/API/MakesFederatedCall.hs @@ -0,0 +1,143 @@ +-- This file is part of the Wire Server implementation. +-- +-- Copyright (C) 2022 Wire Swiss GmbH +-- +-- This program is free software: you can redistribute it and/or modify it under +-- the terms of the GNU Affero General Public License as published by the Free +-- Software Foundation, either version 3 of the License, or (at your option) any +-- later version. +-- +-- This program is distributed in the hope that it will be useful, but WITHOUT +-- ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS +-- FOR A PARTICULAR PURPOSE. See the GNU Affero General Public License for more +-- details. +-- +-- You should have received a copy of the GNU Affero General Public License along +-- with this program. If not, see . 
+{-# LANGUAGE OverloadedLists #-} + +module Wire.API.MakesFederatedCall + ( CallsFed, + MakesFederatedCall, + Component (..), + callsFed, + unsafeCallsFed, + ) +where + +import Data.Aeson (Value (..)) +import Data.Constraint +import Data.Metrics.Servant +import Data.Proxy +import Data.Swagger.Operation (addExtensions) +import qualified Data.Text as T +import GHC.TypeLits +import Imports +import Servant.API +import Servant.Client +import Servant.Server +import Servant.Swagger +import Test.QuickCheck (Arbitrary) +import Unsafe.Coerce (unsafeCoerce) +import Wire.Arbitrary (GenericUniform (..)) + +data Component + = Brig + | Galley + | Cargohold + deriving (Show, Eq, Generic) + deriving (Arbitrary) via (GenericUniform Component) + +-- | A typeclass corresponding to calls to federated services. This class has +-- no methods, and exists only to automatically propagate information up to +-- servant. +-- +-- The only way to discharge this constraint is via 'callsFed', which should be +-- invoked for each federated call when connecting handlers to the server +-- definition. +class CallsFed (comp :: Component) (name :: Symbol) + +-- | A typeclass with the same layout as 'CallsFed', which exists only so we +-- can discharge 'CallsFeds' constraints by unsafely coercing this one. +class Nullary + +instance Nullary + +-- | Construct a dictionary for 'CallsFed'. +synthesizeCallsFed :: forall (comp :: Component) (name :: Symbol). Dict (CallsFed comp name) +synthesizeCallsFed = unsafeCoerce $ Dict @Nullary + +-- | Servant combinator for tracking calls to federated calls. Annotating API +-- endpoints with 'MakesFederatedCall' is the only way to eliminate 'CallsFed' +-- constraints on handlers. +data MakesFederatedCall (comp :: Component) (name :: Symbol) + +instance (HasServer api ctx) => HasServer (MakesFederatedCall comp name :> api :: *) ctx where + -- \| This should have type @CallsFed comp name => ServerT api m@, but GHC + -- complains loudly thinking this is a polytype. 
We need to introduce the + -- 'CallsFed' constraint so that we can eliminate it via + -- 'synthesizeCallsFed', which otherwise is too-high rank for GHC to notice + -- we've solved our constraint. + type ServerT (MakesFederatedCall comp name :> api) m = Dict (CallsFed comp name) -> ServerT api m + route _ ctx f = route (Proxy @api) ctx $ fmap ($ synthesizeCallsFed @comp @name) f + hoistServerWithContext _ ctx f s = hoistServerWithContext (Proxy @api) ctx f . s + +instance HasLink api => HasLink (MakesFederatedCall comp name :> api :: *) where + type MkLink (MakesFederatedCall comp name :> api) x = MkLink api x + toLink f _ l = toLink f (Proxy @api) l + +instance RoutesToPaths api => RoutesToPaths (MakesFederatedCall comp name :> api :: *) where + getRoutes = getRoutes @api + +-- | Get a symbol representation of our component. +type family ShowComponent (x :: Component) :: Symbol where + ShowComponent 'Brig = "brig" + ShowComponent 'Galley = "galley" + ShowComponent 'Cargohold = "cargohold" + +-- | 'MakesFederatedCall' annotates the swagger documentation with an extension +-- tag @x-wire-makes-federated-calls-to@. +instance (HasSwagger api, KnownSymbol name, KnownSymbol (ShowComponent comp)) => HasSwagger (MakesFederatedCall comp name :> api :: *) where + toSwagger _ = + toSwagger (Proxy @api) + & addExtensions + mergeJSONArray + [ ( "wire-makes-federated-call-to", + Array + [ Array + [ String $ T.pack $ symbolVal $ Proxy @(ShowComponent comp), + String $ T.pack $ symbolVal $ Proxy @name + ] + ] + ) + ] + +mergeJSONArray :: Value -> Value -> Value +mergeJSONArray (Array x) (Array y) = Array $ x <> y +mergeJSONArray _ _ = error "impossible! 
bug in construction of federated calls JSON" + +instance HasClient m api => HasClient m (MakesFederatedCall comp name :> api :: *) where + type Client m (MakesFederatedCall comp name :> api) = Client m api + clientWithRoute p _ = clientWithRoute p $ Proxy @api + hoistClientMonad p _ f c = hoistClientMonad p (Proxy @api) f c + +-- | Type class to automatically lift a function of the form @(c1, c2, ...) => +-- r@ into @Dict c1 -> Dict c2 -> ... -> r@. +class SolveCallsFed c r a where + -- | Safely discharge a 'CallsFed' constraint. Intended to be used when + -- connecting your handler to the server router. + callsFed :: (c => r) -> a + +instance (c ~ ((k, d) :: Constraint), SolveCallsFed d r a) => SolveCallsFed c r (Dict k -> a) where + callsFed f Dict = callsFed @d @r @a f + +instance {-# OVERLAPPABLE #-} (c ~ (() :: Constraint), r ~ a) => SolveCallsFed c r a where + callsFed f = f + +-- | Unsafely discharge a 'CallsFed' constraint. Necessary for interacting with +-- wai-routes. +-- +-- This is unsafe in the sense that it will drop the 'CallsFed' constraint, and +-- thus might mean a federated call gets forgotten in the documentation. +unsafeCallsFed :: forall (comp :: Component) (name :: Symbol) r. (CallsFed comp name => r) -> r +unsafeCallsFed f = withDict (synthesizeCallsFed @comp @name) f diff --git a/libs/wire-api/src/Wire/API/OAuth.hs b/libs/wire-api/src/Wire/API/OAuth.hs index 3a30041161..e7100200bb 100644 --- a/libs/wire-api/src/Wire/API/OAuth.hs +++ b/libs/wire-api/src/Wire/API/OAuth.hs @@ -218,7 +218,10 @@ instance ToSchema NewOAuthAuthCode where <*> noacState .= field "state" schema newtype OAuthAuthCode = OAuthAuthCode {unOAuthAuthCode :: AsciiBase16} - deriving (Show, Eq, Generic) + deriving (Eq, Generic) + +instance Show OAuthAuthCode where + show _ = "" instance ToSchema OAuthAuthCode where schema = (toText . unOAuthAuthCode) .= parsedText "OAuthAuthCode" (fmap OAuthAuthCode . 
validateBase16) @@ -313,35 +316,42 @@ instance ToSchema OAuthAccessTokenType where [ element "Bearer" OAuthAccessTokenTypeBearer ] -newtype OAuthAccessToken = OAuthAccessToken {unOAuthAccessToken :: SignedJWT} +data TokenTag = Access | Refresh + +newtype OAuthToken a = OAuthToken {unOAuthToken :: SignedJWT} deriving (Show, Eq, Generic) - deriving (A.ToJSON, A.FromJSON, S.ToSchema) via Schema OAuthAccessToken + deriving (A.ToJSON, A.FromJSON, S.ToSchema) via Schema (OAuthToken a) -instance ToByteString OAuthAccessToken where - builder = builder . encodeCompact . unOAuthAccessToken +instance ToByteString (OAuthToken a) where + builder = builder . encodeCompact . unOAuthToken -instance FromByteString OAuthAccessToken where +instance FromByteString (OAuthToken a) where parser = do t <- parser @Text case decodeCompact (cs (TE.encodeUtf8 t)) of Left (err :: JWTError) -> fail $ show err - Right jwt -> pure $ OAuthAccessToken jwt + Right jwt -> pure $ OAuthToken jwt -instance ToHttpApiData OAuthAccessToken where +instance ToHttpApiData (OAuthToken a) where toHeader = toByteString' toUrlPiece = cs . toHeader -instance FromHttpApiData OAuthAccessToken where +instance FromHttpApiData (OAuthToken a) where parseHeader = either (Left . cs) pure . runParser parser . cs parseUrlPiece = parseHeader . cs -instance ToSchema OAuthAccessToken where +instance ToSchema (OAuthToken a) where schema = (TE.decodeUtf8 . toByteString') .= withParser schema (either fail pure . runParser parser . 
cs) +type OAuthAccessToken = OAuthToken 'Access + +type OAuthRefreshToken = OAuthToken 'Refresh + data OAuthAccessTokenResponse = OAuthAccessTokenResponse { oatAccessToken :: OAuthAccessToken, oatTokenType :: OAuthAccessTokenType, - oatExpiresIn :: NominalDiffTime + oatExpiresIn :: NominalDiffTime, + oatRefreshToken :: OAuthRefreshToken } deriving (Eq, Show, Generic) deriving (A.ToJSON, A.FromJSON, S.ToSchema) via (Schema OAuthAccessTokenResponse) @@ -353,39 +363,40 @@ instance ToSchema OAuthAccessTokenResponse where <$> oatAccessToken .= field "accessToken" schema <*> oatTokenType .= field "tokenType" schema <*> oatExpiresIn .= field "expiresIn" (fromIntegral <$> roundDiffTime .= schema) + <*> oatRefreshToken .= field "refreshToken" schema where roundDiffTime :: NominalDiffTime -> Int32 roundDiffTime = round -data OAuthClaimSet = OAuthClaimSet {jwtClaims :: ClaimsSet, scope :: OAuthScopes} +data OAuthsClaimSet = OAuthsClaimSet {jwtClaims :: ClaimsSet, scope :: OAuthScopes} deriving (Eq, Show, Generic) -instance HasClaimsSet OAuthClaimSet where +instance HasClaimsSet OAuthsClaimSet where claimsSet f s = fmap (\a' -> s {jwtClaims = a'}) (f (jwtClaims s)) -instance A.FromJSON OAuthClaimSet where - parseJSON = A.withObject "OAuthClaimSet" $ \o -> - OAuthClaimSet +instance A.FromJSON OAuthsClaimSet where + parseJSON = A.withObject "OAuthsClaimSet" $ \o -> + OAuthsClaimSet <$> A.parseJSON (A.Object o) <*> o A..: "scope" -instance A.ToJSON OAuthClaimSet where +instance A.ToJSON OAuthsClaimSet where toJSON s = ins "scope" (scope s) (A.toJSON (jwtClaims s)) where ins k v (A.Object o) = A.Object $ M.insert k (A.toJSON v) o ins _ _ a = a -csUserId :: OAuthClaimSet -> Maybe UserId +csUserId :: OAuthsClaimSet -> Maybe UserId csUserId = view claimSub >=> preview string >=> either (const Nothing) pure . 
parseIdFromText -hasScope :: OAuthScope -> OAuthClaimSet -> Bool +hasScope :: OAuthScope -> OAuthsClaimSet -> Bool hasScope s claims = s `Set.member` unOAuthScopes (scope claims) -verify :: JWK -> SignedJWT -> IO (Either JWTError OAuthClaimSet) +verify :: JWK -> SignedJWT -> IO (Either JWTError OAuthsClaimSet) verify k jwt = runJOSE $ do let audCheck = const True verifyJWT (defaultJWTValidationSettings audCheck) k jwt diff --git a/libs/wire-api/src/Wire/API/Routes/Internal/Brig.hs b/libs/wire-api/src/Wire/API/Routes/Internal/Brig.hs index 3192f9ca00..c42b16e029 100644 --- a/libs/wire-api/src/Wire/API/Routes/Internal/Brig.hs +++ b/libs/wire-api/src/Wire/API/Routes/Internal/Brig.hs @@ -54,6 +54,7 @@ import Wire.API.Error import Wire.API.Error.Brig import Wire.API.MLS.Credential import Wire.API.MLS.KeyPackage +import Wire.API.MakesFederatedCall import Wire.API.Routes.Internal.Brig.Connection import Wire.API.Routes.Internal.Brig.EJPD import qualified Wire.API.Routes.Internal.Galley.TeamFeatureNoConfigMulti as Multi @@ -151,6 +152,7 @@ type AccountAPI = Named "createUserNoVerify" ( "users" + :> MakesFederatedCall 'Brig "on-user-deleted-connections" :> ReqBody '[Servant.JSON] NewUser :> MultiVerb 'POST '[Servant.JSON] RegisterInternalResponses (Either RegisterError SelfProfile) ) @@ -158,6 +160,7 @@ type AccountAPI = "createUserNoVerifySpar" ( "users" :> "spar" + :> MakesFederatedCall 'Brig "on-user-deleted-connections" :> ReqBody '[Servant.JSON] NewUserSpar :> MultiVerb 'POST '[Servant.JSON] CreateUserSparInternalResponses (Either CreateUserSparError SelfProfile) ) @@ -366,12 +369,14 @@ type AuthAPI = Named "legalhold-login" ( "legalhold-login" + :> MakesFederatedCall 'Brig "on-user-deleted-connections" :> ReqBody '[JSON] LegalHoldLogin :> MultiVerb1 'POST '[JSON] TokenResponse ) :<|> Named "sso-login" ( "sso-login" + :> MakesFederatedCall 'Brig "on-user-deleted-connections" :> ReqBody '[JSON] SsoLogin :> QueryParam' [Optional, Strict] "persist" Bool :> MultiVerb1 'POST 
'[JSON] TokenResponse diff --git a/libs/wire-api/src/Wire/API/Routes/Public.hs b/libs/wire-api/src/Wire/API/Routes/Public.hs index 8bec0f4df3..b34896871f 100644 --- a/libs/wire-api/src/Wire/API/Routes/Public.hs +++ b/libs/wire-api/src/Wire/API/Routes/Public.hs @@ -310,7 +310,7 @@ checkZAuthOrOAuth oauthScope mJwk req = maybe tryOAuth (pure . Right) tryZUserAu verifyOAuthToken :: (Bearer OAuthAccessToken, JWK) -> DelayedIO (Either ServerError UserId) verifyOAuthToken (token, key) = do - verifiedOrError <- mapLeft (invalidOAuthToken . cs . show) <$> liftIO (verify key (unOAuthAccessToken . unBearer $ token)) + verifiedOrError <- mapLeft (invalidOAuthToken . cs . show) <$> liftIO (verify key (unOAuthToken . unBearer $ token)) pure $ verifiedOrError >>= \claimSet -> if hasScope oauthScope claimSet diff --git a/libs/wire-api/src/Wire/API/Routes/Public/Brig.hs b/libs/wire-api/src/Wire/API/Routes/Public/Brig.hs index 6e8ac94b75..830465f8a1 100644 --- a/libs/wire-api/src/Wire/API/Routes/Public/Brig.hs +++ b/libs/wire-api/src/Wire/API/Routes/Public/Brig.hs @@ -48,6 +48,7 @@ import Wire.API.Error.Brig import Wire.API.Error.Empty import Wire.API.MLS.KeyPackage import Wire.API.MLS.Servant +import Wire.API.MakesFederatedCall import Wire.API.OAuth import Wire.API.Properties import Wire.API.Routes.Bearer @@ -141,6 +142,7 @@ type UserAPI = Named "get-user-unqualified" ( Summary "Get a user by UserId" + :> MakesFederatedCall 'Brig "get-users-by-ids" :> Until 'V2 :> ZUser :> "users" @@ -152,6 +154,7 @@ type UserAPI = Named "get-user-qualified" ( Summary "Get a user by Domain and UserId" + :> MakesFederatedCall 'Brig "get-users-by-ids" :> ZUser :> "users" :> QualifiedCaptureUserId "uid" @@ -172,6 +175,8 @@ type UserAPI = "get-handle-info-unqualified" ( Summary "(deprecated, use /search/contacts) Get information on a user handle" :> Until 'V2 + :> MakesFederatedCall 'Brig "get-user-by-handle" + :> MakesFederatedCall 'Brig "get-users-by-ids" :> ZUser :> "users" :> "handles" @@ -188,6 
+193,8 @@ type UserAPI = "get-user-by-handle-qualified" ( Summary "(deprecated, use /search/contacts) Get information on a user handle" :> Until 'V2 + :> MakesFederatedCall 'Brig "get-user-by-handle" + :> MakesFederatedCall 'Brig "get-users-by-ids" :> ZUser :> "users" :> "by-handle" @@ -207,6 +214,7 @@ type UserAPI = ( Summary "List users (deprecated)" :> Until 'V2 :> Description "The 'ids' and 'handles' parameters are mutually exclusive." + :> MakesFederatedCall 'Brig "get-users-by-ids" :> ZUser :> "users" :> QueryParam' [Optional, Strict, Description "User IDs of users to fetch"] "ids" (CommaSeparatedList UserId) @@ -219,6 +227,7 @@ type UserAPI = "list-users-by-ids-or-handles" ( Summary "List users" :> Description "The 'qualified_ids' and 'qualified_handles' parameters are mutually exclusive." + :> MakesFederatedCall 'Brig "get-users-by-ids" :> ZUser :> "list-users" :> ReqBody '[JSON] ListUsersQuery @@ -275,6 +284,7 @@ type SelfAPI = :> CanThrow 'MissingAuth :> CanThrow 'DeleteCodePending :> CanThrow 'OwnerDeletingSelf + :> MakesFederatedCall 'Brig "on-user-deleted-connections" :> ZUser :> "self" :> ReqBody '[JSON] DeleteUser @@ -286,6 +296,7 @@ type SelfAPI = Named "put-self" ( Summary "Update your profile." + :> MakesFederatedCall 'Brig "on-user-deleted-connections" :> ZUser :> ZConn :> "self" @@ -311,6 +322,7 @@ type SelfAPI = :> Description "Your phone number can only be removed if you also have an \ \email address and a password." + :> MakesFederatedCall 'Brig "on-user-deleted-connections" :> ZUser :> ZConn :> "self" @@ -326,6 +338,7 @@ type SelfAPI = :> Description "Your email address can only be removed if you also have a \ \phone number." + :> MakesFederatedCall 'Brig "on-user-deleted-connections" :> ZUser :> ZConn :> "self" @@ -358,6 +371,7 @@ type SelfAPI = :<|> Named "change-locale" ( Summary "Change your locale." 
+ :> MakesFederatedCall 'Brig "on-user-deleted-connections" :> ZUser :> ZConn :> "self" @@ -368,6 +382,7 @@ type SelfAPI = :<|> Named "change-handle" ( Summary "Change your handle." + :> MakesFederatedCall 'Brig "on-user-deleted-connections" :> ZUser :> ZConn :> "self" @@ -419,6 +434,7 @@ type AccountAPI = "If the environment where the registration takes \ \place is private and a registered email address or phone \ \number is not whitelisted, a 403 error is returned." + :> MakesFederatedCall 'Brig "on-user-deleted-connections" :> "register" :> ReqBody '[JSON] NewUserPublic :> MultiVerb 'POST '[JSON] RegisterResponses (Either RegisterError RegisterSuccess) @@ -429,6 +445,7 @@ type AccountAPI = :<|> Named "verify-delete" ( Summary "Verify account deletion with a code." + :> MakesFederatedCall 'Brig "on-user-deleted-connections" :> CanThrow 'InvalidCode :> "delete" :> ReqBody '[JSON] VerifyDeleteUser @@ -441,6 +458,7 @@ type AccountAPI = "get-activate" ( Summary "Activate (i.e. confirm) an email address or phone number." :> Description "See also 'POST /activate' which has a larger feature set." + :> MakesFederatedCall 'Brig "on-user-deleted-connections" :> CanThrow 'UserKeyExists :> CanThrow 'InvalidActivationCodeWrongUser :> CanThrow 'InvalidActivationCodeWrongCode @@ -466,6 +484,7 @@ type AccountAPI = :> Description "Activation only succeeds once and the number of \ \failed attempts for a valid key is limited." + :> MakesFederatedCall 'Brig "on-user-deleted-connections" :> CanThrow 'UserKeyExists :> CanThrow 'InvalidActivationCodeWrongUser :> CanThrow 'InvalidActivationCodeWrongCode @@ -579,6 +598,7 @@ type PrekeyAPI = "get-users-prekeys-client-unqualified" ( Summary "(deprecated) Get a prekey for a specific client of a user." :> Until 'V2 + :> MakesFederatedCall 'Brig "claim-prekey" :> ZUser :> "users" :> CaptureUserId "uid" @@ -589,6 +609,7 @@ type PrekeyAPI = :<|> Named "get-users-prekeys-client-qualified" ( Summary "Get a prekey for a specific client of a user." 
+ :> MakesFederatedCall 'Brig "claim-prekey" :> ZUser :> "users" :> QualifiedCaptureUserId "uid" @@ -600,6 +621,7 @@ type PrekeyAPI = "get-users-prekey-bundle-unqualified" ( Summary "(deprecated) Get a prekey for each client of a user." :> Until 'V2 + :> MakesFederatedCall 'Brig "claim-prekey-bundle" :> ZUser :> "users" :> CaptureUserId "uid" @@ -609,6 +631,7 @@ type PrekeyAPI = :<|> Named "get-users-prekey-bundle-qualified" ( Summary "Get a prekey for each client of a user." + :> MakesFederatedCall 'Brig "claim-prekey-bundle" :> ZUser :> "users" :> QualifiedCaptureUserId "uid" @@ -634,6 +657,7 @@ type PrekeyAPI = "Given a map of domain to (map of user IDs to client IDs) return a \ \prekey for each one. You can't request information for more users than \ \maximum conversation size." + :> MakesFederatedCall 'Brig "claim-multi-prekey-bundle" :> ZUser :> "users" :> "list-prekeys" @@ -650,6 +674,7 @@ type UserClientAPI = Named "add-client" ( Summary "Register a new client" + :> MakesFederatedCall 'Brig "on-user-deleted-connections" :> CanThrow 'TooManyClients :> CanThrow 'MissingAuth :> CanThrow 'MalformedPrekeys @@ -785,6 +810,7 @@ type ClientAPI = "get-user-clients-unqualified" ( Summary "Get all of a user's clients" :> Until 'V2 + :> MakesFederatedCall 'Brig "get-user-clients" :> "users" :> CaptureUserId "uid" :> "clients" @@ -793,6 +819,7 @@ type ClientAPI = :<|> Named "get-user-clients-qualified" ( Summary "Get all of a user's clients" + :> MakesFederatedCall 'Brig "get-user-clients" :> "users" :> QualifiedCaptureUserId "uid" :> "clients" @@ -802,6 +829,7 @@ type ClientAPI = "get-user-client-unqualified" ( Summary "Get a specific client of a user" :> Until 'V2 + :> MakesFederatedCall 'Brig "get-user-clients" :> "users" :> CaptureUserId "uid" :> "clients" @@ -811,6 +839,7 @@ type ClientAPI = :<|> Named "get-user-client-qualified" ( Summary "Get a specific client of a user" + :> MakesFederatedCall 'Brig "get-user-clients" :> "users" :> QualifiedCaptureUserId "uid" 
:> "clients" @@ -821,6 +850,7 @@ type ClientAPI = "list-clients-bulk" ( Summary "List all clients for a set of user ids" :> Until 'V2 + :> MakesFederatedCall 'Brig "get-user-clients" :> ZUser :> "users" :> "list-clients" @@ -831,6 +861,7 @@ type ClientAPI = "list-clients-bulk-v2" ( Summary "List all clients for a set of user ids" :> Until 'V2 + :> MakesFederatedCall 'Brig "get-user-clients" :> ZUser :> "users" :> "list-clients" @@ -842,6 +873,7 @@ type ClientAPI = "list-clients-bulk@v2" ( Summary "List all clients for a set of user ids" :> From 'V2 + :> MakesFederatedCall 'Brig "get-user-clients" :> ZUser :> "users" :> "list-clients" @@ -862,6 +894,7 @@ type ConnectionAPI = "create-connection-unqualified" ( Summary "Create a connection to another user" :> Until 'V2 + :> MakesFederatedCall 'Brig "send-connection-action" :> CanThrow 'MissingLegalholdConsent :> CanThrow 'InvalidUser :> CanThrow 'ConnectionLimitReached @@ -884,6 +917,7 @@ type ConnectionAPI = :<|> Named "create-connection" ( Summary "Create a connection to another user" + :> MakesFederatedCall 'Brig "send-connection-action" :> CanThrow 'MissingLegalholdConsent :> CanThrow 'InvalidUser :> CanThrow 'ConnectionLimitReached @@ -962,6 +996,7 @@ type ConnectionAPI = "update-connection-unqualified" ( Summary "Update a connection to another user" :> Until 'V2 + :> MakesFederatedCall 'Brig "send-connection-action" :> CanThrow 'MissingLegalholdConsent :> CanThrow 'InvalidUser :> CanThrow 'ConnectionLimitReached @@ -989,6 +1024,7 @@ type ConnectionAPI = Named "update-connection" ( Summary "Update a connection to another user" + :> MakesFederatedCall 'Brig "send-connection-action" :> CanThrow 'MissingLegalholdConsent :> CanThrow 'InvalidUser :> CanThrow 'ConnectionLimitReached @@ -1009,6 +1045,8 @@ type ConnectionAPI = :<|> Named "search-contacts" ( Summary "Search for users" + :> MakesFederatedCall 'Brig "get-users-by-ids" + :> MakesFederatedCall 'Brig "search-users" :> ZUser :> "search" :> "contacts" @@ -1087,6 
+1125,7 @@ type MLSKeyPackageAPI = ( "self" :> Summary "Upload a fresh batch of key packages" :> Description "The request body should be a json object containing a list of base64-encoded key packages." + :> ZLocalUser :> CanThrow 'MLSProtocolError :> CanThrow 'MLSIdentityMismatch :> CaptureClientId "client" @@ -1097,6 +1136,8 @@ type MLSKeyPackageAPI = "mls-key-packages-claim" ( "claim" :> Summary "Claim one key package for each client of the given user" + :> MakesFederatedCall 'Brig "claim-key-packages" + :> ZLocalUser :> QualifiedCaptureUserId "user" :> QueryParam' [ Optional, @@ -1110,6 +1151,7 @@ type MLSKeyPackageAPI = :<|> Named "mls-key-packages-count" ( "self" + :> ZLocalUser :> CaptureClientId "client" :> "count" :> Summary "Return the number of unused key packages for the given client" @@ -1178,7 +1220,7 @@ type SearchAPI = (SearchResult TeamContact) ) -type MLSAPI = LiftNamed (ZLocalUser :> "mls" :> MLSKeyPackageAPI) +type MLSAPI = LiftNamed ("mls" :> MLSKeyPackageAPI) type AuthAPI = Named @@ -1190,6 +1232,7 @@ type AuthAPI = \ Every other combination is invalid.\ \ Access tokens can be given as query parameter or authorisation\ \ header, with the latter being preferred." 
+ :> MakesFederatedCall 'Brig "on-user-deleted-connections" :> QueryParam "client_id" ClientId :> Cookies '["zuid" ::: SomeUserToken] :> Bearer SomeAccessToken @@ -1220,6 +1263,7 @@ type AuthAPI = ( "login" :> Summary "Authenticate a user to obtain a cookie and first access token" :> Description "Logins are throttled at the server's discretion" + :> MakesFederatedCall 'Brig "on-user-deleted-connections" :> ReqBody '[JSON] Login :> QueryParam' [ Optional, diff --git a/libs/wire-api/src/Wire/API/Routes/Public/Cargohold.hs b/libs/wire-api/src/Wire/API/Routes/Public/Cargohold.hs index ea98f7f0ca..f31683711f 100644 --- a/libs/wire-api/src/Wire/API/Routes/Public/Cargohold.hs +++ b/libs/wire-api/src/Wire/API/Routes/Public/Cargohold.hs @@ -30,6 +30,7 @@ import URI.ByteString import Wire.API.Asset import Wire.API.Error import Wire.API.Error.Cargohold +import Wire.API.MakesFederatedCall import Wire.API.Routes.AssetBody import Wire.API.Routes.MultiVerb import Wire.API.Routes.Public @@ -169,6 +170,8 @@ type QualifiedAPI = :> Description "**Note**: local assets result in a redirect, \ \while remote assets are streamed directly." + :> MakesFederatedCall 'Cargohold "get-asset" + :> MakesFederatedCall 'Cargohold "stream-asset" :> ZLocalUser :> "assets" :> "v4" @@ -276,6 +279,8 @@ type MainAPI = :> Description "**Note**: local assets result in a redirect, \ \while remote assets are streamed directly." 
+        :> MakesFederatedCall 'Cargohold "get-asset"
+        :> MakesFederatedCall 'Cargohold "stream-asset"
         :> ZLocalUser
         :> "assets"
         :> QualifiedCapture "key" AssetKey
diff --git a/libs/wire-api/src/Wire/API/Routes/Public/Galley/Bot.hs b/libs/wire-api/src/Wire/API/Routes/Public/Galley/Bot.hs
index fddc356beb..2c4752fda4 100644
--- a/libs/wire-api/src/Wire/API/Routes/Public/Galley/Bot.hs
+++ b/libs/wire-api/src/Wire/API/Routes/Public/Galley/Bot.hs
@@ -21,6 +21,7 @@ import Servant hiding (WithStatus)
 import Servant.Swagger.Internal.Orphans ()
 import Wire.API.Error
 import Wire.API.Error.Galley
+import Wire.API.MakesFederatedCall
 import Wire.API.Message
 import Wire.API.Routes.MultiVerb
 import Wire.API.Routes.Named
@@ -30,7 +31,9 @@ import Wire.API.Routes.Public.Galley.Messaging
 type BotAPI =
   Named
     "post-bot-message-unqualified"
-    ( ZBot
+    ( MakesFederatedCall 'Galley "on-message-sent"
+        :> MakesFederatedCall 'Brig "get-user-clients"
+        :> ZBot
         :> ZConversation
         :> CanThrow 'ConvNotFound
         :> "bot"
diff --git a/libs/wire-api/src/Wire/API/Routes/Public/Galley/Conversation.hs b/libs/wire-api/src/Wire/API/Routes/Public/Galley/Conversation.hs
index 023963c96c..3c877fe475 100644
--- a/libs/wire-api/src/Wire/API/Routes/Public/Galley/Conversation.hs
+++ b/libs/wire-api/src/Wire/API/Routes/Public/Galley/Conversation.hs
@@ -32,6 +32,7 @@ import Wire.API.Error.Galley
 import Wire.API.Event.Conversation
 import Wire.API.MLS.PublicGroupState
 import Wire.API.MLS.Servant
+import Wire.API.MakesFederatedCall
 import Wire.API.Routes.MultiVerb
 import Wire.API.Routes.Named
 import Wire.API.Routes.Public
@@ -124,6 +125,7 @@ type ConversationAPI =
     :<|> Named
            "get-conversation"
            ( Summary "Get a conversation by ID"
+               :> MakesFederatedCall 'Galley "get-conversations"
               :> CanThrow 'ConvNotFound
               :> CanThrow 'ConvAccessDenied
               :> ZLocalUser
@@ -145,6 +147,7 @@ type ConversationAPI =
     :<|> Named
            "get-group-info"
            ( Summary "Get MLS group information"
+               :> MakesFederatedCall 'Galley "query-group-info"
               :> CanThrow 'ConvNotFound
               :> CanThrow 'MLSMissingGroupInfo
               :> CanThrow 'MLSNotEnabled
@@ -251,6 +254,7 @@ type ConversationAPI =
     :<|> Named
            "list-conversations@v1"
            ( Summary "Get conversation metadata for a list of conversation ids"
+               :> MakesFederatedCall 'Galley "get-conversations"
               :> Until 'V2
               :> ZLocalUser
               :> "conversations"
@@ -262,6 +266,7 @@ type ConversationAPI =
     :<|> Named
            "list-conversations@v2"
            ( Summary "Get conversation metadata for a list of conversation ids"
+               :> MakesFederatedCall 'Galley "get-conversations"
               :> From 'V2
               :> Until 'V3
               :> ZLocalUser
@@ -281,6 +286,7 @@ type ConversationAPI =
     :<|> Named
            "list-conversations"
            ( Summary "Get conversation metadata for a list of conversation ids"
+               :> MakesFederatedCall 'Galley "get-conversations"
               :> From 'V3
               :> ZLocalUser
               :> "conversations"
@@ -308,6 +314,7 @@ type ConversationAPI =
     :<|> Named
            "create-group-conversation@v2"
            ( Summary "Create a new conversation"
+               :> MakesFederatedCall 'Galley "on-conversation-created"
               :> Until 'V3
               :> CanThrow 'ConvAccessDenied
               :> CanThrow 'MLSNonEmptyMemberList
@@ -326,6 +333,7 @@ type ConversationAPI =
     :<|> Named
            "create-group-conversation"
            ( Summary "Create a new conversation"
+               :> MakesFederatedCall 'Galley "on-conversation-created"
               :> From 'V3
               :> CanThrow 'ConvAccessDenied
               :> CanThrow 'MLSNonEmptyMemberList
@@ -381,6 +389,7 @@ type ConversationAPI =
     :<|> Named
            "create-one-to-one-conversation@v2"
            ( Summary "Create a 1:1 conversation"
+               :> MakesFederatedCall 'Galley "on-conversation-created"
               :> Until 'V3
               :> CanThrow 'ConvAccessDenied
               :> CanThrow 'InvalidOperation
@@ -401,6 +410,7 @@ type ConversationAPI =
     :<|> Named
            "create-one-to-one-conversation"
            ( Summary "Create a 1:1 conversation"
+               :> MakesFederatedCall 'Galley "on-conversation-created"
               :> From 'V3
               :> CanThrow 'ConvAccessDenied
               :> CanThrow 'InvalidOperation
@@ -423,6 +433,9 @@ type ConversationAPI =
     :<|> Named
            "add-members-to-conversation-unqualified"
            ( Summary "Add members to an existing conversation (deprecated)"
+               :> MakesFederatedCall 'Galley "on-conversation-updated"
+               :> MakesFederatedCall 'Galley "on-mls-message-sent"
+               :> MakesFederatedCall 'Galley "on-new-remote-conversation"
               :> Until 'V2
               :> CanThrow ('ActionDenied 'AddConversationMember)
               :> CanThrow ('ActionDenied 'LeaveConversation)
@@ -444,6 +457,9 @@ type ConversationAPI =
     :<|> Named
            "add-members-to-conversation-unqualified2"
            ( Summary "Add qualified members to an existing conversation."
+               :> MakesFederatedCall 'Galley "on-conversation-updated"
+               :> MakesFederatedCall 'Galley "on-mls-message-sent"
+               :> MakesFederatedCall 'Galley "on-new-remote-conversation"
               :> Until 'V2
               :> CanThrow ('ActionDenied 'AddConversationMember)
               :> CanThrow ('ActionDenied 'LeaveConversation)
@@ -466,6 +482,9 @@ type ConversationAPI =
     :<|> Named
            "add-members-to-conversation"
            ( Summary "Add qualified members to an existing conversation."
+               :> MakesFederatedCall 'Galley "on-conversation-updated"
+               :> MakesFederatedCall 'Galley "on-mls-message-sent"
+               :> MakesFederatedCall 'Galley "on-new-remote-conversation"
               :> From 'V2
               :> CanThrow ('ActionDenied 'AddConversationMember)
               :> CanThrow ('ActionDenied 'LeaveConversation)
@@ -489,6 +508,8 @@ type ConversationAPI =
     :<|> Named
            "join-conversation-by-id-unqualified"
            ( Summary "Join a conversation by its ID (if link access enabled)"
+               :> MakesFederatedCall 'Galley "on-conversation-updated"
+               :> MakesFederatedCall 'Galley "on-new-remote-conversation"
               :> CanThrow 'ConvAccessDenied
               :> CanThrow 'ConvNotFound
               :> CanThrow 'InvalidOperation
@@ -509,6 +530,8 @@ type ConversationAPI =
              "Join a conversation using a reusable code.\
              \If the guest links team feature is disabled, this will fail with 409 GuestLinksDisabled.\
              \Note that this is currently inconsistent (for backwards compatibility reasons) with `POST /conversations/code-check` which responds with 404 CodeNotFound if guest links are disabled."
+               :> MakesFederatedCall 'Galley "on-conversation-updated"
+               :> MakesFederatedCall 'Galley "on-new-remote-conversation"
               :> CanThrow 'CodeNotFound
               :> CanThrow 'ConvAccessDenied
               :> CanThrow 'ConvNotFound
@@ -620,6 +643,7 @@ type ConversationAPI =
     :<|> Named
            "member-typing-qualified"
            ( Summary "Sending typing notifications"
+               :> MakesFederatedCall 'Galley "on-typing-indicator-updated"
               :> CanThrow 'ConvNotFound
               :> ZLocalUser
               :> ZConn
@@ -634,6 +658,10 @@ type ConversationAPI =
     :<|> Named
            "remove-member-unqualified"
            ( Summary "Remove a member from a conversation (deprecated)"
+               :> MakesFederatedCall 'Galley "leave-conversation"
+               :> MakesFederatedCall 'Galley "on-conversation-updated"
+               :> MakesFederatedCall 'Galley "on-mls-message-sent"
+               :> MakesFederatedCall 'Galley "on-new-remote-conversation"
               :> Until 'V2
               :> ZLocalUser
               :> ZConn
@@ -651,6 +679,10 @@ type ConversationAPI =
     :<|> Named
            "remove-member"
            ( Summary "Remove a member from a conversation"
+               :> MakesFederatedCall 'Galley "leave-conversation"
+               :> MakesFederatedCall 'Galley "on-conversation-updated"
+               :> MakesFederatedCall 'Galley "on-mls-message-sent"
+               :> MakesFederatedCall 'Galley "on-new-remote-conversation"
               :> ZLocalUser
               :> ZConn
               :> CanThrow ('ActionDenied 'RemoveConversationMember)
@@ -668,6 +700,9 @@ type ConversationAPI =
            "update-other-member-unqualified"
            ( Summary "Update membership of the specified user (deprecated)"
               :> Description "Use `PUT /conversations/:cnv_domain/:cnv/members/:usr_domain/:usr` instead"
+               :> MakesFederatedCall 'Galley "on-conversation-updated"
+               :> MakesFederatedCall 'Galley "on-mls-message-sent"
+               :> MakesFederatedCall 'Galley "on-new-remote-conversation"
               :> ZLocalUser
               :> ZConn
               :> CanThrow 'ConvNotFound
@@ -690,6 +725,9 @@ type ConversationAPI =
            "update-other-member"
            ( Summary "Update membership of the specified user"
               :> Description "**Note**: at least one field has to be provided."
+               :> MakesFederatedCall 'Galley "on-conversation-updated"
+               :> MakesFederatedCall 'Galley "on-mls-message-sent"
+               :> MakesFederatedCall 'Galley "on-new-remote-conversation"
               :> ZLocalUser
               :> ZConn
               :> CanThrow 'ConvNotFound
@@ -714,6 +752,9 @@ type ConversationAPI =
            "update-conversation-name-deprecated"
            ( Summary "Update conversation name (deprecated)"
               :> Description "Use `/conversations/:domain/:conv/name` instead."
+               :> MakesFederatedCall 'Galley "on-conversation-updated"
+               :> MakesFederatedCall 'Galley "on-mls-message-sent"
+               :> MakesFederatedCall 'Galley "on-new-remote-conversation"
               :> CanThrow ('ActionDenied 'ModifyConversationName)
               :> CanThrow 'ConvNotFound
               :> CanThrow 'InvalidOperation
@@ -732,6 +773,9 @@ type ConversationAPI =
            "update-conversation-name-unqualified"
            ( Summary "Update conversation name (deprecated)"
               :> Description "Use `/conversations/:domain/:conv/name` instead."
+               :> MakesFederatedCall 'Galley "on-conversation-updated"
+               :> MakesFederatedCall 'Galley "on-mls-message-sent"
+               :> MakesFederatedCall 'Galley "on-new-remote-conversation"
               :> CanThrow ('ActionDenied 'ModifyConversationName)
               :> CanThrow 'ConvNotFound
               :> CanThrow 'InvalidOperation
@@ -750,6 +794,9 @@ type ConversationAPI =
     :<|> Named
            "update-conversation-name"
            ( Summary "Update conversation name"
+               :> MakesFederatedCall 'Galley "on-conversation-updated"
+               :> MakesFederatedCall 'Galley "on-mls-message-sent"
+               :> MakesFederatedCall 'Galley "on-new-remote-conversation"
               :> CanThrow ('ActionDenied 'ModifyConversationName)
               :> CanThrow 'ConvNotFound
               :> CanThrow 'InvalidOperation
@@ -771,6 +818,9 @@ type ConversationAPI =
            "update-conversation-message-timer-unqualified"
            ( Summary "Update the message timer for a conversation (deprecated)"
               :> Description "Use `/conversations/:domain/:cnv/message-timer` instead."
+               :> MakesFederatedCall 'Galley "on-conversation-updated"
+               :> MakesFederatedCall 'Galley "on-mls-message-sent"
+               :> MakesFederatedCall 'Galley "on-new-remote-conversation"
               :> ZLocalUser
               :> ZConn
               :> CanThrow ('ActionDenied 'ModifyConversationMessageTimer)
@@ -790,6 +840,9 @@ type ConversationAPI =
     :<|> Named
            "update-conversation-message-timer"
            ( Summary "Update the message timer for a conversation"
+               :> MakesFederatedCall 'Galley "on-conversation-updated"
+               :> MakesFederatedCall 'Galley "on-mls-message-sent"
+               :> MakesFederatedCall 'Galley "on-new-remote-conversation"
               :> ZLocalUser
               :> ZConn
               :> CanThrow ('ActionDenied 'ModifyConversationMessageTimer)
@@ -812,6 +865,10 @@ type ConversationAPI =
            "update-conversation-receipt-mode-unqualified"
            ( Summary "Update receipt mode for a conversation (deprecated)"
               :> Description "Use `PUT /conversations/:domain/:cnv/receipt-mode` instead."
+               :> MakesFederatedCall 'Galley "on-conversation-updated"
+               :> MakesFederatedCall 'Galley "on-mls-message-sent"
+               :> MakesFederatedCall 'Galley "on-new-remote-conversation"
+               :> MakesFederatedCall 'Galley "update-conversation"
               :> ZLocalUser
               :> ZConn
               :> CanThrow ('ActionDenied 'ModifyConversationReceiptMode)
@@ -831,6 +888,10 @@ type ConversationAPI =
     :<|> Named
            "update-conversation-receipt-mode"
            ( Summary "Update receipt mode for a conversation"
+               :> MakesFederatedCall 'Galley "on-conversation-updated"
+               :> MakesFederatedCall 'Galley "on-mls-message-sent"
+               :> MakesFederatedCall 'Galley "on-new-remote-conversation"
+               :> MakesFederatedCall 'Galley "update-conversation"
               :> ZLocalUser
               :> ZConn
               :> CanThrow ('ActionDenied 'ModifyConversationReceiptMode)
@@ -853,6 +914,9 @@ type ConversationAPI =
     :<|> Named
            "update-conversation-access-unqualified"
            ( Summary "Update access modes for a conversation (deprecated)"
+               :> MakesFederatedCall 'Galley "on-conversation-updated"
+               :> MakesFederatedCall 'Galley "on-mls-message-sent"
+               :> MakesFederatedCall 'Galley "on-new-remote-conversation"
               :> Until 'V3
               :> Description "Use PUT `/conversations/:domain/:cnv/access` instead."
               :> ZLocalUser
@@ -876,6 +940,9 @@ type ConversationAPI =
     :<|> Named
            "update-conversation-access@v2"
            ( Summary "Update access modes for a conversation"
+               :> MakesFederatedCall 'Galley "on-conversation-updated"
+               :> MakesFederatedCall 'Galley "on-mls-message-sent"
+               :> MakesFederatedCall 'Galley "on-new-remote-conversation"
               :> Until 'V3
               :> ZLocalUser
               :> ZConn
@@ -898,6 +965,9 @@ type ConversationAPI =
     :<|> Named
            "update-conversation-access"
            ( Summary "Update access modes for a conversation"
+               :> MakesFederatedCall 'Galley "on-conversation-updated"
+               :> MakesFederatedCall 'Galley "on-mls-message-sent"
+               :> MakesFederatedCall 'Galley "on-new-remote-conversation"
               :> From 'V3
               :> ZLocalUser
               :> ZConn
diff --git a/libs/wire-api/src/Wire/API/Routes/Public/Galley/Feature.hs b/libs/wire-api/src/Wire/API/Routes/Public/Galley/Feature.hs
index f52fd7b183..853884e30f 100644
--- a/libs/wire-api/src/Wire/API/Routes/Public/Galley/Feature.hs
+++ b/libs/wire-api/src/Wire/API/Routes/Public/Galley/Feature.hs
@@ -21,9 +21,11 @@ import Data.Id
 import GHC.TypeLits
 import Servant hiding (WithStatus)
 import Servant.Swagger.Internal.Orphans ()
+import Wire.API.ApplyMods
 import Wire.API.Conversation.Role
 import Wire.API.Error
 import Wire.API.Error.Galley
+import Wire.API.MakesFederatedCall
 import Wire.API.Routes.MultiVerb
 import Wire.API.Routes.Named
 import Wire.API.Routes.Public
@@ -35,6 +37,10 @@ type FeatureAPI =
   FeatureStatusGet SSOConfig
     :<|> FeatureStatusGet LegalholdConfig
     :<|> FeatureStatusPut
+           '[ MakesFederatedCall 'Galley "on-conversation-updated",
+              MakesFederatedCall 'Galley "on-mls-message-sent",
+              MakesFederatedCall 'Galley "on-new-remote-conversation"
+            ]
           '( 'ActionDenied 'RemoveConversationMember,
              '( AuthenticationError,
                 '( 'CannotEnableLegalHoldServiceLargeTeam,
@@ -52,7 +58,7 @@ type FeatureAPI =
            )
           LegalholdConfig
     :<|> FeatureStatusGet SearchVisibilityAvailableConfig
-    :<|> FeatureStatusPut '() SearchVisibilityAvailableConfig
+    :<|> FeatureStatusPut '[] '() SearchVisibilityAvailableConfig
     :<|> FeatureStatusDeprecatedGet "This endpoint is potentially used by the old Android client. It is not used by iOS, team management, or webapp as of June 2022" SearchVisibilityAvailableConfig
     :<|> FeatureStatusDeprecatedPut "This endpoint is potentially used by the old Android client. It is not used by iOS, team management, or webapp as of June 2022" SearchVisibilityAvailableConfig
     :<|> SearchVisibilityGet
@@ -62,23 +68,23 @@ type FeatureAPI =
     :<|> FeatureStatusGet DigitalSignaturesConfig
     :<|> FeatureStatusDeprecatedGet "The usage of this endpoint was removed in iOS in version 3.101. It is potentially used by the old Android client. It is not used by team management, or webapp as of June 2022" DigitalSignaturesConfig
     :<|> FeatureStatusGet AppLockConfig
-    :<|> FeatureStatusPut '() AppLockConfig
+    :<|> FeatureStatusPut '[] '() AppLockConfig
     :<|> FeatureStatusGet FileSharingConfig
-    :<|> FeatureStatusPut '() FileSharingConfig
+    :<|> FeatureStatusPut '[] '() FileSharingConfig
     :<|> FeatureStatusGet ClassifiedDomainsConfig
     :<|> FeatureStatusGet ConferenceCallingConfig
     :<|> FeatureStatusGet SelfDeletingMessagesConfig
-    :<|> FeatureStatusPut '() SelfDeletingMessagesConfig
+    :<|> FeatureStatusPut '[] '() SelfDeletingMessagesConfig
     :<|> FeatureStatusGet GuestLinksConfig
-    :<|> FeatureStatusPut '() GuestLinksConfig
+    :<|> FeatureStatusPut '[] '() GuestLinksConfig
     :<|> FeatureStatusGet SndFactorPasswordChallengeConfig
-    :<|> FeatureStatusPut '() SndFactorPasswordChallengeConfig
+    :<|> FeatureStatusPut '[] '() SndFactorPasswordChallengeConfig
     :<|> FeatureStatusGet MLSConfig
-    :<|> FeatureStatusPut '() MLSConfig
+    :<|> FeatureStatusPut '[] '() MLSConfig
     :<|> FeatureStatusGet ExposeInvitationURLsToTeamAdminConfig
-    :<|> FeatureStatusPut '() ExposeInvitationURLsToTeamAdminConfig
+    :<|> FeatureStatusPut '[] '() ExposeInvitationURLsToTeamAdminConfig
     :<|> FeatureStatusGet SearchVisibilityInboundConfig
-    :<|> FeatureStatusPut '() SearchVisibilityInboundConfig
+    :<|> FeatureStatusPut '[] '() SearchVisibilityInboundConfig
     :<|> AllFeatureConfigsUserGet
     :<|> AllFeatureConfigsTeamGet
     :<|> FeatureConfigDeprecatedGet "The usage of this endpoint was removed in iOS in version 3.101. It is not used by team management, or webapp, and is potentially used by the old Android client as of June 2022" LegalholdConfig
@@ -100,10 +106,10 @@ type FeatureStatusGet f =
     '("get", f)
     (ZUser :> FeatureStatusBaseGet f)

-type FeatureStatusPut errs f =
+type FeatureStatusPut segs errs f =
   Named
     '("put", f)
-    (ZUser :> FeatureStatusBasePutPublic errs f)
+    (ApplyMods segs (ZUser :> FeatureStatusBasePutPublic errs f))

 type FeatureStatusDeprecatedGet d f =
   Named
diff --git a/libs/wire-api/src/Wire/API/Routes/Public/Galley/LegalHold.hs b/libs/wire-api/src/Wire/API/Routes/Public/Galley/LegalHold.hs
index 0c1ae5b2f1..82318d9213 100644
--- a/libs/wire-api/src/Wire/API/Routes/Public/Galley/LegalHold.hs
+++ b/libs/wire-api/src/Wire/API/Routes/Public/Galley/LegalHold.hs
@@ -25,6 +25,7 @@ import Servant.Swagger.Internal.Orphans ()
 import Wire.API.Conversation.Role
 import Wire.API.Error
 import Wire.API.Error.Galley
+import Wire.API.MakesFederatedCall
 import Wire.API.Routes.MultiVerb
 import Wire.API.Routes.Named
 import Wire.API.Routes.Public
@@ -62,6 +63,9 @@ type LegalHoldAPI =
     :<|> Named
            "delete-legal-hold-settings"
            ( Summary "Delete legal hold service settings"
+               :> MakesFederatedCall 'Galley "on-conversation-updated"
+               :> MakesFederatedCall 'Galley "on-mls-message-sent"
+               :> MakesFederatedCall 'Galley "on-new-remote-conversation"
               :> CanThrow AuthenticationError
               :> CanThrow OperationDenied
               :> CanThrow 'NotATeamMember
@@ -98,6 +102,9 @@ type LegalHoldAPI =
     :<|> Named
            "consent-to-legal-hold"
            ( Summary "Consent to legal hold"
+               :> MakesFederatedCall 'Galley "on-conversation-updated"
+               :> MakesFederatedCall 'Galley "on-mls-message-sent"
+               :> MakesFederatedCall 'Galley "on-new-remote-conversation"
               :> CanThrow ('ActionDenied 'RemoveConversationMember)
               :> CanThrow 'InvalidOperation
               :> CanThrow 'TeamMemberNotFound
@@ -113,6 +120,9 @@ type LegalHoldAPI =
     :<|> Named
            "request-legal-hold-device"
            ( Summary "Request legal hold device"
+               :> MakesFederatedCall 'Galley "on-conversation-updated"
+               :> MakesFederatedCall 'Galley "on-mls-message-sent"
+               :> MakesFederatedCall 'Galley "on-new-remote-conversation"
               :> CanThrow ('ActionDenied 'RemoveConversationMember)
               :> CanThrow 'NotATeamMember
               :> CanThrow OperationDenied
@@ -141,6 +151,9 @@ type LegalHoldAPI =
     :<|> Named
            "disable-legal-hold-for-user"
            ( Summary "Disable legal hold for user"
+               :> MakesFederatedCall 'Galley "on-conversation-updated"
+               :> MakesFederatedCall 'Galley "on-mls-message-sent"
+               :> MakesFederatedCall 'Galley "on-new-remote-conversation"
               :> CanThrow AuthenticationError
               :> CanThrow ('ActionDenied 'RemoveConversationMember)
               :> CanThrow 'NotATeamMember
@@ -167,6 +180,9 @@ type LegalHoldAPI =
     :<|> Named
            "approve-legal-hold-device"
            ( Summary "Approve legal hold device"
+               :> MakesFederatedCall 'Galley "on-conversation-updated"
+               :> MakesFederatedCall 'Galley "on-mls-message-sent"
+               :> MakesFederatedCall 'Galley "on-new-remote-conversation"
               :> CanThrow AuthenticationError
               :> CanThrow 'AccessDenied
               :> CanThrow ('ActionDenied 'RemoveConversationMember)
diff --git a/libs/wire-api/src/Wire/API/Routes/Public/Galley/MLS.hs b/libs/wire-api/src/Wire/API/Routes/Public/Galley/MLS.hs
index 09dbc3c77d..2d6a25e5b0 100644
--- a/libs/wire-api/src/Wire/API/Routes/Public/Galley/MLS.hs
+++ b/libs/wire-api/src/Wire/API/Routes/Public/Galley/MLS.hs
@@ -28,6 +28,7 @@ import Wire.API.MLS.Message
 import Wire.API.MLS.Serialisation
 import Wire.API.MLS.Servant
 import Wire.API.MLS.Welcome
+import Wire.API.MakesFederatedCall
 import Wire.API.Routes.MultiVerb
 import Wire.API.Routes.Named
 import Wire.API.Routes.Public
@@ -37,9 +38,11 @@ type MLSMessagingAPI =
   Named
     "mls-welcome-message"
     ( Summary "Post an MLS welcome message"
+        :> MakesFederatedCall 'Galley "mls-welcome"
        :> CanThrow 'MLSKeyPackageRefNotFound
        :> CanThrow 'MLSNotEnabled
        :> "welcome"
+        :> ZLocalUser
        :> ZConn
        :> ReqBody '[MLS] (RawMLS Welcome)
        :> MultiVerb1 'POST '[JSON] (RespondEmpty 201 "Welcome message sent")
@@ -47,6 +50,11 @@ type MLSMessagingAPI =
     :<|> Named
            "mls-message-v1"
            ( Summary "Post an MLS message"
+               :> MakesFederatedCall 'Galley "on-mls-message-sent"
+               :> MakesFederatedCall 'Galley "send-mls-message"
+               :> MakesFederatedCall 'Galley "on-conversation-updated"
+               :> MakesFederatedCall 'Galley "on-new-remote-conversation"
+               :> MakesFederatedCall 'Brig "get-mls-clients"
               :> Until 'V2
               :> CanThrow 'ConvAccessDenied
               :> CanThrow 'ConvMemberNotFound
@@ -68,6 +76,7 @@ type MLSMessagingAPI =
               :> CanThrow 'MissingLegalholdConsent
               :> CanThrow MLSProposalFailure
               :> "messages"
+               :> ZLocalUser
               :> ZOptClient
               :> ZConn
               :> ReqBody '[MLS] (RawMLS SomeMessage)
@@ -76,6 +85,11 @@ type MLSMessagingAPI =
     :<|> Named
            "mls-message"
            ( Summary "Post an MLS message"
+               :> MakesFederatedCall 'Galley "on-mls-message-sent"
+               :> MakesFederatedCall 'Galley "send-mls-message"
+               :> MakesFederatedCall 'Galley "on-conversation-updated"
+               :> MakesFederatedCall 'Galley "on-new-remote-conversation"
+               :> MakesFederatedCall 'Brig "get-mls-clients"
               :> From 'V2
               :> CanThrow 'ConvAccessDenied
               :> CanThrow 'ConvMemberNotFound
@@ -97,6 +111,7 @@ type MLSMessagingAPI =
               :> CanThrow 'MissingLegalholdConsent
               :> CanThrow MLSProposalFailure
               :> "messages"
+               :> ZLocalUser
               :> ZOptClient
               :> ZConn
               :> ReqBody '[MLS] (RawMLS SomeMessage)
@@ -105,6 +120,12 @@ type MLSMessagingAPI =
     :<|> Named
            "mls-commit-bundle"
            ( Summary "Post a MLS CommitBundle"
+               :> MakesFederatedCall 'Galley "on-mls-message-sent"
+               :> MakesFederatedCall 'Galley "mls-welcome"
+               :> MakesFederatedCall 'Galley "send-mls-commit-bundle"
+               :> MakesFederatedCall 'Galley "on-conversation-updated"
+               :> MakesFederatedCall 'Galley "on-new-remote-conversation"
+               :> MakesFederatedCall 'Brig "get-mls-clients"
               :> From 'V3
               :> CanThrow 'ConvAccessDenied
               :> CanThrow 'ConvMemberNotFound
@@ -127,6 +148,7 @@ type MLSMessagingAPI =
               :> CanThrow 'MissingLegalholdConsent
               :> CanThrow MLSProposalFailure
               :> "commit-bundles"
+               :> ZLocalUser
               :> ZOptClient
               :> ZConn
               :> ReqBody '[CommitBundleMimeType] CommitBundle
@@ -137,7 +159,8 @@ type MLSMessagingAPI =
            ( Summary "Get public keys used by the backend to sign external proposals"
               :> CanThrow 'MLSNotEnabled
               :> "public-keys"
+               :> ZLocalUser
               :> MultiVerb1 'GET '[JSON] (Respond 200 "Public keys" MLSPublicKeys)
            )

-type MLSAPI = LiftNamed (ZLocalUser :> "mls" :> MLSMessagingAPI)
+type MLSAPI = LiftNamed ("mls" :> MLSMessagingAPI)
diff --git a/libs/wire-api/src/Wire/API/Routes/Public/Galley/Messaging.hs b/libs/wire-api/src/Wire/API/Routes/Public/Galley/Messaging.hs
index 1e982f96e6..eb2f408dd5 100644
--- a/libs/wire-api/src/Wire/API/Routes/Public/Galley/Messaging.hs
+++ b/libs/wire-api/src/Wire/API/Routes/Public/Galley/Messaging.hs
@@ -26,6 +26,7 @@ import Servant.Swagger.Internal.Orphans ()
 import Wire.API.Error
 import qualified Wire.API.Error.Brig as BrigError
 import Wire.API.Error.Galley
+import Wire.API.MakesFederatedCall
 import Wire.API.Message
 import Wire.API.Routes.MultiVerb
 import Wire.API.Routes.Named
@@ -38,6 +39,8 @@ type MessagingAPI =
     "post-otr-message-unqualified"
     ( Summary "Post an encrypted message to a conversation (accepts JSON or Protobuf)"
        :> Description PostOtrDescriptionUnqualified
+        :> MakesFederatedCall 'Galley "on-message-sent"
+        :> MakesFederatedCall 'Brig "get-user-clients"
        :> ZLocalUser
        :> ZConn
        :> "conversations"
@@ -78,6 +81,9 @@ type MessagingAPI =
     "post-proteus-message"
     ( Summary "Post an encrypted message to a conversation (accepts only Protobuf)"
        :> Description PostOtrDescription
+        :> MakesFederatedCall 'Brig "get-user-clients"
+        :> MakesFederatedCall 'Galley "on-message-sent"
+        :> MakesFederatedCall 'Galley "send-message"
        :> ZLocalUser
        :> ZConn
        :> "conversations"
diff --git a/libs/wire-api/src/Wire/API/Routes/Public/Galley/TeamConversation.hs b/libs/wire-api/src/Wire/API/Routes/Public/Galley/TeamConversation.hs
index ce00269f8a..76753f48f2 100644
--- a/libs/wire-api/src/Wire/API/Routes/Public/Galley/TeamConversation.hs
+++ b/libs/wire-api/src/Wire/API/Routes/Public/Galley/TeamConversation.hs
@@ -23,6 +23,7 @@ import Servant.Swagger.Internal.Orphans ()
 import Wire.API.Conversation.Role
 import Wire.API.Error
 import Wire.API.Error.Galley
+import Wire.API.MakesFederatedCall
 import Wire.API.Routes.MultiVerb
 import Wire.API.Routes.Named
 import Wire.API.Routes.Public
@@ -67,6 +68,9 @@ type TeamConversationAPI =
     :<|> Named
            "delete-team-conversation"
            ( Summary "Remove a team conversation"
+               :> MakesFederatedCall 'Galley "on-conversation-updated"
+               :> MakesFederatedCall 'Galley "on-mls-message-sent"
+               :> MakesFederatedCall 'Galley "on-new-remote-conversation"
               :> CanThrow ('ActionDenied 'DeleteConversation)
               :> CanThrow 'ConvNotFound
               :> CanThrow 'InvalidOperation
diff --git a/libs/wire-api/src/Wire/API/Routes/Public/Proxy.hs b/libs/wire-api/src/Wire/API/Routes/Public/Proxy.hs
new file mode 100644
index 0000000000..8c28f7c6b4
--- /dev/null
+++ b/libs/wire-api/src/Wire/API/Routes/Public/Proxy.hs
@@ -0,0 +1,62 @@
+-- This file is part of the Wire Server implementation.
+--
+-- Copyright (C) 2022 Wire Swiss GmbH
+--
+-- This program is free software: you can redistribute it and/or modify it under
+-- the terms of the GNU Affero General Public License as published by the Free
+-- Software Foundation, either version 3 of the License, or (at your option) any
+-- later version.
+--
+-- This program is distributed in the hope that it will be useful, but WITHOUT
+-- ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS
+-- FOR A PARTICULAR PURPOSE. See the GNU Affero General Public License for more
+-- details.
+--
+-- You should have received a copy of the GNU Affero General Public License along
+-- with this program. If not, see <https://www.gnu.org/licenses/>.
+
+module Wire.API.Routes.Public.Proxy where
+
+import Data.SOP
+import qualified Data.Swagger as Swagger
+import Servant
+import Servant.API.Extended.RawM (RawM)
+import Servant.Swagger
+import Wire.API.Routes.Named
+
+type ProxyAPI =
+  ProxyAPIRoute "giphy-path" ("giphy" :> "v1" :> "gifs" :> RawM)
+    :<|> ProxyAPIRoute "youtube-path" ("youtube" :> "v3" :> RawM)
+    :<|> ProxyAPIRoute "gmaps-static" ("googlemaps" :> "api" :> "staticmap" :> RawM)
+    :<|> ProxyAPIRoute "gmaps-path" ("googlemaps" :> "maps" :> "api" :> "geocode" :> RawM)
+
+type ProxyAPIRoute name path = Named name (Summary (ProxyAPISummary name) :> "proxy" :> path)
+
+-- | API docs: if we want to make these longer, they won't clutter the routes above
+-- that they document.
+--
+-- youtube, google maps are only supported for old android. there is no strong reason to end
+-- support at any particular version, except the hope that old android won't need to support
+-- V4, and if nobody uses it, we shouldn't serve it. if you are a wire employee, see
+-- https://wearezeta.atlassian.net/wiki/spaces/ENGINEERIN/pages/685867582/Proxy+for+3rd+party+services
+-- for discussion.
+type family ProxyAPISummary name where
+  ProxyAPISummary "giphy-path" =
+    "proxy: `get /proxy/giphy/v1/gifs/:path`; see giphy API docs"
+  ProxyAPISummary "youtube-path" =
+    "[DEPRECATED] proxy: `get /proxy/youtube/v3/:path`; see youtube API docs"
+  ProxyAPISummary "gmaps-static" =
+    "[DEPRECATED] proxy: `get /proxy/googlemaps/api/staticmap`; see google maps API docs"
+  ProxyAPISummary "gmaps-path" =
+    "[DEPRECATED] proxy: `get /proxy/googlemaps/maps/api/geocode/:path`; see google maps API docs"
+
+-- | FUTUREWORK(fisx): (1) the verb could be added to the swagger docs in the appropriate
+-- place here; it's always defined in the `Summary`, but the `RawM` doesn't allow to constrain
+-- it. (2) there should be a way to make this more type-safe: `assertMethod` in
+-- "Proxy.API.Public" could take a type-level string literal argument containing the method,
+-- and that argument could be funnelled there from the routing table somehow: `"spotify" :>
+-- "api" :> "token" :> OnlyMethod "POST" :> RawM`, and then the `ServerT` instance for
+-- `OnlyMethod` requires a proxy argument in the handler of the same type. Or something. (am
+-- i massively over-engineering things here?)
+swaggerDoc :: Swagger.Swagger
+swaggerDoc = toSwagger (Proxy @ProxyAPI)
diff --git a/libs/wire-api/src/Wire/API/Routes/Version.hs b/libs/wire-api/src/Wire/API/Routes/Version.hs
index 68d46bf8ee..5586be14ba 100644
--- a/libs/wire-api/src/Wire/API/Routes/Version.hs
+++ b/libs/wire-api/src/Wire/API/Routes/Version.hs
@@ -64,11 +64,7 @@ data Version = V0 | V1 | V2 | V3
 instance ToSchema Version where
   schema =
     enum @Integer "Version" . mconcat $
-      [ element 0 V0,
-        element 1 V1,
-        element 2 V2,
-        element 3 V3
-      ]
+      (\v -> element (fromIntegral $ fromEnum v) v) <$> [minBound @Version ..]

 mkVersion :: Integer -> Maybe Version
 mkVersion n = case Aeson.fromJSON (Aeson.Number (fromIntegral n)) of
diff --git a/libs/wire-api/src/Wire/API/Routes/Version/Wai.hs b/libs/wire-api/src/Wire/API/Routes/Version/Wai.hs
index 25a4add2bc..545acdeae4 100644
--- a/libs/wire-api/src/Wire/API/Routes/Version/Wai.hs
+++ b/libs/wire-api/src/Wire/API/Routes/Version/Wai.hs
@@ -28,12 +28,12 @@ import Network.Wai.Utilities.Response
 import Wire.API.Routes.Version

 -- | Strip off version prefix. Return 404 if the version is not supported.
-versionMiddleware :: Middleware
-versionMiddleware app req k = case parseVersion (removeVersionHeader req) of
+versionMiddleware :: Set Version -> Middleware
+versionMiddleware disabledAPIVersions app req k = case parseVersion (removeVersionHeader req) of
   Nothing -> app req k
   Just (req', n) -> case mkVersion n of
-    Just v -> app (addVersionHeader v req') k
-    Nothing ->
+    Just v | v `notElem` disabledAPIVersions -> app (addVersionHeader v req') k
+    _ ->
       k $
         errorRs' $
           mkError HTTP.status404 "unsupported-version" $
diff --git a/libs/wire-api/src/Wire/API/User/Saml.hs b/libs/wire-api/src/Wire/API/User/Saml.hs
index 4d3939a3f4..eebe6a1f65 100644
--- a/libs/wire-api/src/Wire/API/User/Saml.hs
+++ b/libs/wire-api/src/Wire/API/User/Saml.hs
@@ -38,13 +38,9 @@ import Data.Time
 import GHC.TypeLits (KnownSymbol, symbolVal)
 import GHC.Types (Symbol)
 import Imports
-import SAML2.Util (parseURI', renderURI)
-import SAML2.WebSSO (Assertion, AuthnRequest, ID, IdPId)
-import qualified SAML2.WebSSO as SAML
+import SAML2.WebSSO
 import SAML2.WebSSO.Types.TH (deriveJSONOptions)
-import System.Logger.Extended (LogFormat)
 import URI.ByteString
-import Util.Options
 import Web.Cookie
 import Wire.API.User.Orphans ()
@@ -87,37 +83,6 @@ substituteVar var val = substituteVar' ("$" <> var) val . substituteVar' ("%24" <> var) val

 substituteVar' :: ST -> ST -> ST -> ST
 substituteVar' var val = ST.intercalate val . ST.splitOn var

-type Opts = Opts' DerivedOpts
-
--- FUTUREWORK: Shouldn't these types be in spar, not in wire-api?
-data Opts' a = Opts
-  { saml :: !SAML.Config,
-    brig :: !Endpoint,
-    galley :: !Endpoint,
-    cassandra :: !CassandraOpts,
-    maxttlAuthreq :: !(TTL "authreq"),
-    maxttlAuthresp :: !(TTL "authresp"),
-    -- | The maximum number of SCIM tokens that we will allow teams to have.
-    maxScimTokens :: !Int,
-    -- | The maximum size of rich info. Should be in sync with 'Brig.Types.richInfoLimit'.
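The `Version.hs` and `Wai.hs` hunks above are easy to misread in diff form. The following is a self-contained sketch, not the real wire-server code: `versionElements` stands in for the schema's `element` list (the real code uses schema-profunctor), and `routeVersion` returns `Nothing` where the real middleware responds 404 `unsupported-version`. It illustrates the two behaviours: deriving the version enumeration from `Bounded`/`Enum` instead of listing each constructor, and rejecting versions contained in `disabledAPIVersions`.

```haskell
-- Sketch only; simplified stand-ins for the wire-server types.
import qualified Data.Set as Set

data Version = V0 | V1 | V2 | V3
  deriving (Eq, Ord, Show, Enum, Bounded)

-- All (wire format number, version) pairs, derived rather than hand-written,
-- so adding V4 to the data type automatically extends the schema.
versionElements :: [(Integer, Version)]
versionElements = (\v -> (fromIntegral (fromEnum v), v)) <$> [minBound ..]

mkVersion :: Integer -> Maybe Version
mkVersion n = lookup n versionElements

-- A version is routed only if it parses and is not disabled;
-- Nothing models the middleware's 404 "unsupported-version" response.
routeVersion :: Set.Set Version -> Integer -> Maybe Version
routeVersion disabled n = case mkVersion n of
  Just v | v `Set.notMember` disabled -> Just v
  _ -> Nothing

main :: IO ()
main = do
  print versionElements
  print (routeVersion (Set.fromList [V0, V1]) 3)
  print (routeVersion (Set.fromList [V0, V1]) 1)
```

Note that an unknown number such as `9` also falls through to `Nothing`, so disabled and unknown versions produce the same 404, which matches the collapsed `Just v | v `notElem` disabledAPIVersions` / `_ ->` pattern in the hunk.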
-    richInfoLimit :: !Int,
-    -- | Wire/AWS specific; optional; used to discover Cassandra instance
-    -- IPs using describe-instances.
-    discoUrl :: !(Maybe Text),
-    logNetStrings :: !(Maybe (Last Bool)),
-    logFormat :: !(Maybe (Last LogFormat)),
-    -- , optSettings :: !Settings -- (nothing yet; see other services for what belongs in here.)
-    derivedOpts :: !a
-  }
-  deriving (Functor, Show, Generic)
-
-instance FromJSON (Opts' (Maybe ()))
-
-data DerivedOpts = DerivedOpts
-  { derivedOptsScimBaseURI :: !URI
-  }
-  deriving (Show, Generic)
-
 -- | (seconds)
 newtype TTL (tablename :: Symbol) = TTL {fromTTL :: Int32}
   deriving (Eq, Ord, Show, Num)
@@ -134,9 +99,6 @@ data TTLError = TTLTooLong String String | TTLNegative String
 ttlToNominalDiffTime :: TTL a -> NominalDiffTime
 ttlToNominalDiffTime (TTL i32) = fromIntegral i32

-maxttlAuthreqDiffTime :: Opts -> NominalDiffTime
-maxttlAuthreqDiffTime = ttlToNominalDiffTime . maxttlAuthreq
-
 data SsoSettings = SsoSettings
   { defaultSsoCode :: !(Maybe IdPId)
   }
diff --git a/libs/wire-api/test/golden/testObject_Event_user_8.json b/libs/wire-api/test/golden/testObject_Event_user_8.json
index cfe4ffda5b..8906b27147 100644
--- a/libs/wire-api/test/golden/testObject_Event_user_8.json
+++ b/libs/wire-api/test/golden/testObject_Event_user_8.json
@@ -10,7 +10,8 @@
         "invite",
         "link"
       ],
-      "access_role": [
+      "access_role": "non_activated",
+      "access_role_v2": [
         "team_member",
         "guest",
         "service"
diff --git a/libs/wire-api/wire-api.cabal b/libs/wire-api/wire-api.cabal
index 1a483c53a7..15de52a729 100644
--- a/libs/wire-api/wire-api.cabal
+++ b/libs/wire-api/wire-api.cabal
@@ -13,6 +13,7 @@ build-type: Simple
 library
   -- cabal-fmt: expand src
   exposed-modules:
+    Wire.API.ApplyMods
     Wire.API.Asset
     Wire.API.Call.Config
     Wire.API.Connection
@@ -39,6 +40,7 @@ library
     Wire.API.Event.Team
     Wire.API.Internal.BulkPush
     Wire.API.Internal.Notification
+    Wire.API.MakesFederatedCall
     Wire.API.Message
     Wire.API.Message.Proto
     Wire.API.MLS.CipherSuite
@@ -106,6 +108,7 @@ library
     Wire.API.Routes.Public.Galley.TeamConversation
     Wire.API.Routes.Public.Galley.TeamMember
     Wire.API.Routes.Public.Gundeck
+    Wire.API.Routes.Public.Proxy
     Wire.API.Routes.Public.Spar
     Wire.API.Routes.Public.Util
     Wire.API.Routes.QualifiedCapture
diff --git a/libs/zauth/README.md b/libs/zauth/README.md
index ef15882c19..bd3cc12b48 100644
--- a/libs/zauth/README.md
+++ b/libs/zauth/README.md
@@ -7,10 +7,10 @@ version ::= "v=" Integer
 key-index ::= "k=" Integer (> 0)
 timestamp ::= "d=" Integer (POSIX timestamp, expiration time)
 type ::= "t=" ("a" | "u" | "b" | "p") ; access, user, bot, provider
-tag ::= "l=" ("s" | "" (session or nothing))
+tag ::= "l=" ("s" | "") ; session or nothing
 type-specific-data ::= | | |
 access-data ::= "u=" "." "c="
-user-data ::= "u=" "." "r="
+access-data ::= "u=" "." "c=" ("i=" | "")
+user-data ::= "u=" "." "r=" ("i=" | "")
 bot-data ::= "p=" "." "b=" "." "c="
 provider-data ::= "p="
 ```
@@ -21,6 +21,10 @@ provider-data ::= "p="

 `7B2fdkjqBm0BZEpvF_1itY-W22LM2RWLDIQgu2k7d-BJojlMfyNpVfXYPEQiWpcCztmwZO_yphgKhhtKetiuCw==.v=1.k=1.d=1409335821.t=u.l=.u=c5eda68f-93f3-4413-93fe-d45e81f8a9f9.r=bb3d1d9f`

+#### User-Token (with client id)
+
+`vpJs7PEgwtsuzGlMY0-Vqs22s8o9ZDlp7wJrPmhCgIfg0NoTAxvxq5OtknabLMfNTEW9amn5tyeUM7tbFZABBA==.v=1.k=1.d=1466770905.t=u.l=.u=6562d941-4f40-4db4-b96e-56a06d71c2c3.r=4feacc.i=deadbeef`
+
 ### User-Token (Session)

 `7CPhoJv6TOYr7epokS6S2pj0nLoV-mJ_o5iRUII3JM5jBItZzluXNNGb-u476EYQM0fpr1qUGK2eRuKCZuELBA==.v=1.k=1.d=1429832092.t=u.l=s.u=161e7fe7-9a71-4ffd-9a79-de9ee2fa178c.r=3f6a49c4`
@@ -29,6 +33,10 @@ provider-data ::= "p="

 `5Bdn6CnDO2yIng7_MblYFhMNEo27ESsHsZmD40fNpcTdEybk15dw7zUVOcJDeFyf6QbEsZF4ruNKRu1ICmbzCg==.v=1.k=1.d=1419834921.t=a.l=.u=c5eda68f-93f3-4413-93fe-d45e81f8a9f9.c=8875802285613998639`

+#### Access-Token (with client id)
+
+`aEPOxMwUriGEv2qc7Pb672ygy-6VeJ-8VrX3jmwalZr7xygU4izyCWxiT7IXfybnNGIsk1FQPb0RRVPx1s2UCw==.v=1.k=1.d=1466770783.t=a.l=.u=6562d941-4f40-4db4-b96e-56a06d71c2c3.c=11019722839397809329.i=deadbeef`
+
 # Token creation

 Given:
diff --git a/nix/haskell-pins.nix b/nix/haskell-pins.nix
index 93f2aea5d1..ff39dbda96 100644
--- a/nix/haskell-pins.nix
+++ b/nix/haskell-pins.nix
@@ -93,9 +93,9 @@ let
   };
   amazonka = {
     src = fetchgit {
-      url = "https://github.com/wireapp/amazonka";
-      rev = "7ced54b0396296307b9871d293cc0ac161e5743d";
-      sha256 = "0md658m32zrvzc8nljn58r8iw4rqxpihgdnqrhl8vnmkq6i9np51";
+      url = "https://github.com/brendanhay/amazonka";
+      rev = "cfe2584aef0b03c86650372d362c74f237925d8c";
+      sha256 = "sha256-ss8IuIN0BbS6LMjlaFmUdxUqQu+IHsA8ucsjxXJwbyg=";
     };
     packages = {
       amazonka = "lib/amazonka";
@@ -172,6 +172,13 @@ let
       sha256 = "1w23yz2iiayniymk7k4g8gww7268187cayw0c8m3bz2hbnvbyfbc";
     };
   };
+  swagger2 = {
+    src = fetchgit {
+      url = "https://github.com/wireapp/swagger2";
+      rev = "ba916df2775bb38ec603b726bbebfb65a908317a";
+      sha256 = "sha256-IcsrJ5ur8Zm7Xp1PQBOb+2N7T8WMI8jJ6YuDv8ypsPQ=";
+    };
+  };
   cql-io = {
     src = fetchgit {
       url = "https://gitlab.com/axeman/cql-io";
@@ -203,37 +210,13 @@ let
       tasty-hunit = "hunit";
     };
   };
-  polysemy = {
-    src = fetchgit {
-      url = "https://github.com/polysemy-research/polysemy.git";
-      rev = "3855786e58bf397ca8204f3a79d19c24485dabd4";
-      sha256 = "sha256-4ans30VWuSMC9HNFb6FWQyc30oxJd2dmFrMGu5/dLg0=";
-    };
-  };
-  polysemy-plugin = {
-    src = fetchgit {
-      url = "https://github.com/polysemy-research/polysemy.git";
-      rev = "3855786e58bf397ca8204f3a79d19c24485dabd4";
-      sha256 = "sha256-4ans30VWuSMC9HNFb6FWQyc30oxJd2dmFrMGu5/dLg0=";
-    };
-    packages = {
-      polysemy-plugin = "polysemy-plugin";
-    };
-  };
-  polysemy-check = {
-    src = fetchgit {
-      url = "https://github.com/polysemy-research/polysemy-check.git";
-      rev = "4c0d3ff929ae22ae68d962f7f3f7056f357bf7ac";
-      sha256 = "sha256-8XeCeJWbkdqrUf6tERFMoGM8xRI5l/nKNqI810kzMs0=";
-    };
-  };
   tasty-hedgehog = {
     src =
fetchgit { url = "https://github.com/qfpl/tasty-hedgehog"; rev = "729617f82699be189954825920d6f30985e1cfa7"; sha256 = "sha256-O81wlQbzwCOWLueDLiqf/K2g9XWvSNWgHv7IbYmLsgI="; }; - }; + }; jose = { src = fetchgit { url = "https://github.com/frasertweedale/hs-jose"; @@ -241,17 +224,6 @@ let sha256 = "sha256-SKEE9ZqhjBxHYUKQaoB4IpN4/Ui3tS4S98FgZqj7WlY="; }; }; - kind-generics = { - src = fetchgit { - url = "https://gitlab.com/trupill/kind-generics.git"; - rev = "f4ad2bcfacc9c3dcecf64c069d086926465cab2c"; - sha256 = "sha256-uvQMV8aTNyTN+ozrseohexbCneVPMO35Jf1eEhLPk78="; - }; - packages = { - kind-generics = "kind-generics"; - kind-generics-th = "kind-generics-th"; - }; - }; # This can be removed once postie 0.6.0.3 (or later) is in nixpkgs postie = { src = fetchgit { @@ -270,6 +242,26 @@ let version = "0.2.2.1"; sha256 = "sha256-TdsLB0ueaUUllLdvcGu3YNQXCfGRRk5WxP3deHEbHGI="; }; + kind-generics = { + version = "0.4.1.2"; + sha256 = "sha256-orDfC5+QXRlAMVaqAhT1Fo7Eh/AnobROWeliZqEAXZU="; + }; + kind-generics-th = { + version = "0.2.2.2"; + sha256 = "sha256-nPuRq19UGVXe4YrITAZcF+U4TUBo5APMT2Nh9NqIkxQ="; + }; + polysemy = { + version = "1.8.0.0"; + sha256 = "sha256-AdxxKWXdUjZiHLDj6iswMWpycs7mFB8eKhBR4ljF6kk="; + }; + polysemy-check = { + version = "0.9.0.1"; + sha256 = "sha256-CsL2vMxAmpvVVR/iUnZAkbcRLiy/a8ulJQ6QwtCYmRM="; + }; + polysemy-plugin = { + version = "0.4.3.1"; + sha256 = "sha256-0vkLYNZISr3fmmQvD8qdLkn2GHc80l1GzJuOmqjqXE4="; + }; singletons = { version = "2.7"; sha256 = "sha256-q7yc/wyGSyYI0KdgHgRi0WISv9WEibxQ5yM7cSjXS2s="; diff --git a/nix/local-haskell-packages.nix b/nix/local-haskell-packages.nix index 387f117aa1..aea935c787 100644 --- a/nix/local-haskell-packages.nix +++ b/nix/local-haskell-packages.nix @@ -51,6 +51,7 @@ move-team = hself.callPackage ../tools/db/move-team/default.nix { inherit gitignoreSource; }; repair-handles = hself.callPackage ../tools/db/repair-handles/default.nix { inherit gitignoreSource; }; service-backfill = hself.callPackage 
../tools/db/service-backfill/default.nix { inherit gitignoreSource; }; + fedcalls = hself.callPackage ../tools/fedcalls/default.nix { inherit gitignoreSource; }; rex = hself.callPackage ../tools/rex/default.nix { inherit gitignoreSource; }; stern = hself.callPackage ../tools/stern/default.nix { inherit gitignoreSource; }; } diff --git a/services/brig/brig.cabal b/services/brig/brig.cabal index c3a03b5b4e..e5ed70769b 100644 --- a/services/brig/brig.cabal +++ b/services/brig/brig.cabal @@ -185,10 +185,11 @@ library build-depends: aeson >=2.0.1.0 - , amazonka >=1.3.7 - , amazonka-dynamodb >=1.3.7 - , amazonka-ses >=1.3.7 - , amazonka-sqs >=1.3.7 + , amazonka >=2 + , amazonka-core >=2 + , amazonka-dynamodb >=2 + , amazonka-ses >=2 + , amazonka-sqs >=2 , async >=2.1 , attoparsec >=0.12 , auto-update >=0.1 @@ -364,7 +365,7 @@ executable brig ghc-options: -O2 -Wall -Wincomplete-uni-patterns -Wincomplete-record-updates -Wpartial-fields -fwarn-tabs -optP-Wno-nonportable-include-path - -funbox-strict-fields -threaded -with-rtsopts=-N1 -with-rtsopts=-T + -funbox-strict-fields -threaded -with-rtsopts=-N -with-rtsopts=-T -rtsopts build-depends: diff --git a/services/brig/default.nix b/services/brig/default.nix index ca9e36c9e7..d1c4e8a723 100644 --- a/services/brig/default.nix +++ b/services/brig/default.nix @@ -5,6 +5,7 @@ { mkDerivation , aeson , amazonka +, amazonka-core , amazonka-dynamodb , amazonka-ses , amazonka-sqs @@ -169,6 +170,7 @@ mkDerivation { libraryHaskellDepends = [ aeson amazonka + amazonka-core amazonka-dynamodb amazonka-ses amazonka-sqs diff --git a/services/brig/src/Brig/API.hs b/services/brig/src/Brig/API.hs index f5cef37677..f9bc09e4ed 100644 --- a/services/brig/src/Brig/API.hs +++ b/services/brig/src/Brig/API.hs @@ -36,16 +36,17 @@ import Wire.Sem.Concurrency sitemap :: forall r p. 
- Members - '[ BlacklistPhonePrefixStore, - BlacklistStore, - GalleyProvider, - CodeStore, - Concurrency 'Unsafe, - PasswordResetStore, - UserPendingActivationStore p - ] - r => + ( Members + '[ BlacklistPhonePrefixStore, + BlacklistStore, + GalleyProvider, + CodeStore, + Concurrency 'Unsafe, + PasswordResetStore, + UserPendingActivationStore p + ] + r + ) => Routes Doc.ApiBuilder (Handler r) () sitemap = do Public.sitemap diff --git a/services/brig/src/Brig/API/Auth.hs b/services/brig/src/Brig/API/Auth.hs index cce5a9e335..b89733053e 100644 --- a/services/brig/src/Brig/API/Auth.hs +++ b/services/brig/src/Brig/API/Auth.hs @@ -43,6 +43,7 @@ import Network.HTTP.Types import Network.Wai.Utilities ((!>>)) import qualified Network.Wai.Utilities.Error as Wai import Polysemy +import Wire.API.Federation.API import Wire.API.User import Wire.API.User.Auth hiding (access) import Wire.API.User.Auth.LegalHold @@ -50,6 +51,7 @@ import Wire.API.User.Auth.ReAuth import Wire.API.User.Auth.Sso accessH :: + CallsFed 'Brig "on-user-deleted-connections" => Maybe ClientId -> [Either Text SomeUserToken] -> Maybe (Either Text SomeAccessToken) -> @@ -61,7 +63,7 @@ accessH mcid ut' mat' = do >>= either (uncurry (access mcid)) (uncurry (access mcid)) access :: - TokenPair u a => + (TokenPair u a, CallsFed 'Brig "on-user-deleted-connections") => Maybe ClientId -> NonEmpty (Token u) -> Maybe (Token a) -> @@ -76,7 +78,7 @@ sendLoginCode (SendLoginCode phone call force) = do c <- wrapClientE (Auth.sendLoginCode phone call force) !>> sendLoginCodeError pure $ LoginCodeTimeout (pendingLoginTimeout c) -login :: Member GalleyProvider r => Login -> Maybe Bool -> Handler r SomeAccess +login :: (Member GalleyProvider r, CallsFed 'Brig "on-user-deleted-connections") => Login -> Maybe Bool -> Handler r SomeAccess login l (fromMaybe False -> persist) = do let typ = if persist then PersistentCookie else SessionCookie c <- Auth.login l typ !>> loginError @@ -128,13 +130,13 @@ removeCookies :: Local UserId 
-> RemoveCookies -> Handler r () removeCookies lusr (RemoveCookies pw lls ids) = wrapClientE (Auth.revokeAccess (tUnqualified lusr) pw ids lls) !>> authError -legalHoldLogin :: Member GalleyProvider r => LegalHoldLogin -> Handler r SomeAccess +legalHoldLogin :: (Member GalleyProvider r, CallsFed 'Brig "on-user-deleted-connections") => LegalHoldLogin -> Handler r SomeAccess legalHoldLogin lhl = do let typ = PersistentCookie -- Session cookie isn't a supported use case here c <- Auth.legalHoldLogin lhl typ !>> legalHoldLoginError traverse mkUserTokenCookie c -ssoLogin :: SsoLogin -> Maybe Bool -> Handler r SomeAccess +ssoLogin :: CallsFed 'Brig "on-user-deleted-connections" => SsoLogin -> Maybe Bool -> Handler r SomeAccess ssoLogin l (fromMaybe False -> persist) = do let typ = if persist then PersistentCookie else SessionCookie c <- wrapHttpClientE (Auth.ssoLogin l typ) !>> loginError diff --git a/services/brig/src/Brig/API/Client.hs b/services/brig/src/Brig/API/Client.hs index 5ba34937e8..4c30599850 100644 --- a/services/brig/src/Brig/API/Client.hs +++ b/services/brig/src/Brig/API/Client.hs @@ -93,6 +93,7 @@ import Polysemy (Member, Members) import Servant (Link, ToHttpApiData (toUrlPiece)) import System.Logger.Class (field, msg, val, (~~)) import qualified System.Logger.Class as Log +import Wire.API.Federation.API import Wire.API.Federation.API.Brig (GetUserClients (GetUserClients)) import Wire.API.Federation.Error import Wire.API.MLS.Credential (ClientIdentity (..)) @@ -115,12 +116,12 @@ lookupLocalClient uid = wrapClient . Data.lookupClient uid lookupLocalClients :: UserId -> (AppT r) [Client] lookupLocalClients = wrapClient . Data.lookupClients -lookupPubClient :: Qualified UserId -> ClientId -> ExceptT ClientError (AppT r) (Maybe PubClient) +lookupPubClient :: CallsFed 'Brig "get-user-clients" => Qualified UserId -> ClientId -> ExceptT ClientError (AppT r) (Maybe PubClient) lookupPubClient qid cid = do clients <- lookupPubClients qid pure $ find ((== cid) . 
pubClientId) clients -lookupPubClients :: Qualified UserId -> ExceptT ClientError (AppT r) [PubClient] +lookupPubClients :: CallsFed 'Brig "get-user-clients" => Qualified UserId -> ExceptT ClientError (AppT r) [PubClient] lookupPubClients qid@(Qualified uid domain) = do getForUser <$> lookupPubClientsBulk [qid] where @@ -129,7 +130,7 @@ lookupPubClients qid@(Qualified uid domain) = do um <- userMap <$> Map.lookup domain (qualifiedUserMap qmap) Set.toList <$> Map.lookup uid um -lookupPubClientsBulk :: [Qualified UserId] -> ExceptT ClientError (AppT r) (QualifiedUserMap (Set PubClient)) +lookupPubClientsBulk :: CallsFed 'Brig "get-user-clients" => [Qualified UserId] -> ExceptT ClientError (AppT r) (QualifiedUserMap (Set PubClient)) lookupPubClientsBulk qualifiedUids = do loc <- qualifyLocal () let (localUsers, remoteUsers) = partitionQualified loc qualifiedUids @@ -145,7 +146,7 @@ lookupLocalPubClientsBulk :: [UserId] -> ExceptT ClientError (AppT r) (UserMap ( lookupLocalPubClientsBulk = lift . wrapClient . Data.lookupPubClientsBulk addClient :: - Members '[GalleyProvider] r => + (Members '[GalleyProvider] r, CallsFed 'Brig "on-user-deleted-connections") => UserId -> Maybe ConnId -> Maybe IP -> @@ -157,7 +158,7 @@ addClient = addClientWithReAuthPolicy Data.reAuthForNewClients -- a superset of the clients known to galley. addClientWithReAuthPolicy :: forall r. 
- Members '[GalleyProvider] r => + (Members '[GalleyProvider] r, CallsFed 'Brig "on-user-deleted-connections") => Data.ReAuthPolicy -> UserId -> Maybe ConnId -> @@ -238,6 +239,7 @@ rmClient u con clt pw = lift $ execDelete u (Just con) client claimPrekey :: + CallsFed 'Brig "claim-prekey" => LegalholdProtectee -> UserId -> Domain -> @@ -264,14 +266,15 @@ claimLocalPrekey protectee user client = do claimRemotePrekey :: ( MonadReader Env m, Log.MonadLogger m, - MonadClient m + MonadClient m, + CallsFed 'Brig "claim-prekey" ) => Qualified UserId -> ClientId -> ExceptT ClientError m (Maybe ClientPrekey) claimRemotePrekey quser client = fmapLT ClientFederationError $ Federation.claimPrekey quser client -claimPrekeyBundle :: LegalholdProtectee -> Domain -> UserId -> ExceptT ClientError (AppT r) PrekeyBundle +claimPrekeyBundle :: CallsFed 'Brig "claim-prekey-bundle" => LegalholdProtectee -> Domain -> UserId -> ExceptT ClientError (AppT r) PrekeyBundle claimPrekeyBundle protectee domain uid = do isLocalDomain <- (domain ==) <$> viewFederationDomain if isLocalDomain @@ -284,13 +287,13 @@ claimLocalPrekeyBundle protectee u = do guardLegalhold protectee (mkUserClients [(u, clients)]) PrekeyBundle u . catMaybes <$> lift (mapM (wrapHttp . Data.claimPrekey u) clients) -claimRemotePrekeyBundle :: Qualified UserId -> ExceptT ClientError (AppT r) PrekeyBundle +claimRemotePrekeyBundle :: CallsFed 'Brig "claim-prekey-bundle" => Qualified UserId -> ExceptT ClientError (AppT r) PrekeyBundle claimRemotePrekeyBundle quser = do Federation.claimPrekeyBundle quser !>> ClientFederationError claimMultiPrekeyBundles :: forall r. 
- Members '[Concurrency 'Unsafe] r => + (Members '[Concurrency 'Unsafe] r, CallsFed 'Brig "claim-multi-prekey-bundle") => LegalholdProtectee -> QualifiedUserClients -> ExceptT ClientError (AppT r) QualifiedUserClientPrekeyMap @@ -410,7 +413,7 @@ pubClient c = pubClientClass = clientClass c } -legalHoldClientRequested :: UserId -> LegalHoldClientRequest -> (AppT r) () +legalHoldClientRequested :: CallsFed 'Brig "on-user-deleted-connections" => UserId -> LegalHoldClientRequest -> (AppT r) () legalHoldClientRequested targetUser (LegalHoldClientRequest _requester lastPrekey') = wrapHttpClient $ Intra.onUserEvent targetUser Nothing lhClientEvent where @@ -421,7 +424,7 @@ legalHoldClientRequested targetUser (LegalHoldClientRequest _requester lastPreke lhClientEvent :: UserEvent lhClientEvent = LegalHoldClientRequested eventData -removeLegalHoldClient :: UserId -> (AppT r) () +removeLegalHoldClient :: CallsFed 'Brig "on-user-deleted-connections" => UserId -> (AppT r) () removeLegalHoldClient uid = do clients <- wrapClient $ Data.lookupClients uid -- Should only be one; but just in case we'll treat it as a list diff --git a/services/brig/src/Brig/API/Connection.hs b/services/brig/src/Brig/API/Connection.hs index e3ba7798ae..f1c54d08dc 100644 --- a/services/brig/src/Brig/API/Connection.hs +++ b/services/brig/src/Brig/API/Connection.hs @@ -60,6 +60,7 @@ import Wire.API.Connection hiding (relationWithHistory) import Wire.API.Conversation import Wire.API.Error import qualified Wire.API.Error.Brig as E +import Wire.API.Federation.API import Wire.API.Routes.Public.Util (ResponseForExistedCreated (..)) ensureIsActivated :: Local UserId -> MaybeT (AppT r) () @@ -75,7 +76,7 @@ ensureNotSameTeam self target = do throwE ConnectSameBindingTeamUsers createConnection :: - Members '[GalleyProvider] r => + (Members '[GalleyProvider] r, CallsFed 'Brig "send-connection-action") => Local UserId -> ConnId -> Qualified UserId -> @@ -210,6 +211,7 @@ checkLegalholdPolicyConflict uid1 uid2 = do 
oneway status2 status1 updateConnection :: + CallsFed 'Brig "send-connection-action" => Local UserId -> Qualified UserId -> Relation -> diff --git a/services/brig/src/Brig/API/Connection/Remote.hs b/services/brig/src/Brig/API/Connection/Remote.hs index 54894fb30f..4567753e68 100644 --- a/services/brig/src/Brig/API/Connection/Remote.hs +++ b/services/brig/src/Brig/API/Connection/Remote.hs @@ -39,6 +39,7 @@ import Galley.Types.Conversations.Intra (Actor (..), DesiredMembership (..), Ups import Imports import Network.Wai.Utilities.Error import Wire.API.Connection +import Wire.API.Federation.API import Wire.API.Federation.API.Brig ( NewConnectionResponse (..), RemoteConnectionAction (..), @@ -187,6 +188,7 @@ pushEvent self mzcon connection = do Intra.onConnectionEvent (tUnqualified self) mzcon event performLocalAction :: + CallsFed 'Brig "send-connection-action" => Local UserId -> Maybe ConnId -> Remote UserId -> @@ -251,6 +253,7 @@ performRemoteAction self other mconnection action = do reaction _ = Nothing createConnectionToRemoteUser :: + CallsFed 'Brig "send-connection-action" => Local UserId -> ConnId -> Remote UserId -> @@ -260,6 +263,7 @@ createConnectionToRemoteUser self zcon other = do fst <$> performLocalAction self (Just zcon) other mconnection LocalConnect updateConnectionToRemoteUser :: + CallsFed 'Brig "send-connection-action" => Local UserId -> Remote UserId -> Relation -> diff --git a/services/brig/src/Brig/API/Internal.hs b/services/brig/src/Brig/API/Internal.hs index 3f575cddf8..fe8da684e2 100644 --- a/services/brig/src/Brig/API/Internal.hs +++ b/services/brig/src/Brig/API/Internal.hs @@ -88,6 +88,7 @@ import UnliftIO.Async import Wire.API.Connection import Wire.API.Error import qualified Wire.API.Error.Brig as E +import Wire.API.Federation.API import Wire.API.MLS.Credential import Wire.API.MLS.KeyPackage import Wire.API.MLS.Serialisation @@ -162,16 +163,17 @@ mlsAPI = :<|> Named @"put-key-package-add" upsertKeyPackage accountAPI :: - Members - '[ 
BlacklistStore, - GalleyProvider, - UserPendingActivationStore p - ] - r => + ( Members + '[ BlacklistStore, + GalleyProvider, + UserPendingActivationStore p + ] + r + ) => ServerT BrigIRoutes.AccountAPI (Handler r) accountAPI = - Named @"createUserNoVerify" createUserNoVerify - :<|> Named @"createUserNoVerifySpar" createUserNoVerifySpar + Named @"createUserNoVerify" (callsFed createUserNoVerify) + :<|> Named @"createUserNoVerifySpar" (callsFed createUserNoVerifySpar) teamsAPI :: ServerT BrigIRoutes.TeamsAPI (Handler r) teamsAPI = Named @"updateSearchVisibilityInbound" Index.updateSearchVisibilityInbound @@ -182,10 +184,10 @@ userAPI = :<|> deleteLocale :<|> getDefaultUserLocale -authAPI :: Member GalleyProvider r => ServerT BrigIRoutes.AuthAPI (Handler r) +authAPI :: (Member GalleyProvider r) => ServerT BrigIRoutes.AuthAPI (Handler r) authAPI = - Named @"legalhold-login" legalHoldLogin - :<|> Named @"sso-login" ssoLogin + Named @"legalhold-login" (callsFed legalHoldLogin) + :<|> Named @"sso-login" (callsFed ssoLogin) :<|> Named @"login-code" getLoginCode :<|> Named @"reauthenticate" reauthenticate @@ -296,17 +298,18 @@ swaggerDocsAPI = swaggerSchemaUIServer BrigIRoutes.swaggerDoc -- Sitemap (wai-route) sitemap :: - Members - '[ CodeStore, - PasswordResetStore, - BlacklistStore, - BlacklistPhonePrefixStore, - GalleyProvider, - UserPendingActivationStore p - ] - r => + ( Members + '[ CodeStore, + PasswordResetStore, + BlacklistStore, + BlacklistPhonePrefixStore, + GalleyProvider, + UserPendingActivationStore p + ] + r + ) => Routes a (Handler r) () -sitemap = do +sitemap = unsafeCallsFed @'Brig @"on-user-deleted-connections" $ do get "/i/status" (continue $ const $ pure empty) true head "/i/status" (continue $ const $ pure empty) true @@ -468,10 +471,12 @@ sitemap = do -- | Add a client without authentication checks addClientInternalH :: - Members - '[ GalleyProvider - ] - r => + ( Members + '[ GalleyProvider + ] + r, + CallsFed 'Brig "on-user-deleted-connections" + 
) => UserId ::: Maybe Bool ::: JsonRequest NewClient ::: Maybe ConnId ::: JSON -> (Handler r) Response addClientInternalH (usr ::: mSkipReAuth ::: req ::: connId ::: _) = do @@ -479,10 +484,12 @@ addClientInternalH (usr ::: mSkipReAuth ::: req ::: connId ::: _) = do setStatus status201 . json <$> addClientInternal usr mSkipReAuth new connId addClientInternal :: - Members - '[ GalleyProvider - ] - r => + ( Members + '[ GalleyProvider + ] + r, + CallsFed 'Brig "on-user-deleted-connections" + ) => UserId -> Maybe Bool -> NewClient -> @@ -494,13 +501,13 @@ addClientInternal usr mSkipReAuth new connId = do | otherwise = Data.reAuthForNewClients API.addClientWithReAuthPolicy policy usr connId Nothing new !>> clientError -legalHoldClientRequestedH :: UserId ::: JsonRequest LegalHoldClientRequest ::: JSON -> (Handler r) Response +legalHoldClientRequestedH :: (CallsFed 'Brig "on-user-deleted-connections") => UserId ::: JsonRequest LegalHoldClientRequest ::: JSON -> (Handler r) Response legalHoldClientRequestedH (targetUser ::: req ::: _) = do clientRequest <- parseJsonBody req lift $ API.legalHoldClientRequested targetUser clientRequest pure $ setStatus status200 empty -removeLegalHoldClientH :: UserId ::: JSON -> (Handler r) Response +removeLegalHoldClientH :: (CallsFed 'Brig "on-user-deleted-connections") => UserId ::: JSON -> (Handler r) Response removeLegalHoldClientH (uid ::: _) = do lift $ API.removeLegalHoldClient uid pure $ setStatus status200 empty @@ -523,12 +530,14 @@ internalListFullClients (UserSet usrs) = UserClientsFull <$> wrapClient (Data.lookupClientsBulk (Set.toList usrs)) createUserNoVerify :: - Members - '[ BlacklistStore, - GalleyProvider, - UserPendingActivationStore p - ] - r => + ( Members + '[ BlacklistStore, + GalleyProvider, + UserPendingActivationStore p + ] + r, + CallsFed 'Brig "on-user-deleted-connections" + ) => NewUser -> (Handler r) (Either RegisterError SelfProfile) createUserNoVerify uData = lift . 
runExceptT $ do @@ -545,10 +554,12 @@ createUserNoVerify uData = lift . runExceptT $ do pure . SelfProfile $ usr createUserNoVerifySpar :: - Members - '[ GalleyProvider - ] - r => + ( Members + '[ GalleyProvider + ] + r, + CallsFed 'Brig "on-user-deleted-connections" + ) => NewUserSpar -> (Handler r) (Either CreateUserSparError SelfProfile) createUserNoVerifySpar uData = @@ -565,7 +576,7 @@ createUserNoVerifySpar uData = in API.activate key code (Just uid) !>> CreateUserSparRegistrationError . activationErrorToRegisterError pure . SelfProfile $ usr -deleteUserNoAuthH :: UserId -> (Handler r) Response +deleteUserNoAuthH :: (CallsFed 'Brig "on-user-deleted-connections") => UserId -> (Handler r) Response deleteUserNoAuthH uid = do r <- lift $ wrapHttp $ API.ensureAccountDeleted uid case r of @@ -664,7 +675,7 @@ newtype GetPasswordResetCodeResp = GetPasswordResetCodeResp (PasswordResetKey, P instance ToJSON GetPasswordResetCodeResp where toJSON (GetPasswordResetCodeResp (k, c)) = object ["key" .= k, "code" .= c] -changeAccountStatusH :: UserId ::: JsonRequest AccountStatusUpdate -> (Handler r) Response +changeAccountStatusH :: (CallsFed 'Brig "on-user-deleted-connections") => UserId ::: JsonRequest AccountStatusUpdate -> (Handler r) Response changeAccountStatusH (usr ::: req) = do status <- suStatus <$> parseJsonBody req wrapHttpClientE (API.changeSingleAccountStatus usr status) !>> accountStatusError @@ -701,7 +712,7 @@ getConnectionsStatus (ConnectionsStatusRequestV2 froms mtos mrel) = do where filterByRelation l rel = filter ((== rel) . csv2Status) l -revokeIdentityH :: Either Email Phone -> (Handler r) Response +revokeIdentityH :: (CallsFed 'Brig "on-user-deleted-connections") => Either Email Phone -> (Handler r) Response revokeIdentityH emailOrPhone = do lift $ API.revokeIdentity emailOrPhone pure $ setStatus status200 empty @@ -748,7 +759,7 @@ addPhonePrefixH (_ ::: req) = do void . 
lift $ API.phonePrefixInsert prefix pure empty -updateSSOIdH :: UserId ::: JSON ::: JsonRequest UserSSOId -> (Handler r) Response +updateSSOIdH :: (CallsFed 'Brig "on-user-deleted-connections") => UserId ::: JSON ::: JsonRequest UserSSOId -> (Handler r) Response updateSSOIdH (uid ::: _ ::: req) = do ssoid :: UserSSOId <- parseJsonBody req success <- lift $ wrapClient $ Data.updateSSOId uid (Just ssoid) @@ -758,7 +769,7 @@ updateSSOIdH (uid ::: _ ::: req) = do pure empty else pure . setStatus status404 $ plain "User does not exist or has no team." -deleteSSOIdH :: UserId ::: JSON -> (Handler r) Response +deleteSSOIdH :: (CallsFed 'Brig "on-user-deleted-connections") => UserId ::: JSON -> (Handler r) Response deleteSSOIdH (uid ::: _) = do success <- lift $ wrapClient $ Data.updateSSOId uid Nothing if success @@ -814,18 +825,18 @@ getRichInfoMulti :: [UserId] -> (Handler r) [(UserId, RichInfo)] getRichInfoMulti uids = lift (wrapClient $ API.lookupRichInfoMultiUsers uids) -updateHandleH :: UserId ::: JSON ::: JsonRequest HandleUpdate -> (Handler r) Response +updateHandleH :: (CallsFed 'Brig "on-user-deleted-connections") => UserId ::: JSON ::: JsonRequest HandleUpdate -> (Handler r) Response updateHandleH (uid ::: _ ::: body) = empty <$ (updateHandle uid =<< parseJsonBody body) -updateHandle :: UserId -> HandleUpdate -> (Handler r) () +updateHandle :: (CallsFed 'Brig "on-user-deleted-connections") => UserId -> HandleUpdate -> (Handler r) () updateHandle uid (HandleUpdate handleUpd) = do handle <- validateHandle handleUpd API.changeHandle uid Nothing handle API.AllowSCIMUpdates !>> changeHandleError -updateUserNameH :: UserId ::: JSON ::: JsonRequest NameUpdate -> (Handler r) Response +updateUserNameH :: (CallsFed 'Brig "on-user-deleted-connections") => UserId ::: JSON ::: JsonRequest NameUpdate -> (Handler r) Response updateUserNameH (uid ::: _ ::: body) = empty <$ (updateUserName uid =<< parseJsonBody body) -updateUserName :: UserId -> NameUpdate -> (Handler r) () 
+updateUserName :: (CallsFed 'Brig "on-user-deleted-connections") => UserId -> NameUpdate -> (Handler r) () updateUserName uid (NameUpdate nameUpd) = do name <- either (const $ throwStd (errorToWai @'E.InvalidUser)) pure $ mkName nameUpd let uu = diff --git a/services/brig/src/Brig/API/MLS/KeyPackages.hs b/services/brig/src/Brig/API/MLS/KeyPackages.hs index 74742fe176..63379c4de8 100644 --- a/services/brig/src/Brig/API/MLS/KeyPackages.hs +++ b/services/brig/src/Brig/API/MLS/KeyPackages.hs @@ -55,6 +55,7 @@ uploadKeyPackages lusr cid (kpuKeyPackages -> kps) = do lift . wrapClient $ Data.insertKeyPackages (tUnqualified lusr) cid kps' claimKeyPackages :: + CallsFed 'Brig "claim-key-packages" => Local UserId -> Qualified UserId -> Maybe ClientId -> @@ -96,6 +97,7 @@ claimLocalKeyPackages qusr skipOwn target = do <$> wrapClientM (Data.claimKeyPackage target c) claimRemoteKeyPackages :: + CallsFed 'Brig "claim-key-packages" => Local UserId -> Remote UserId -> Handler r KeyPackageBundle diff --git a/services/brig/src/Brig/API/OAuth.hs b/services/brig/src/Brig/API/OAuth.hs index 9be87e6ec0..5d1daa714b 100644 --- a/services/brig/src/Brig/API/OAuth.hs +++ b/services/brig/src/Brig/API/OAuth.hs @@ -104,26 +104,38 @@ createNewOAuthAuthCode uid (NewOAuthAuthCode cid scope responseType redirectUrl createAccessToken :: (Member Now r, Member Jwk r) => OAuthAccessTokenRequest -> (Handler r) OAuthAccessTokenResponse createAccessToken req = do unlessM (Opt.setOAuthEnabled <$> view settings) $ throwStd $ errorToWai @'OAuthFeatureDisabled - (authCodeCid, authCodeUserId, authCodeScopes, authCodeRedirectUrl) <- + (cid, uid, scope, uri) <- lift (wrapClient $ lookupAndDeleteOAuthAuthCode (oatCode req)) >>= maybe (throwStd $ errorToWai @'OAuthAuthCodeNotFound) pure - oauthClient <- getOAuthClient authCodeUserId (oatClientId req) >>= maybe (throwStd $ errorToWai @'OAuthClientNotFound) pure + oauthClient <- getOAuthClient uid (oatClientId req) >>= maybe (throwStd $ errorToWai 
@'OAuthClientNotFound) pure unlessM (verifyClientSecret (oatClientSecret req) (ocId oauthClient)) $ throwStd $ errorToWai @'InvalidClientCredentials - unless (authCodeCid == oatClientId req) $ throwStd $ errorToWai @'InvalidClientCredentials + unless (cid == oatClientId req) $ throwStd $ errorToWai @'InvalidClientCredentials unless (ocRedirectUrl oauthClient == oatRedirectUri req) $ throwStd $ errorToWai @'RedirectUrlMissMatch - unless (authCodeRedirectUrl == oatRedirectUri req) $ throwStd $ errorToWai @'RedirectUrlMissMatch + unless (uri == oatRedirectUri req) $ throwStd $ errorToWai @'RedirectUrlMissMatch - domain <- Opt.setFederationDomain <$> view settings exp <- fromIntegral . Opt.setOAuthAccessTokenExpirationTimeSecs <$> view settings - claims <- mkClaims authCodeUserId domain authCodeScopes exp fp <- view settings >>= maybe (throwStd $ errorToWai @'JwtError) pure . Opt.setOAuthJwkKeyPair key <- lift (liftSem $ Jwk.get fp) >>= maybe (throwStd $ errorToWai @'JwtError) pure - token <- OAuthAccessToken <$> signJwtToken key claims - pure $ OAuthAccessTokenResponse token OAuthAccessTokenTypeBearer exp + accessToken <- mkAccessToken key uid scope + refreshToken <- mkRefreshToken key + pure $ OAuthAccessTokenResponse accessToken OAuthAccessTokenTypeBearer exp refreshToken where - mkClaims :: (Member Now r) => UserId -> Domain -> OAuthScopes -> NominalDiffTime -> (Handler r) OAuthClaimSet - mkClaims u domain scopes ttl = do + mkRefreshToken :: (Member Now r) => JWK -> (Handler r) OAuthRefreshToken + mkRefreshToken key = do + sub <- maybe (throwStd $ errorToWai @'JwtError) pure $ ("c5c126ce-58b3-4391-aa19-c70f8759b623" :: Text) ^? stringOrUri + let claims = emptyClaimsSet & claimSub ?~ sub + OAuthToken <$> signRefreshToken key claims + + mkAccessToken :: (Member Now r, Member Jwk r) => JWK -> UserId -> OAuthScopes -> (Handler r) OAuthAccessToken + mkAccessToken key uid scope = do + domain <- Opt.setFederationDomain <$> view settings + exp <- fromIntegral . 
Opt.setOAuthAccessTokenExpirationTimeSecs <$> view settings + claims <- mkAccessTokenClaims uid domain scope exp + OAuthToken <$> signAccessToken key claims + + mkAccessTokenClaims :: (Member Now r) => UserId -> Domain -> OAuthScopes -> NominalDiffTime -> (Handler r) OAuthClaimsSet + mkAccessTokenClaims u domain scopes ttl = do iat <- lift (liftSem Now.get) uri <- maybe (throwStd $ errorToWai @'JwtError) pure $ domainText domain ^? stringOrUri sub <- maybe (throwStd $ errorToWai @'JwtError) pure $ idToText u ^? stringOrUri @@ -135,10 +147,10 @@ createAccessToken req = do & claimIat ?~ NumericDate iat & claimSub ?~ sub & claimExp ?~ NumericDate exp - pure $ OAuthClaimSet claimSet scopes + pure $ OAuthClaimsSet claimSet scopes - signJwtToken :: JWK -> OAuthClaimSet -> (Handler r) SignedJWT - signJwtToken key claims = do + signAccessToken :: JWK -> OAuthClaimsSet -> (Handler r) SignedJWT + signAccessToken key claims = do jwtOrError <- liftIO $ doSignClaims either (const $ throwStd $ errorToWai @'JwtError) pure jwtOrError where @@ -147,6 +159,16 @@ createAccessToken req = do algo <- bestJWSAlg key signJWT key (newJWSHeader ((), algo)) claims + signRefreshToken :: JWK -> ClaimsSet -> (Handler r) SignedJWT + signRefreshToken key claims = do + jwtOrError <- liftIO $ doSignClaims + either (const $ throwStd $ errorToWai @'JwtError) pure jwtOrError + where + doSignClaims :: IO (Either JWTError SignedJWT) + doSignClaims = runJOSE $ do + algo <- bestJWSAlg key + signClaims key (newJWSHeader ((), algo)) claims + verifyClientSecret :: OAuthClientPlainTextSecret -> OAuthClientId -> (Handler r) Bool verifyClientSecret secret cid = do let plainTextPw = PlainTextPassword $ toText $ unOAuthClientPlainTextSecret secret diff --git a/services/brig/src/Brig/API/Public.hs b/services/brig/src/Brig/API/Public.hs index ee4f09a9aa..f5fa87ef63 100644 --- a/services/brig/src/Brig/API/Public.hs +++ b/services/brig/src/Brig/API/Public.hs @@ -113,6 +113,7 @@ import Util.Logging (logFunction,
logHandle, logTeam, logUser) import qualified Wire.API.Connection as Public import Wire.API.Error import qualified Wire.API.Error.Brig as E +import Wire.API.Federation.API import qualified Wire.API.Properties as Public import qualified Wire.API.Routes.MultiTablePaging as Public import Wire.API.Routes.Named (Named (Named)) @@ -122,6 +123,7 @@ import qualified Wire.API.Routes.Public.Cannon as CannonAPI import qualified Wire.API.Routes.Public.Cargohold as CargoholdAPI import qualified Wire.API.Routes.Public.Galley as GalleyAPI import qualified Wire.API.Routes.Public.Gundeck as GundeckAPI +import qualified Wire.API.Routes.Public.Proxy as ProxyAPI import qualified Wire.API.Routes.Public.Spar as SparAPI import qualified Wire.API.Routes.Public.Util as Public import Wire.API.Routes.Version @@ -160,6 +162,7 @@ versionedSwaggerDocsAPI (Just V3) = <> CargoholdAPI.swaggerDoc <> CannonAPI.swaggerDoc <> GundeckAPI.swaggerDoc + <> ProxyAPI.swaggerDoc ) & S.info . S.title .~ "Wire-Server API" & S.info . 
S.description ?~ $(embedText =<< makeRelativeToProject "docs/swagger.md") @@ -188,6 +191,7 @@ servantSitemap :: r => ServerT (BrigAPI :<|> OAuthAPI) (Handler r) servantSitemap = brigAPI :<|> oauthAPI +servantSitemap = where brigAPI :: ServerT BrigAPI (Handler r) brigAPI = @@ -206,37 +210,38 @@ servantSitemap = brigAPI :<|> oauthAPI :<|> callingAPI :<|> Team.servantAPI :<|> systemSettingsAPI + userAPI :: ServerT UserAPI (Handler r) userAPI = - Named @"get-user-unqualified" getUserUnqualifiedH - :<|> Named @"get-user-qualified" getUser + Named @"get-user-unqualified" (callsFed getUserUnqualifiedH) + :<|> Named @"get-user-qualified" (callsFed getUser) :<|> Named @"update-user-email" updateUserEmail - :<|> Named @"get-handle-info-unqualified" getHandleInfoUnqualifiedH - :<|> Named @"get-user-by-handle-qualified" Handle.getHandleInfo - :<|> Named @"list-users-by-unqualified-ids-or-handles" listUsersByUnqualifiedIdsOrHandles - :<|> Named @"list-users-by-ids-or-handles" listUsersByIdsOrHandles + :<|> Named @"get-handle-info-unqualified" (callsFed getHandleInfoUnqualifiedH) + :<|> Named @"get-user-by-handle-qualified" (callsFed Handle.getHandleInfo) + :<|> Named @"list-users-by-unqualified-ids-or-handles" (callsFed listUsersByUnqualifiedIdsOrHandles) + :<|> Named @"list-users-by-ids-or-handles" (callsFed listUsersByIdsOrHandles) :<|> Named @"send-verification-code" sendVerificationCode :<|> Named @"get-rich-info" getRichInfo selfAPI :: ServerT SelfAPI (Handler r) selfAPI = Named @"get-self" getSelf - :<|> Named @"delete-self" deleteSelfUser - :<|> Named @"put-self" updateUser + :<|> Named @"delete-self" (callsFed deleteSelfUser) + :<|> Named @"put-self" (callsFed updateUser) :<|> Named @"change-phone" changePhone - :<|> Named @"remove-phone" removePhone - :<|> Named @"remove-email" removeEmail + :<|> Named @"remove-phone" (callsFed removePhone) + :<|> Named @"remove-email" (callsFed removeEmail) :<|> Named @"check-password-exists" checkPasswordExists :<|> Named 
@"change-password" changePassword - :<|> Named @"change-locale" changeLocale - :<|> Named @"change-handle" changeHandle + :<|> Named @"change-locale" (callsFed changeLocale) + :<|> Named @"change-handle" (callsFed changeHandle) accountAPI :: ServerT AccountAPI (Handler r) accountAPI = - Named @"register" createUser - :<|> Named @"verify-delete" verifyDeleteUser - :<|> Named @"get-activate" activate - :<|> Named @"post-activate" activateKey + Named @"register" (callsFed createUser) + :<|> Named @"verify-delete" (callsFed verifyDeleteUser) + :<|> Named @"get-activate" (callsFed activate) + :<|> Named @"post-activate" (callsFed activateKey) :<|> Named @"post-activate-send" sendActivationCode :<|> Named @"post-password-reset" beginPasswordReset :<|> Named @"post-password-reset-complete" completePasswordReset @@ -245,26 +250,26 @@ servantSitemap = brigAPI :<|> oauthAPI clientAPI :: ServerT ClientAPI (Handler r) clientAPI = - Named @"get-user-clients-unqualified" getUserClientsUnqualified - :<|> Named @"get-user-clients-qualified" getUserClientsQualified - :<|> Named @"get-user-client-unqualified" getUserClientUnqualified - :<|> Named @"get-user-client-qualified" getUserClientQualified - :<|> Named @"list-clients-bulk" listClientsBulk - :<|> Named @"list-clients-bulk-v2" listClientsBulkV2 - :<|> Named @"list-clients-bulk@v2" listClientsBulkV2 + Named @"get-user-clients-unqualified" (callsFed getUserClientsUnqualified) + :<|> Named @"get-user-clients-qualified" (callsFed getUserClientsQualified) + :<|> Named @"get-user-client-unqualified" (callsFed getUserClientUnqualified) + :<|> Named @"get-user-client-qualified" (callsFed getUserClientQualified) + :<|> Named @"list-clients-bulk" (callsFed listClientsBulk) + :<|> Named @"list-clients-bulk-v2" (callsFed listClientsBulkV2) + :<|> Named @"list-clients-bulk@v2" (callsFed listClientsBulkV2) prekeyAPI :: ServerT PrekeyAPI (Handler r) prekeyAPI = - Named @"get-users-prekeys-client-unqualified" getPrekeyUnqualifiedH - :<|> 
Named @"get-users-prekeys-client-qualified" getPrekeyH - :<|> Named @"get-users-prekey-bundle-unqualified" getPrekeyBundleUnqualifiedH - :<|> Named @"get-users-prekey-bundle-qualified" getPrekeyBundleH + Named @"get-users-prekeys-client-unqualified" (callsFed getPrekeyUnqualifiedH) + :<|> Named @"get-users-prekeys-client-qualified" (callsFed getPrekeyH) + :<|> Named @"get-users-prekey-bundle-unqualified" (callsFed getPrekeyBundleUnqualifiedH) + :<|> Named @"get-users-prekey-bundle-qualified" (callsFed getPrekeyBundleH) :<|> Named @"get-multi-user-prekey-bundle-unqualified" getMultiUserPrekeyBundleUnqualifiedH - :<|> Named @"get-multi-user-prekey-bundle-qualified" getMultiUserPrekeyBundleH + :<|> Named @"get-multi-user-prekey-bundle-qualified" (callsFed getMultiUserPrekeyBundleH) userClientAPI :: ServerT UserClientAPI (Handler r) userClientAPI = - Named @"add-client" addClient + Named @"add-client" (callsFed addClient) :<|> Named @"update-client" updateClient :<|> Named @"delete-client" deleteClient :<|> Named @"list-clients" listClients @@ -277,15 +282,15 @@ servantSitemap = brigAPI :<|> oauthAPI connectionAPI :: ServerT ConnectionAPI (Handler r) connectionAPI = - Named @"create-connection-unqualified" createConnectionUnqualified - :<|> Named @"create-connection" createConnection + Named @"create-connection-unqualified" (callsFed createConnectionUnqualified) + :<|> Named @"create-connection" (callsFed createConnection) :<|> Named @"list-local-connections" listLocalConnections :<|> Named @"list-connections" listConnections :<|> Named @"get-connection-unqualified" getLocalConnection :<|> Named @"get-connection" getConnection - :<|> Named @"update-connection-unqualified" updateLocalConnection - :<|> Named @"update-connection" updateConnection - :<|> Named @"search-contacts" Search.search + :<|> Named @"update-connection-unqualified" (callsFed updateLocalConnection) + :<|> Named @"update-connection" (callsFed updateConnection) + :<|> Named @"search-contacts" (callsFed 
Search.search) propertiesAPI :: ServerT PropertiesAPI (Handler r) propertiesAPI = @@ -300,7 +305,7 @@ servantSitemap = brigAPI :<|> oauthAPI mlsAPI :: ServerT MLSAPI (Handler r) mlsAPI = Named @"mls-key-packages-upload" uploadKeyPackages - :<|> Named @"mls-key-packages-claim" claimKeyPackages + :<|> Named @"mls-key-packages-claim" (callsFed claimKeyPackages) :<|> Named @"mls-key-packages-count" countKeyPackages userHandleAPI :: ServerT UserHandleAPI (Handler r) @@ -314,9 +319,9 @@ servantSitemap = brigAPI :<|> oauthAPI authAPI :: ServerT AuthAPI (Handler r) authAPI = - Named @"access" accessH + Named @"access" (callsFed accessH) :<|> Named @"send-login-code" sendLoginCode - :<|> Named @"login" login + :<|> Named @"login" (callsFed login) :<|> Named @"logout" logoutH :<|> Named @"change-self-email" changeSelfEmailH :<|> Named @"list-cookies" listCookies @@ -432,22 +437,22 @@ listPropertyKeysAndValues u = do keysAndVals <- fmap Map.fromList . lift $ wrapClient (API.lookupPropertyKeysAndValues u) Public.PropertyKeysAndValues <$> traverse parseStoredPropertyValue keysAndVals -getPrekeyUnqualifiedH :: UserId -> UserId -> ClientId -> (Handler r) Public.ClientPrekey +getPrekeyUnqualifiedH :: (CallsFed 'Brig "claim-prekey") => UserId -> UserId -> ClientId -> (Handler r) Public.ClientPrekey getPrekeyUnqualifiedH zusr user client = do domain <- viewFederationDomain getPrekeyH zusr (Qualified user domain) client -getPrekeyH :: UserId -> Qualified UserId -> ClientId -> (Handler r) Public.ClientPrekey +getPrekeyH :: (CallsFed 'Brig "claim-prekey") => UserId -> Qualified UserId -> ClientId -> (Handler r) Public.ClientPrekey getPrekeyH zusr (Qualified user domain) client = do mPrekey <- API.claimPrekey (ProtectedUser zusr) user domain client !>> clientError ifNothing (notFound "prekey not found") mPrekey -getPrekeyBundleUnqualifiedH :: UserId -> UserId -> (Handler r) Public.PrekeyBundle +getPrekeyBundleUnqualifiedH :: (CallsFed 'Brig "claim-prekey-bundle") => UserId -> UserId -> 
(Handler r) Public.PrekeyBundle getPrekeyBundleUnqualifiedH zusr uid = do domain <- viewFederationDomain API.claimPrekeyBundle (ProtectedUser zusr) domain uid !>> clientError -getPrekeyBundleH :: UserId -> Qualified UserId -> (Handler r) Public.PrekeyBundle +getPrekeyBundleH :: (CallsFed 'Brig "claim-prekey-bundle") => UserId -> Qualified UserId -> (Handler r) Public.PrekeyBundle getPrekeyBundleH zusr (Qualified uid domain) = API.claimPrekeyBundle (ProtectedUser zusr) domain uid !>> clientError @@ -463,7 +468,7 @@ getMultiUserPrekeyBundleUnqualifiedH zusr userClients = do API.claimLocalMultiPrekeyBundles (ProtectedUser zusr) userClients !>> clientError getMultiUserPrekeyBundleH :: - Members '[Concurrency 'Unsafe] r => + (Members '[Concurrency 'Unsafe] r, CallsFed 'Brig "claim-multi-prekey-bundle") => UserId -> Public.QualifiedUserClients -> (Handler r) Public.QualifiedUserClientPrekeyMap @@ -478,10 +483,12 @@ getMultiUserPrekeyBundleH zusr qualUserClients = do API.claimMultiPrekeyBundles (ProtectedUser zusr) qualUserClients !>> clientError addClient :: - Members - '[ GalleyProvider - ] - r => + ( Members + '[ GalleyProvider + ] + r, + CallsFed 'Brig "on-user-deleted-connections" + ) => UserId -> ConnId -> Maybe IpAddr -> @@ -512,28 +519,28 @@ listClients zusr = getClient :: UserId -> ClientId -> (Handler r) (Maybe Public.Client) getClient zusr clientId = lift $ API.lookupLocalClient zusr clientId -getUserClientsUnqualified :: UserId -> (Handler r) [Public.PubClient] +getUserClientsUnqualified :: (CallsFed 'Brig "get-user-clients") => UserId -> (Handler r) [Public.PubClient] getUserClientsUnqualified uid = do localdomain <- viewFederationDomain API.lookupPubClients (Qualified uid localdomain) !>> clientError -getUserClientsQualified :: Qualified UserId -> (Handler r) [Public.PubClient] +getUserClientsQualified :: (CallsFed 'Brig "get-user-clients") => Qualified UserId -> (Handler r) [Public.PubClient] getUserClientsQualified quid = API.lookupPubClients quid !>> 
clientError -getUserClientUnqualified :: UserId -> ClientId -> (Handler r) Public.PubClient +getUserClientUnqualified :: (CallsFed 'Brig "get-user-clients") => UserId -> ClientId -> (Handler r) Public.PubClient getUserClientUnqualified uid cid = do localdomain <- viewFederationDomain x <- API.lookupPubClient (Qualified uid localdomain) cid !>> clientError ifNothing (notFound "client not found") x -listClientsBulk :: UserId -> Range 1 MaxUsersForListClientsBulk [Qualified UserId] -> (Handler r) (Public.QualifiedUserMap (Set Public.PubClient)) +listClientsBulk :: (CallsFed 'Brig "get-user-clients") => UserId -> Range 1 MaxUsersForListClientsBulk [Qualified UserId] -> (Handler r) (Public.QualifiedUserMap (Set Public.PubClient)) listClientsBulk _zusr limitedUids = API.lookupPubClientsBulk (fromRange limitedUids) !>> clientError -listClientsBulkV2 :: UserId -> Public.LimitedQualifiedUserIdList MaxUsersForListClientsBulk -> (Handler r) (Public.WrappedQualifiedUserMap (Set Public.PubClient)) +listClientsBulkV2 :: (CallsFed 'Brig "get-user-clients") => UserId -> Public.LimitedQualifiedUserIdList MaxUsersForListClientsBulk -> (Handler r) (Public.WrappedQualifiedUserMap (Set Public.PubClient)) listClientsBulkV2 zusr userIds = Public.Wrapped <$> listClientsBulk zusr (Public.qualifiedUsers userIds) -getUserClientQualified :: Qualified UserId -> ClientId -> (Handler r) Public.PubClient +getUserClientQualified :: (CallsFed 'Brig "get-user-clients") => Qualified UserId -> ClientId -> (Handler r) Public.PubClient getUserClientQualified quid cid = do x <- API.lookupPubClient quid cid !>> clientError ifNothing (notFound "client not found") x @@ -589,12 +596,14 @@ createAccessToken method uid cid proof = do -- | docs/reference/user/registration.md {#RefRegistration} createUser :: - Members - '[ BlacklistStore, - GalleyProvider, - UserPendingActivationStore p - ] - r => + ( Members + '[ BlacklistStore, + GalleyProvider, + UserPendingActivationStore p + ] + r, + CallsFed 'Brig 
"on-user-deleted-connections" + ) => Public.NewUserPublic -> (Handler r) (Either Public.RegisterError Public.RegisterSuccess) createUser (Public.NewUserPublic new) = lift . runExceptT $ do @@ -671,10 +680,12 @@ getSelf self = >>= ifNothing (errorToWai @'E.UserNotFound) getUserUnqualifiedH :: - Members - '[ GalleyProvider - ] - r => + ( Members + '[ GalleyProvider + ] + r, + CallsFed 'Brig "get-users-by-ids" + ) => UserId -> UserId -> (Handler r) (Maybe Public.UserProfile) @@ -683,10 +694,12 @@ getUserUnqualifiedH self uid = do getUser self (Qualified uid domain) getUser :: - Members - '[ GalleyProvider - ] - r => + ( Members + '[ GalleyProvider + ] + r, + CallsFed 'Brig "get-users-by-ids" + ) => UserId -> Qualified UserId -> (Handler r) (Maybe Public.UserProfile) @@ -696,11 +709,13 @@ getUser self qualifiedUserId = do -- FUTUREWORK: Make servant understand that at least one of these is required listUsersByUnqualifiedIdsOrHandles :: - Members - '[ GalleyProvider, - Concurrency 'Unsafe - ] - r => + ( Members + '[ GalleyProvider, + Concurrency 'Unsafe + ] + r, + CallsFed 'Brig "get-users-by-ids" + ) => UserId -> Maybe (CommaSeparatedList UserId) -> Maybe (Range 1 4 (CommaSeparatedList Handle)) -> @@ -722,11 +737,13 @@ listUsersByUnqualifiedIdsOrHandles self mUids mHandles = do listUsersByIdsOrHandles :: forall r. 
- Members - '[ GalleyProvider, - Concurrency 'Unsafe - ] - r => + ( Members + '[ GalleyProvider, + Concurrency 'Unsafe + ] + r, + CallsFed 'Brig "get-users-by-ids" + ) => UserId -> Public.ListUsersQuery -> (Handler r) [Public.UserProfile] @@ -757,7 +774,7 @@ newtype GetActivationCodeResp instance ToJSON GetActivationCodeResp where toJSON (GetActivationCodeResp (k, c)) = object ["key" .= k, "code" .= c] -updateUser :: UserId -> ConnId -> Public.UserUpdate -> (Handler r) (Maybe Public.UpdateProfileError) +updateUser :: (CallsFed 'Brig "on-user-deleted-connections") => UserId -> ConnId -> Public.UserUpdate -> (Handler r) (Maybe Public.UpdateProfileError) updateUser uid conn uu = do eithErr <- lift $ runExceptT $ API.updateUser uid (Just conn) uu API.ForbidSCIMUpdates pure $ either Just (const Nothing) eithErr @@ -778,11 +795,11 @@ changePhone u _ (Public.puPhone -> phone) = lift . exceptTToMaybe $ do let apair = (activationKey adata, activationCode adata) lift . wrapClient $ sendActivationSms pn apair loc -removePhone :: UserId -> ConnId -> (Handler r) (Maybe Public.RemoveIdentityError) +removePhone :: (CallsFed 'Brig "on-user-deleted-connections") => UserId -> ConnId -> (Handler r) (Maybe Public.RemoveIdentityError) removePhone self conn = lift . exceptTToMaybe $ API.removePhone self conn -removeEmail :: UserId -> ConnId -> (Handler r) (Maybe Public.RemoveIdentityError) +removeEmail :: (CallsFed 'Brig "on-user-deleted-connections") => UserId -> ConnId -> (Handler r) (Maybe Public.RemoveIdentityError) removeEmail self conn = lift . exceptTToMaybe $ API.removeEmail self conn @@ -792,7 +809,7 @@ checkPasswordExists = fmap isJust . lift . wrapClient . API.lookupPassword changePassword :: UserId -> Public.PasswordChange -> (Handler r) (Maybe Public.ChangePasswordError) changePassword u cp = lift . 
exceptTToMaybe $ API.changePassword u cp -changeLocale :: UserId -> ConnId -> Public.LocaleUpdate -> (Handler r) () +changeLocale :: (CallsFed 'Brig "on-user-deleted-connections") => UserId -> ConnId -> Public.LocaleUpdate -> (Handler r) () changeLocale u conn l = lift $ API.changeLocale u conn l -- | (zusr is ignored by this handler, ie. checking handles is allowed as long as you have @@ -816,10 +833,13 @@ checkHandles _ (Public.CheckHandles hs num) = do -- 'Handle.getHandleInfo') returns UserProfile to reduce traffic between backends -- in a federated scenario. getHandleInfoUnqualifiedH :: - Members - '[ GalleyProvider - ] - r => + ( Members + '[ GalleyProvider + ] + r, + CallsFed 'Brig "get-user-by-handle", + CallsFed 'Brig "get-users-by-ids" + ) => UserId -> Handle -> (Handler r) (Maybe Public.UserHandleInfo) @@ -828,7 +848,7 @@ getHandleInfoUnqualifiedH self handle = do Public.UserHandleInfo . Public.profileQualifiedId <$$> Handle.getHandleInfo self (Qualified handle domain) -changeHandle :: UserId -> ConnId -> Public.HandleUpdate -> (Handler r) (Maybe Public.ChangeHandleError) +changeHandle :: (CallsFed 'Brig "on-user-deleted-connections") => UserId -> ConnId -> Public.HandleUpdate -> (Handler r) (Maybe Public.ChangeHandleError) changeHandle u conn (Public.HandleUpdate h) = lift . 
exceptTToMaybe $ do handle <- maybe (throwError Public.ChangeHandleInvalid) pure $ parseHandle h API.changeHandle u (Just conn) handle API.ForbidSCIMUpdates @@ -885,10 +905,12 @@ customerExtensionCheckBlockedDomains email = do customerExtensionBlockedDomain domain createConnectionUnqualified :: - Members - '[ GalleyProvider - ] - r => + ( Members + '[ GalleyProvider + ] + r, + CallsFed 'Brig "send-connection-action" + ) => UserId -> ConnId -> Public.ConnectionRequest -> @@ -899,10 +921,12 @@ createConnectionUnqualified self conn cr = do API.createConnection lself conn (tUntagged target) !>> connError createConnection :: - Members - '[ GalleyProvider - ] - r => + ( Members + '[ GalleyProvider + ] + r, + CallsFed 'Brig "send-connection-action" + ) => UserId -> ConnId -> Qualified UserId -> @@ -911,12 +935,12 @@ createConnection self conn target = do lself <- qualifyLocal self API.createConnection lself conn target !>> connError -updateLocalConnection :: UserId -> ConnId -> UserId -> Public.ConnectionUpdate -> (Handler r) (Public.UpdateResult Public.UserConnection) +updateLocalConnection :: (CallsFed 'Brig "send-connection-action") => UserId -> ConnId -> UserId -> Public.ConnectionUpdate -> (Handler r) (Public.UpdateResult Public.UserConnection) updateLocalConnection self conn other update = do lother <- qualifyLocal other updateConnection self conn (tUntagged lother) update -updateConnection :: UserId -> ConnId -> Qualified UserId -> Public.ConnectionUpdate -> (Handler r) (Public.UpdateResult Public.UserConnection) +updateConnection :: (CallsFed 'Brig "send-connection-action") => UserId -> ConnId -> Qualified UserId -> Public.ConnectionUpdate -> (Handler r) (Public.UpdateResult Public.UserConnection) updateConnection self conn other update = do let newStatus = Public.cuStatus update lself <- qualifyLocal self @@ -982,17 +1006,19 @@ getConnection self other = do lift . 
wrapClient $ Data.lookupConnection lself other deleteSelfUser :: - Members - '[ GalleyProvider - ] - r => + ( Members + '[ GalleyProvider + ] + r, + CallsFed 'Brig "on-user-deleted-connections" + ) => UserId -> Public.DeleteUser -> (Handler r) (Maybe Code.Timeout) deleteSelfUser u body = API.deleteSelfUser u (Public.deleteUserPassword body) !>> deleteUserError -verifyDeleteUser :: Public.VerifyDeleteUser -> Handler r () +verifyDeleteUser :: (CallsFed 'Brig "on-user-deleted-connections") => Public.VerifyDeleteUser -> Handler r () verifyDeleteUser body = API.verifyDeleteUser body !>> deleteUserError updateUserEmail :: @@ -1029,10 +1055,12 @@ updateUserEmail zuserId emailOwnerId (Public.EmailUpdate email) = do -- activation activate :: - Members - '[ GalleyProvider - ] - r => + ( Members + '[ GalleyProvider + ] + r, + CallsFed 'Brig "on-user-deleted-connections" + ) => Public.ActivationKey -> Public.ActivationCode -> (Handler r) ActivationRespWithStatus @@ -1042,10 +1070,12 @@ activate k c = do -- docs/reference/user/activation.md {#RefActivationSubmit} activateKey :: - Members - '[ GalleyProvider - ] - r => + ( Members + '[ GalleyProvider + ] + r, + CallsFed 'Brig "on-user-deleted-connections" + ) => Public.Activate -> (Handler r) ActivationRespWithStatus activateKey (Public.Activate tgt code dryrun) diff --git a/services/brig/src/Brig/API/User.hs b/services/brig/src/Brig/API/User.hs index dc65ae341d..a58f31f713 100644 --- a/services/brig/src/Brig/API/User.hs +++ b/services/brig/src/Brig/API/User.hs @@ -172,6 +172,7 @@ import UnliftIO.Async import Wire.API.Connection import Wire.API.Error import qualified Wire.API.Error.Brig as E +import Wire.API.Federation.API import Wire.API.Federation.Error import Wire.API.Routes.Internal.Brig.Connection import Wire.API.Team hiding (newTeam) @@ -227,10 +228,12 @@ verifyUniquenessAndCheckBlacklist uk = do createUserSpar :: forall r. 
- Members - '[ GalleyProvider - ] - r => + ( Members + '[ GalleyProvider + ] + r, + CallsFed 'Brig "on-user-deleted-connections" + ) => NewUserSpar -> ExceptT CreateUserSparError (AppT r) CreateUserResult createUserSpar new = do @@ -293,12 +296,14 @@ createUserSpar new = do -- docs/reference/user/registration.md {#RefRegistration} createUser :: forall r p. - Members - '[ BlacklistStore, - GalleyProvider, - UserPendingActivationStore p - ] - r => + ( Members + '[ BlacklistStore, + GalleyProvider, + UserPendingActivationStore p + ] + r, + CallsFed 'Brig "on-user-deleted-connections" + ) => NewUser -> ExceptT RegisterError (AppT r) CreateUserResult createUser new = do @@ -582,7 +587,7 @@ checkRestrictedUserCreation new = do ------------------------------------------------------------------------------- -- Update Profile -updateUser :: UserId -> Maybe ConnId -> UserUpdate -> AllowSCIMUpdates -> ExceptT UpdateProfileError (AppT r) () +updateUser :: CallsFed 'Brig "on-user-deleted-connections" => UserId -> Maybe ConnId -> UserUpdate -> AllowSCIMUpdates -> ExceptT UpdateProfileError (AppT r) () updateUser uid mconn uu allowScim = do for_ (uupName uu) $ \newName -> do mbUser <- lift . 
wrapClient $ Data.lookupUser WithPendingInvitations uid @@ -600,7 +605,7 @@ updateUser uid mconn uu allowScim = do ------------------------------------------------------------------------------- -- Update Locale -changeLocale :: UserId -> ConnId -> LocaleUpdate -> (AppT r) () +changeLocale :: CallsFed 'Brig "on-user-deleted-connections" => UserId -> ConnId -> LocaleUpdate -> (AppT r) () changeLocale uid conn (LocaleUpdate loc) = do wrapClient $ Data.updateLocale uid loc wrapHttpClient $ Intra.onUserEvent uid (Just conn) (localeUpdate uid loc) @@ -608,7 +613,7 @@ changeLocale uid conn (LocaleUpdate loc) = do ------------------------------------------------------------------------------- -- Update ManagedBy -changeManagedBy :: UserId -> ConnId -> ManagedByUpdate -> (AppT r) () +changeManagedBy :: CallsFed 'Brig "on-user-deleted-connections" => UserId -> ConnId -> ManagedByUpdate -> (AppT r) () changeManagedBy uid conn (ManagedByUpdate mb) = do wrapClient $ Data.updateManagedBy uid mb wrapHttpClient $ Intra.onUserEvent uid (Just conn) (managedByUpdate uid mb) @@ -616,7 +621,7 @@ changeManagedBy uid conn (ManagedByUpdate mb) = do -------------------------------------------------------------------------------- -- Change Handle -changeHandle :: UserId -> Maybe ConnId -> Handle -> AllowSCIMUpdates -> ExceptT ChangeHandleError (AppT r) () +changeHandle :: CallsFed 'Brig "on-user-deleted-connections" => UserId -> Maybe ConnId -> Handle -> AllowSCIMUpdates -> ExceptT ChangeHandleError (AppT r) () changeHandle uid mconn hdl allowScim = do when (isBlacklistedHandle hdl) $ throwE ChangeHandleInvalid @@ -774,7 +779,7 @@ changePhone u phone = do ------------------------------------------------------------------------------- -- Remove Email -removeEmail :: UserId -> ConnId -> ExceptT RemoveIdentityError (AppT r) () +removeEmail :: CallsFed 'Brig "on-user-deleted-connections" => UserId -> ConnId -> ExceptT RemoveIdentityError (AppT r) () removeEmail uid conn = do ident <- lift $ 
fetchUserIdentity uid case ident of @@ -788,7 +793,7 @@ removeEmail uid conn = do ------------------------------------------------------------------------------- -- Remove Phone -removePhone :: UserId -> ConnId -> ExceptT RemoveIdentityError (AppT r) () +removePhone :: CallsFed 'Brig "on-user-deleted-connections" => UserId -> ConnId -> ExceptT RemoveIdentityError (AppT r) () removePhone uid conn = do ident <- lift $ fetchUserIdentity uid case ident of @@ -806,7 +811,7 @@ removePhone uid conn = do ------------------------------------------------------------------------------- -- Forcefully revoke a verified identity -revokeIdentity :: Either Email Phone -> AppT r () +revokeIdentity :: CallsFed 'Brig "on-user-deleted-connections" => Either Email Phone -> AppT r () revokeIdentity key = do let uk = either userEmailKey userPhoneKey key mu <- wrapClient $ Data.lookupKey uk @@ -850,7 +855,8 @@ changeAccountStatus :: MonadMask m, MonadHttp m, HasRequestId m, - MonadUnliftIO m + MonadUnliftIO m, + CallsFed 'Brig "on-user-deleted-connections" ) => List1 UserId -> AccountStatus -> @@ -876,7 +882,8 @@ changeSingleAccountStatus :: MonadMask m, MonadHttp m, HasRequestId m, - MonadUnliftIO m + MonadUnliftIO m, + CallsFed 'Brig "on-user-deleted-connections" ) => UserId -> AccountStatus -> @@ -901,7 +908,7 @@ mkUserEvent usrs status = -- Activation activate :: - Members '[GalleyProvider] r => + (Members '[GalleyProvider] r, CallsFed 'Brig "on-user-deleted-connections") => ActivationTarget -> ActivationCode -> -- | The user for whom to activate the key. @@ -910,7 +917,7 @@ activate :: activate tgt code usr = activateWithCurrency tgt code usr Nothing activateWithCurrency :: - Members '[GalleyProvider] r => + (Members '[GalleyProvider] r, CallsFed 'Brig "on-user-deleted-connections") => ActivationTarget -> ActivationCode -> -- | The user for whom to activate the key. 
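The recurring `CallsFed 'Brig "..."` constraints added throughout these hunks thread compile-time evidence through the call graph that a handler is allowed to perform the named federated RPC. Below is a minimal sketch of the pattern, using hypothetical stand-ins for wire-server's `Component` and `CallsFed` (the real definitions live in `Wire.API.Federation.API` and differ in detail):

```haskell
{-# LANGUAGE DataKinds #-}
{-# LANGUAGE FlexibleContexts #-}
{-# LANGUAGE FlexibleInstances #-}
{-# LANGUAGE KindSignatures #-}
{-# LANGUAGE MultiParamTypeClasses #-}

import GHC.TypeLits (Symbol)

-- Hypothetical stand-in for wire-server's Component kind.
data Component = Brig | Galley

-- An empty class whose only job is to appear in constraints: holding
-- a 'CallsFed' constraint is evidence that the named federated RPC
-- may be performed.
class CallsFed (comp :: Component) (rpc :: Symbol)

-- The function that actually makes the remote call demands the
-- constraint ...
claimPrekey :: CallsFed 'Brig "claim-prekey" => String -> IO String
claimPrekey user = pure ("prekey for " <> user)

-- ... and every caller must re-state it, so the requirement bubbles
-- up to the route table, which then documents all federated calls.
getPrekeyH :: CallsFed 'Brig "claim-prekey" => String -> IO String
getPrekeyH = claimPrekey

-- At the edges (compare unsafeCallsFed in Brig.Run below) the
-- constraint is discharged explicitly; a global instance plays that
-- role in this sketch.
instance CallsFed 'Brig "claim-prekey"
```

Because the class has no methods, the constraint is free at runtime; its value is that adding or removing a federation call site forces the constraint to change in every signature on the path to the route table.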
@@ -941,7 +948,8 @@ activateWithCurrency tgt code usr cur = do preverify :: ( MonadClient m, - MonadReader Env m + MonadReader Env m, + CallsFed 'Brig "on-user-deleted-connections" ) => ActivationTarget -> ActivationCode -> @@ -950,7 +958,7 @@ preverify tgt code = do key <- mkActivationKey tgt void $ Data.verifyCode key code -onActivated :: ActivationEvent -> (AppT r) (UserId, Maybe UserIdentity, Bool) +onActivated :: CallsFed 'Brig "on-user-deleted-connections" => ActivationEvent -> (AppT r) (UserId, Maybe UserIdentity, Bool) onActivated (AccountActivated account) = do let uid = userId (accountUser account) Log.debug $ field "user" (toByteString uid) . field "action" (Log.val "User.onActivated") @@ -1167,10 +1175,12 @@ mkPasswordResetKey ident = case ident of -- TODO: communicate deletions of SSO users to SSO service. deleteSelfUser :: forall r. - Members - '[ GalleyProvider - ] - r => + ( Members + '[ GalleyProvider + ] + r, + CallsFed 'Brig "on-user-deleted-connections" + ) => UserId -> Maybe PlainTextPassword -> ExceptT DeleteUserError (AppT r) (Maybe Timeout) @@ -1246,7 +1256,7 @@ deleteSelfUser uid pwd = do -- | Conclude validation and scheduling of user's deletion request that was initiated in -- 'deleteUser'. Called via @post /delete@. 
-verifyDeleteUser :: VerifyDeleteUser -> ExceptT DeleteUserError (AppT r) () +verifyDeleteUser :: CallsFed 'Brig "on-user-deleted-connections" => VerifyDeleteUser -> ExceptT DeleteUserError (AppT r) () verifyDeleteUser d = do let key = verifyDeleteUserKey d let code = verifyDeleteUserCode d @@ -1270,7 +1280,8 @@ ensureAccountDeleted :: HasRequestId m, MonadUnliftIO m, MonadClient m, - MonadReader Env m + MonadReader Env m, + CallsFed 'Brig "on-user-deleted-connections" ) => UserId -> m DeleteUserResult @@ -1315,7 +1326,8 @@ deleteAccount :: MonadHttp m, HasRequestId m, MonadUnliftIO m, - MonadClient m + MonadClient m, + CallsFed 'Brig "on-user-deleted-connections" ) => UserAccount -> m () @@ -1422,7 +1434,7 @@ userGC u = case userExpire u of pure u lookupProfile :: - Members '[GalleyProvider] r => + (Members '[GalleyProvider] r, CallsFed 'Brig "get-users-by-ids") => Local UserId -> Qualified UserId -> ExceptT FederationError (AppT r) (Maybe UserProfile) @@ -1438,11 +1450,13 @@ lookupProfile self other = -- Otherwise only the 'PublicProfile' is accessible for user 'self'. -- If 'self' is an unknown 'UserId', return '[]'. lookupProfiles :: - Members - '[ GalleyProvider, - Concurrency 'Unsafe - ] - r => + ( Members + '[ GalleyProvider, + Concurrency 'Unsafe + ] + r, + CallsFed 'Brig "get-users-by-ids" + ) => -- | User 'self' on whose behalf the profiles are requested. Local UserId -> -- | The users ('others') for which to obtain the profiles. 
@@ -1455,7 +1469,7 @@ lookupProfiles self others = (bucketQualified others) lookupProfilesFromDomain :: - Members '[GalleyProvider] r => + (Members '[GalleyProvider] r, CallsFed 'Brig "get-users-by-ids") => Local UserId -> Qualified [UserId] -> ExceptT FederationError (AppT r) [UserProfile] @@ -1468,7 +1482,8 @@ lookupProfilesFromDomain self = lookupRemoteProfiles :: ( MonadIO m, MonadReader Env m, - MonadLogger m + MonadLogger m, + CallsFed 'Brig "get-users-by-ids" ) => Remote [UserId] -> ExceptT FederationError m [UserProfile] diff --git a/services/brig/src/Brig/AWS.hs b/services/brig/src/Brig/AWS.hs index 1f9a88dbe1..1faee37564 100644 --- a/services/brig/src/Brig/AWS.hs +++ b/services/brig/src/Brig/AWS.hs @@ -47,6 +47,7 @@ where import Amazonka (AWSRequest, AWSResponse) import qualified Amazonka as AWS +import qualified Amazonka.Data.Text as AWS import qualified Amazonka.DynamoDB as DDB import qualified Amazonka.SES as SES import qualified Amazonka.SES.Lens as SES @@ -122,13 +123,13 @@ mkEnv lgr opts emailOpts mgr = do mkAwsEnv g ses dyn sqs = do baseEnv <- AWS.newEnv AWS.discover - <&> maybe id AWS.configure ses - <&> maybe id AWS.configure dyn - <&> AWS.configure sqs + <&> maybe id AWS.configureService ses + <&> maybe id AWS.configureService dyn + <&> AWS.configureService sqs pure $ baseEnv - { AWS.envLogger = awsLogger g, - AWS.envManager = mgr + { AWS.logger = awsLogger g, + AWS.manager = mgr } awsLogger g l = Logger.log g (mapLevel l) . Logger.msg . toLazyByteString mapLevel AWS.Info = Logger.Info @@ -226,10 +227,10 @@ sendMail m = do -- after the fact. AWS.ServiceError se | se - ^. AWS.serviceStatus + ^. AWS.serviceError_status == status400 && "Invalid domain name" - `Text.isPrefixOf` AWS.toText (se ^. AWS.serviceCode) -> + `Text.isPrefixOf` AWS.toText (se ^. 
AWS.serviceError_code) -> throwM SESInvalidDomain _ -> throwM (GeneralError x) @@ -268,7 +269,7 @@ canRetry :: MonadIO m => Either AWS.Error a -> m Bool canRetry (Right _) = pure False canRetry (Left e) = case e of AWS.TransportError (HttpExceptionRequest _ ResponseTimeout) -> pure True - AWS.ServiceError se | se ^. AWS.serviceCode == AWS.ErrorCode "RequestThrottled" -> pure True + AWS.ServiceError se | se ^. AWS.serviceError_code == AWS.ErrorCode "RequestThrottled" -> pure True _ -> pure False retry5x :: (Monad m) => RetryPolicyM m diff --git a/services/brig/src/Brig/Code.hs b/services/brig/src/Brig/Code.hs index 695ba1952c..42ca782a80 100644 --- a/services/brig/src/Brig/Code.hs +++ b/services/brig/src/Brig/Code.hs @@ -332,7 +332,7 @@ verify :: MonadClient m => Key -> Scope -> Value -> m (Maybe Code) verify k s v = lookup k s >>= maybe (pure Nothing) continue where continue c - | codeValue c == v = pure (Just c) + | codeValue c == v && codeRetries c > 0 = pure (Just c) | codeRetries c > 0 = do insertInternal (c {codeRetries = codeRetries c - 1}) pure Nothing diff --git a/services/brig/src/Brig/Data/Client.hs b/services/brig/src/Brig/Data/Client.hs index 7d4587a164..5e532d974e 100644 --- a/services/brig/src/Brig/Data/Client.hs +++ b/services/brig/src/Brig/Data/Client.hs @@ -49,6 +49,7 @@ module Brig.Data.Client where import qualified Amazonka as AWS +import qualified Amazonka.Data.Text as AWS import qualified Amazonka.DynamoDB as AWS import qualified Amazonka.DynamoDB.Lens as AWS import Bilge.Retry (httpHandlers) @@ -567,7 +568,7 @@ withOptLock u c ma = go (10 :: Int) run = execCatch e cmd >>= either handleErr (pure . conv) handlers = httpHandlers ++ [const $ EL.handler_ AWS._ConditionalCheckFailedException (pure True)] policy = limitRetries 3 <> exponentialBackoff 100000 - handleErr (AWS.ServiceError se) | se ^. AWS.serviceCode == AWS.ErrorCode "ProvisionedThroughputExceeded" = do + handleErr (AWS.ServiceError se) | se ^. 
AWS.serviceError_code == AWS.ErrorCode "ProvisionedThroughputExceeded" = do Metrics.counterIncr (Metrics.path "client.opt_lock.provisioned_throughput_exceeded") m pure Nothing handleErr _ = pure Nothing diff --git a/services/brig/src/Brig/Federation/Client.hs b/services/brig/src/Brig/Federation/Client.hs index 1b38057912..37eb4924ba 100644 --- a/services/brig/src/Brig/Federation/Client.hs +++ b/services/brig/src/Brig/Federation/Client.hs @@ -47,6 +47,7 @@ import Wire.API.UserMap getUserHandleInfo :: ( MonadReader Env m, MonadIO m, + CallsFed 'Brig "get-user-by-handle", Log.MonadLogger m ) => Remote Handle -> @@ -58,6 +59,7 @@ getUserHandleInfo (tUntagged -> Qualified handle domain) = do getUsersByIds :: ( MonadReader Env m, MonadIO m, + CallsFed 'Brig "get-users-by-ids", Log.MonadLogger m ) => Domain -> @@ -68,7 +70,7 @@ getUsersByIds domain uids = do runBrigFederatorClient domain $ fedClient @'Brig @"get-users-by-ids" uids claimPrekey :: - (MonadReader Env m, MonadIO m, Log.MonadLogger m) => + (MonadReader Env m, MonadIO m, Log.MonadLogger m, CallsFed 'Brig "claim-prekey") => Qualified UserId -> ClientId -> ExceptT FederationError m (Maybe ClientPrekey) @@ -79,6 +81,7 @@ claimPrekey (Qualified user domain) client = do claimPrekeyBundle :: ( MonadReader Env m, MonadIO m, + CallsFed 'Brig "claim-prekey-bundle", Log.MonadLogger m ) => Qualified UserId -> @@ -90,7 +93,8 @@ claimPrekeyBundle (Qualified user domain) = do claimMultiPrekeyBundle :: ( Log.MonadLogger m, MonadReader Env m, - MonadIO m + MonadIO m, + CallsFed 'Brig "claim-multi-prekey-bundle" ) => Domain -> UserClients -> @@ -102,7 +106,8 @@ claimMultiPrekeyBundle domain uc = do searchUsers :: ( MonadReader Env m, MonadIO m, - Log.MonadLogger m + Log.MonadLogger m, + CallsFed 'Brig "search-users" ) => Domain -> SearchRequest -> @@ -114,7 +119,8 @@ searchUsers domain searchTerm = do getUserClients :: ( MonadReader Env m, MonadIO m, - Log.MonadLogger m + Log.MonadLogger m, + CallsFed 'Brig "get-user-clients" ) 
=> Domain -> GetUserClients -> @@ -124,7 +130,7 @@ getUserClients domain guc = do runBrigFederatorClient domain $ fedClient @'Brig @"get-user-clients" guc sendConnectionAction :: - (MonadReader Env m, MonadIO m, Log.MonadLogger m) => + (MonadReader Env m, MonadIO m, Log.MonadLogger m, CallsFed 'Brig "send-connection-action") => Local UserId -> Remote UserId -> RemoteConnectionAction -> diff --git a/services/brig/src/Brig/IO/Intra.hs b/services/brig/src/Brig/IO/Intra.hs index 3dae8bbbe1..922ef8b67d 100644 --- a/services/brig/src/Brig/IO/Intra.hs +++ b/services/brig/src/Brig/IO/Intra.hs @@ -98,6 +98,7 @@ import qualified System.Logger.Extended as ExLog import Wire.API.Connection import Wire.API.Conversation import Wire.API.Event.Conversation (Connect (Connect)) +import Wire.API.Federation.API import Wire.API.Federation.API.Brig import Wire.API.Federation.Error import Wire.API.Properties @@ -117,7 +118,8 @@ onUserEvent :: MonadHttp m, HasRequestId m, MonadUnliftIO m, - MonadClient m + MonadClient m, + CallsFed 'Brig "on-user-deleted-connections" ) => UserId -> Maybe ConnId -> @@ -249,7 +251,8 @@ dispatchNotifications :: MonadHttp m, HasRequestId m, MonadUnliftIO m, - MonadClient m + MonadClient m, + CallsFed 'Brig "on-user-deleted-connections" ) => UserId -> Maybe ConnId -> @@ -285,6 +288,7 @@ notifyUserDeletionLocals :: MonadHttp m, HasRequestId m, MonadUnliftIO m, + CallsFed 'Brig "on-user-deleted-connections", MonadClient m ) => UserId -> @@ -299,7 +303,8 @@ notifyUserDeletionRemotes :: forall m. 
( MonadReader Env m, MonadClient m, - MonadLogger m + MonadLogger m, + CallsFed 'Brig "on-user-deleted-connections" ) => UserId -> m () diff --git a/services/brig/src/Brig/InternalEvent/Process.hs b/services/brig/src/Brig/InternalEvent/Process.hs index 7a05784e40..31bbf7076a 100644 --- a/services/brig/src/Brig/InternalEvent/Process.hs +++ b/services/brig/src/Brig/InternalEvent/Process.hs @@ -39,6 +39,7 @@ import Imports import System.Logger.Class (field, msg, val, (~~)) import qualified System.Logger.Class as Log import UnliftIO (timeout) +import Wire.API.Federation.API -- | Handle an internal event. -- @@ -52,7 +53,8 @@ onEvent :: MonadHttp m, HasRequestId m, MonadUnliftIO m, - MonadClient m + MonadClient m, + CallsFed 'Brig "on-user-deleted-connections" ) => InternalNotification -> m () diff --git a/services/brig/src/Brig/Options.hs b/services/brig/src/Brig/Options.hs index 0cceca668e..f13468db08 100644 --- a/services/brig/src/Brig/Options.hs +++ b/services/brig/src/Brig/Options.hs @@ -52,6 +52,7 @@ import Imports import qualified Network.DNS as DNS import System.Logger.Extended (Level, LogFormat) import Util.Options +import Wire.API.Routes.Version import qualified Wire.API.Team.Feature as Public import Wire.API.User import Wire.API.User.Search (FederatedUserSearchPolicy) @@ -587,8 +588,10 @@ data Settings = Settings setSftListAllServers :: Maybe ListAllSFTServers, setEnableMLS :: Maybe Bool, setKeyPackageMaximumLifetime :: Maybe NominalDiffTime, - -- | When set, development API versions are advertised to clients. + -- | When set, development API versions are advertised to clients as supported. setEnableDevelopmentVersions :: Maybe Bool, + -- | Disabled versions are not advertised and are completely disabled. + setDisabledAPIVersions :: Maybe (Set Version), -- | Minimum delay in seconds between consecutive attempts to generate a new verification code. 
-- use `set2FACodeGenerationDelaySecs` as the getter function which always provides a default value set2FACodeGenerationDelaySecsInternal :: !(Maybe Int), @@ -895,6 +898,7 @@ Lens.makeLensesFor ("setOAuthEnabledInternal", "oauthEnabledInternal"), ("setOAuthAuthCodeExpirationTimeSecsInternal", "oauthAuthCodeExpirationTimeSecsInternal"), ("setOAuthAccessTokenExpirationTimeSecsInternal", "oauthAccessTokenExpirationTimeSecsInternal"), + ("setDisabledAPIVersions", "disabledAPIVersions") ] ''Settings diff --git a/services/brig/src/Brig/Run.hs index 9eba2ffd62..156e1e4036 100644 --- a/services/brig/src/Brig/Run.hs +++ b/services/brig/src/Brig/Run.hs @@ -76,6 +76,7 @@ import qualified Servant import System.Logger (msg, val, (.=), (~~)) import System.Logger.Class (MonadLogger, err) import Util.Options +import Wire.API.Federation.API import Wire.API.Routes.API import Wire.API.Routes.Internal.Brig.OAuth import Wire.API.Routes.Public.Brig @@ -96,7 +97,8 @@ run o = do Async.async $ runBrigToIO e $ wrapHttpClient $ - Queue.listen (e ^. internalEvents) Internal.onEvent + Queue.listen (e ^. internalEvents) $ + unsafeCallsFed @'Brig @"on-user-deleted-connections" Internal.onEvent let throttleMillis = fromMaybe defSqsThrottleMillis $ setSqsThrottleMillis (optSettings o) emailListener <- for (e ^. awsEnv . sesQueue) $ \q -> Async.async $ @@ -130,7 +132,8 @@ mkApp o = do middleware :: Env -> (RequestId -> Wai.Application) -> Wai.Application middleware e = - versionMiddleware -- this rewrites the request, so it must be at the top (i.e. applied last) + -- this rewrites the request, so it must be at the top (i.e. applied last) + versionMiddleware (fold (setDisabledAPIVersions (optSettings o))) . Metrics.servantPlusWAIPrometheusMiddleware (sitemap @BrigCanonicalEffects) (Proxy @ServantCombinedAPI) . GZip.gunzip .
GZip.gzip GZip.def diff --git a/services/brig/src/Brig/Team/API.hs b/services/brig/src/Brig/Team/API.hs index d5814a1116..c8e2776bd3 100644 --- a/services/brig/src/Brig/Team/API.hs +++ b/services/brig/src/Brig/Team/API.hs @@ -67,6 +67,7 @@ import qualified System.Logger.Class as Log import Util.Logging (logFunction, logTeam) import Wire.API.Error import qualified Wire.API.Error.Brig as E +import Wire.API.Federation.API import Wire.API.Routes.Named import Wire.API.Routes.Public.Brig import Wire.API.Team @@ -97,12 +98,14 @@ servantAPI = :<|> Named @"get-team-size" teamSizePublic routesInternal :: - Members - '[ BlacklistStore, - GalleyProvider, - UserPendingActivationStore p - ] - r => + ( Members + '[ BlacklistStore, + GalleyProvider, + UserPendingActivationStore p + ] + r, + CallsFed 'Brig "on-user-deleted-connections" + ) => Routes a (Handler r) () routesInternal = do get "/i/teams/invitations/by-email" (continue getInvitationByEmailH) $ @@ -377,25 +380,25 @@ getInvitationByEmail email = do inv <- lift $ wrapClient $ DB.lookupInvitationByEmail HideInvitationUrl email maybe (throwStd (notFound "Invitation not found")) pure inv -suspendTeamH :: Members '[GalleyProvider] r => JSON ::: TeamId -> (Handler r) Response +suspendTeamH :: (Members '[GalleyProvider] r, CallsFed 'Brig "on-user-deleted-connections") => JSON ::: TeamId -> (Handler r) Response suspendTeamH (_ ::: tid) = do empty <$ suspendTeam tid -suspendTeam :: Members '[GalleyProvider] r => TeamId -> (Handler r) () +suspendTeam :: (Members '[GalleyProvider] r, CallsFed 'Brig "on-user-deleted-connections") => TeamId -> (Handler r) () suspendTeam tid = do changeTeamAccountStatuses tid Suspended lift $ wrapClient $ DB.deleteInvitations tid lift $ liftSem $ GalleyProvider.changeTeamStatus tid Team.Suspended Nothing unsuspendTeamH :: - Members '[GalleyProvider] r => + (Members '[GalleyProvider] r, CallsFed 'Brig "on-user-deleted-connections") => JSON ::: TeamId -> (Handler r) Response unsuspendTeamH (_ ::: tid) = 
do empty <$ unsuspendTeam tid unsuspendTeam :: - Members '[GalleyProvider] r => + (Members '[GalleyProvider] r, CallsFed 'Brig "on-user-deleted-connections") => TeamId -> (Handler r) () unsuspendTeam tid = do @@ -406,7 +409,7 @@ unsuspendTeam tid = do -- Internal changeTeamAccountStatuses :: - Members '[GalleyProvider] r => + (Members '[GalleyProvider] r, CallsFed 'Brig "on-user-deleted-connections") => TeamId -> AccountStatus -> (Handler r) () diff --git a/services/brig/src/Brig/User/API/Handle.hs b/services/brig/src/Brig/User/API/Handle.hs index fb3d49c4f1..7d3c37e878 100644 --- a/services/brig/src/Brig/User/API/Handle.hs +++ b/services/brig/src/Brig/User/API/Handle.hs @@ -39,13 +39,14 @@ import Imports import Network.Wai.Utilities ((!>>)) import Polysemy import qualified System.Logger.Class as Log +import Wire.API.Federation.API import Wire.API.User import qualified Wire.API.User as Public import Wire.API.User.Search import qualified Wire.API.User.Search as Public getHandleInfo :: - Members '[GalleyProvider] r => + (Members '[GalleyProvider] r, CallsFed 'Brig "get-user-by-handle", CallsFed 'Brig "get-users-by-ids") => UserId -> Qualified Handle -> (Handler r) (Maybe Public.UserProfile) @@ -57,7 +58,7 @@ getHandleInfo self handle = do getRemoteHandleInfo handle -getRemoteHandleInfo :: Remote Handle -> (Handler r) (Maybe Public.UserProfile) +getRemoteHandleInfo :: CallsFed 'Brig "get-user-by-handle" => Remote Handle -> (Handler r) (Maybe Public.UserProfile) getRemoteHandleInfo handle = do lift . 
Log.info $ Log.msg (Log.val "getHandleInfo - remote lookup") @@ -65,7 +66,7 @@ getRemoteHandleInfo handle = do Federation.getUserHandleInfo handle !>> fedError getLocalHandleInfo :: - Members '[GalleyProvider] r => + (Members '[GalleyProvider] r, CallsFed 'Brig "get-users-by-ids") => Local UserId -> Handle -> (Handler r) (Maybe Public.UserProfile) diff --git a/services/brig/src/Brig/User/API/Search.hs b/services/brig/src/Brig/User/API/Search.hs index 0706471c70..2713256f82 100644 --- a/services/brig/src/Brig/User/API/Search.hs +++ b/services/brig/src/Brig/User/API/Search.hs @@ -50,6 +50,7 @@ import Polysemy import System.Logger (field, msg) import System.Logger.Class (val, (~~)) import qualified System.Logger.Class as Log +import Wire.API.Federation.API import qualified Wire.API.Federation.API.Brig as FedBrig import qualified Wire.API.Federation.API.Brig as S import qualified Wire.API.Team.Permission as Public @@ -85,7 +86,7 @@ routesInternal = do -- FUTUREWORK: Consider augmenting 'SearchResult' with full user profiles -- for all results. This is tracked in https://wearezeta.atlassian.net/browse/SQCORE-599 search :: - Members '[GalleyProvider] r => + (Members '[GalleyProvider] r, CallsFed 'Brig "get-users-by-ids", CallsFed 'Brig "search-users") => UserId -> Text -> Maybe Domain -> @@ -98,7 +99,7 @@ search searcherId searchTerm maybeDomain maybeMaxResults = do then searchLocally searcherId searchTerm maybeMaxResults else searchRemotely queryDomain searchTerm -searchRemotely :: Domain -> Text -> (Handler r) (Public.SearchResult Public.Contact) +searchRemotely :: CallsFed 'Brig "search-users" => Domain -> Text -> (Handler r) (Public.SearchResult Public.Contact) searchRemotely domain searchTerm = do lift . Log.info $ msg (val "searchRemotely") @@ -120,7 +121,7 @@ searchRemotely domain searchTerm = do searchLocally :: forall r. 
- Members '[GalleyProvider] r => + (Members '[GalleyProvider] r, CallsFed 'Brig "get-users-by-ids") => UserId -> Text -> Maybe (Range 1 500 Int32) -> diff --git a/services/brig/src/Brig/User/Auth.hs b/services/brig/src/Brig/User/Auth.hs index f1c104e467..39fc1b7462 100644 --- a/services/brig/src/Brig/User/Auth.hs +++ b/services/brig/src/Brig/User/Auth.hs @@ -78,6 +78,7 @@ import Network.Wai.Utilities.Error ((!>>)) import Polysemy import System.Logger (field, msg, val, (~~)) import qualified System.Logger.Class as Log +import Wire.API.Federation.API import Wire.API.Team.Feature import qualified Wire.API.Team.Feature as Public import Wire.API.User @@ -134,7 +135,7 @@ lookupLoginCode phone = login :: forall r. - Members '[GalleyProvider] r => + (Members '[GalleyProvider] r, CallsFed 'Brig "on-user-deleted-connections") => Login -> CookieType -> ExceptT LoginError (AppT r) (Access ZAuth.User) @@ -251,7 +252,8 @@ renewAccess :: MonadMask m, MonadHttp m, HasRequestId m, - MonadUnliftIO m + MonadUnliftIO m, + CallsFed 'Brig "on-user-deleted-connections" ) => List1 (ZAuth.Token u) -> Maybe (ZAuth.Token a) -> @@ -289,7 +291,8 @@ catchSuspendInactiveUser :: MonadHttp m, HasRequestId m, MonadUnliftIO m, - Log.MonadLogger m + Log.MonadLogger m, + CallsFed 'Brig "on-user-deleted-connections" ) => UserId -> e -> @@ -321,7 +324,8 @@ newAccess :: MonadMask m, MonadHttp m, HasRequestId m, - MonadUnliftIO m + MonadUnliftIO m, + CallsFed 'Brig "on-user-deleted-connections" ) => UserId -> Maybe ClientId -> @@ -442,7 +446,8 @@ ssoLogin :: MonadMask m, MonadHttp m, HasRequestId m, - MonadUnliftIO m + MonadUnliftIO m, + CallsFed 'Brig "on-user-deleted-connections" ) => SsoLogin -> CookieType -> @@ -463,7 +468,7 @@ ssoLogin (SsoLogin uid label) typ = do -- | Log in as a LegalHold service, getting LegalHoldUser/Access Tokens. 
legalHoldLogin :: - Members '[GalleyProvider] r => + (Members '[GalleyProvider] r, CallsFed 'Brig "on-user-deleted-connections") => LegalHoldLogin -> CookieType -> ExceptT LegalHoldLoginError (AppT r) (Access ZAuth.LegalHoldUser) diff --git a/services/brig/src/Brig/User/Search/TeamUserSearch.hs b/services/brig/src/Brig/User/Search/TeamUserSearch.hs index dea5b4dd37..3731f02ab6 100644 --- a/services/brig/src/Brig/User/Search/TeamUserSearch.hs +++ b/services/brig/src/Brig/User/Search/TeamUserSearch.hs @@ -102,12 +102,24 @@ teamUserSearchQuery tid mbSearchText _mRoleFilter mSortBy mSortOrder = mbQStr ) teamFilter - ( maybe + -- in combination with pagination a non-unique search specification can lead to missing results + -- therefore we use the unique `_doc` value as a tie breaker + -- - see https://www.elastic.co/guide/en/elasticsearch/reference/6.8/search-request-sort.html for details on `_doc` + -- - see https://www.elastic.co/guide/en/elasticsearch/reference/6.8/search-request-search-after.html for details on pagination and tie breaker + -- in the latter article it "is advised to duplicate (client side or [...]) the content of the _id field + -- in another field that has doc value enabled and to use this new field as the tiebreaker for the sort" + -- so alternatively we could use the user ID as a tie breaker, but this would require a change in the index mapping + (sorting ++ sortingTieBreaker) + where + sorting :: [ES.DefaultSort] + sorting = + maybe [defaultSort SortByCreatedAt SortOrderDesc | isNothing mbQStr] (\tuSortBy -> [defaultSort tuSortBy (fromMaybe SortOrderAsc mSortOrder)]) mSortBy - ) - where + sortingTieBreaker :: [ES.DefaultSort] + sortingTieBreaker = [ES.DefaultSort (ES.FieldName "_doc") ES.Ascending Nothing Nothing Nothing Nothing] + mbQStr :: Maybe Text mbQStr = case mbSearchText of diff --git a/services/brig/src/Brig/Version.hs b/services/brig/src/Brig/Version.hs index 73f7f1fd74..acf7603d76 100644 --- a/services/brig/src/Brig/Version.hs +++ 
b/services/brig/src/Brig/Version.hs @@ -21,6 +21,7 @@ import Brig.API.Handler import Brig.App import Brig.Options import Control.Lens +import qualified Data.Set as Set import Imports import Servant (ServerT) import Wire.API.Routes.Named @@ -31,13 +32,16 @@ versionAPI = Named $ do fed <- view federator dom <- viewFederationDomain dev <- view (settings . enableDevelopmentVersions . to (fromMaybe False)) - let supported - | dev = supportedVersions - | otherwise = supportedVersions \\ developmentVersions + disabledVersions <- view (settings . disabledAPIVersions . traverse) + let allVersions = Set.difference (Set.fromList supportedVersions) disabledVersions + devVersions = Set.difference (Set.fromList developmentVersions) disabledVersions + supported + | dev = allVersions + | otherwise = Set.difference allVersions devVersions pure $ VersionInfo - { vinfoSupported = supported, - vinfoDevelopment = developmentVersions, + { vinfoSupported = toList supported, + vinfoDevelopment = toList devVersions, vinfoFederation = isJust fed, vinfoDomain = dom } diff --git a/services/brig/test/integration/API/OAuth.hs b/services/brig/test/integration/API/OAuth.hs index 08f2a46f17..0009ef7d66 100644 --- a/services/brig/test/integration/API/OAuth.hs +++ b/services/brig/test/integration/API/OAuth.hs @@ -73,7 +73,7 @@ tests m b n o = do test m "create token" $ testCreateAccessTokenAccessDeniedWhenDisabled o b ], testGroup "accessing a resource" $ - [ test m "success (internal," $ testAccessResourceSuccessInternal b, + [ test m "success (internal)" $ testAccessResourceSuccessInternal b, test m "success (nginz)" $ testAccessResourceSuccessNginz b n, test m "insufficient scope" $ testAccessResourceInsufficientScope b, test m "expired token" $ testAccessResourceExpiredToken o b, @@ -150,8 +150,8 @@ testCreateAccessTokenSuccess opts brig = do const 404 === statusCode const (Just "not-found") === fmap Error.label . 
responseJsonMaybe k <- liftIO $ readJwk (fromMaybe "path to jwk not set" (Opt.setOAuthJwkKeyPair $ Opt.optSettings opts)) <&> fromMaybe (error "invalid key") - verifiedOrError <- liftIO $ verify k (unOAuthAccessToken $ oatAccessToken accessToken) - verifiedOrErrorWithWrongKey <- liftIO $ verify wrongKey (unOAuthAccessToken $ oatAccessToken accessToken) + verifiedOrError <- liftIO $ verify k (unOAuthToken $ oatAccessToken accessToken) + verifiedOrErrorWithWrongKey <- liftIO $ verify wrongKey (unOAuthToken $ oatAccessToken accessToken) let expectedDomain = domainText $ Opt.setFederationDomain $ Opt.optSettings opts liftIO $ do isRight verifiedOrError @?= True @@ -343,9 +343,9 @@ testAccessResourceInvalidSignature opts brig = do let accessTokenRequest = OAuthAccessTokenRequest OAuthGrantTypeAuthorizationCode cid secret code redirectUrl accessToken <- createOAuthAccessToken brig accessTokenRequest key <- liftIO $ readJwk (fromMaybe "path to jwk not set" (Opt.setOAuthJwkKeyPair $ Opt.optSettings opts)) <&> fromMaybe (error "invalid key") - claimSet <- fromRight (error "token invalid") <$> liftIO (verify key (unOAuthAccessToken $ oatAccessToken accessToken)) + claimSet <- fromRight (error "token invalid") <$> liftIO (verify key (unOAuthToken $ oatAccessToken accessToken)) tokenSignedWithWrongKey <- signJwtToken wrongKey claimSet - get (brig . paths ["self"] . zOAuthHeader (OAuthAccessToken tokenSignedWithWrongKey)) !!! do + get (brig . paths ["self"] . zOAuthHeader (OAuthToken tokenSignedWithWrongKey)) !!! do const 403 === statusCode const "Access denied" === statusMessage const (Just "Invalid token: JWSError JWSInvalidSignature") === responseBody @@ -410,7 +410,7 @@ generateOAuthClientAndAuthCode brig uid scope url = do getQueryParamValue :: ByteString -> RedirectUrl -> Maybe ByteString getQueryParamValue key uri = snd <$> find ((== key) . 
fst) (getQueryParams uri) -signJwtToken :: JWK -> OAuthClaimSet -> Http SignedJWT +signJwtToken :: JWK -> OAuthClaimsSet -> Http SignedJWT signJwtToken key claims = do jwtOrError <- liftIO $ doSignClaims either (const $ error "jwt error") pure jwtOrError diff --git a/services/brig/test/integration/API/TeamUserSearch.hs index ef4ab14088..a57301bb0f 100644 --- a/services/brig/test/integration/API/TeamUserSearch.hs +++ b/services/brig/test/integration/API/TeamUserSearch.hs @@ -111,7 +111,7 @@ testSort brig = do let sortByProperty' :: (TestConstraints m, Ord a) => TeamUserSearchSortBy -> (User -> a) -> TeamUserSearchSortOrder -> m () sortByProperty' = sortByProperty tid users ownerId for_ [SortOrderAsc, SortOrderDesc] $ \sortOrder -> do - -- FUTUREWORK: Test SortByRole when role is avaible in index + -- FUTUREWORK: Test SortByRole when role is available in index sortByProperty' SortByEmail userEmail sortOrder sortByProperty' SortByName userDisplayName sortOrder sortByProperty' SortByHandle (fmap fromHandle .
userHandle) sortOrder @@ -144,12 +144,17 @@ testEmptyQuerySortedWithPagination :: TestConstraints m => Brig -> m () testEmptyQuerySortedWithPagination brig = do (tid, userId -> ownerId, _) <- createPopulatedBindingTeamWithNamesAndHandles brig 20 refreshIndex brig - searchResultFirst10 <- executeTeamUserSearchWithMaybeState brig tid ownerId (Just "") Nothing Nothing Nothing (Just $ unsafeRange 10) Nothing - searchResultLast11 <- executeTeamUserSearchWithMaybeState brig tid ownerId (Just "") Nothing Nothing Nothing Nothing (searchPagingState searchResultFirst10) + let teamUserSearch mPs = executeTeamUserSearchWithMaybeState brig tid ownerId (Just "") Nothing (Just SortByRole) (Just SortOrderAsc) (Just $ unsafeRange 10) mPs + searchResultFirst10 <- teamUserSearch Nothing + searchResultNext10 <- teamUserSearch (searchPagingState searchResultFirst10) + searchResultLast1 <- teamUserSearch (searchPagingState searchResultNext10) liftIO $ do searchReturned searchResultFirst10 @?= 10 searchFound searchResultFirst10 @?= 21 searchHasMore searchResultFirst10 @?= Just True - searchReturned searchResultLast11 @?= 11 - searchFound searchResultLast11 @?= 21 - searchHasMore searchResultLast11 @?= Just False + searchReturned searchResultNext10 @?= 10 + searchFound searchResultNext10 @?= 21 + searchHasMore searchResultNext10 @?= Just True + searchReturned searchResultLast1 @?= 1 + searchFound searchResultLast1 @?= 21 + searchHasMore searchResultLast1 @?= Just False diff --git a/services/brig/test/integration/API/User/Auth.hs b/services/brig/test/integration/API/User/Auth.hs index 49ec997223..834bd46a69 100644 --- a/services/brig/test/integration/API/User/Auth.hs +++ b/services/brig/test/integration/API/User/Auth.hs @@ -36,6 +36,7 @@ import Brig.ZAuth (ZAuth, runZAuth) import qualified Brig.ZAuth as ZAuth import qualified Cassandra as DB import Control.Lens (set, (^.)) +import Control.Monad.Catch (MonadCatch) import Control.Retry import Data.Aeson as Aeson hiding (json) import 
qualified Data.ByteString as BS @@ -134,7 +135,8 @@ tests conf m z db b g n = test m "test-login-verify6-digit-wrong-code-fails" $ testLoginVerify6DigitWrongCodeFails b g, test m "test-login-verify6-digit-missing-code-fails" $ testLoginVerify6DigitMissingCodeFails b g, test m "test-login-verify6-digit-expired-code-fails" $ testLoginVerify6DigitExpiredCodeFails b g db, - test m "test-login-verify6-digit-resend-code-success-and-rate-limiting" $ testLoginVerify6DigitResendCodeSuccessAndRateLimiting b g conf db + test m "test-login-verify6-digit-resend-code-success-and-rate-limiting" $ testLoginVerify6DigitResendCodeSuccessAndRateLimiting b g conf db, + test m "test-login-verify6-digit-limit-retries" $ testLoginVerify6DigitLimitRetries b g conf db ] ], testGroup @@ -420,10 +422,6 @@ testLoginVerify6DigitResendCodeSuccessAndRateLimiting brig galley _opts db = do (u, tid) <- createUserWithTeam' brig let Just email = userEmail u let checkLoginSucceeds body = login brig body PersistentCookie !!! const 200 === statusCode - let checkLoginFails body = - login brig body PersistentCookie !!! do - const 403 === statusCode - const (Just "code-authentication-failed") === errorLabel let getCodeFromDb = do key <- Code.mkKey (Code.ForEmail email) Just c <- Util.lookupCode db key Code.AccountLogin @@ -441,7 +439,7 @@ testLoginVerify6DigitResendCodeSuccessAndRateLimiting brig galley _opts db = do void $ retryWhileN 10 ((==) 429 . 
statusCode) $ Util.generateVerificationCode' brig (Public.SendVerificationCode Public.Login email) mostRecentCode <- getCodeFromDb - checkLoginFails $ + checkLoginFails brig $ PasswordLogin $ PasswordLoginData (LoginByEmail email) @@ -456,6 +454,34 @@ testLoginVerify6DigitResendCodeSuccessAndRateLimiting brig galley _opts db = do (Just defCookieLabel) (Just $ Code.codeValue mostRecentCode) +testLoginVerify6DigitLimitRetries :: Brig -> Galley -> Opts.Opts -> DB.ClientState -> Http () +testLoginVerify6DigitLimitRetries brig galley _opts db = do + (u, tid) <- createUserWithTeam' brig + let Just email = userEmail u + Util.setTeamFeatureLockStatus @Public.SndFactorPasswordChallengeConfig galley tid Public.LockStatusUnlocked + Util.setTeamSndFactorPasswordChallenge galley tid Public.FeatureStatusEnabled + Util.generateVerificationCode brig (Public.SendVerificationCode Public.Login email) + key <- Code.mkKey (Code.ForEmail email) + Just correctCode <- Util.lookupCode db key Code.AccountLogin + let wrongCode = Code.Value $ unsafeRange (fromRight undefined (validate "123456")) + -- login with wrong code should fail 3 times + forM_ [1 .. 3] $ \(_ :: Int) -> + checkLoginFails brig $ + PasswordLogin $ + PasswordLoginData + (LoginByEmail email) + defPassword + (Just defCookieLabel) + (Just wrongCode) + -- after 3 failed attempts, login with correct code should fail as well + checkLoginFails brig $ + PasswordLogin $ + PasswordLoginData + (LoginByEmail email) + defPassword + (Just defCookieLabel) + (Just (Code.codeValue correctCode)) + -- @SF.Channel @TSFI.RESTfulAPI @S2 -- -- Test that login fails with wrong second factor email verification code @@ -463,16 +489,11 @@ testLoginVerify6DigitWrongCodeFails :: Brig -> Galley -> Http () testLoginVerify6DigitWrongCodeFails brig galley = do (u, tid) <- createUserWithTeam' brig let Just email = userEmail u - let checkLoginFails body = - login brig body PersistentCookie !!! 
do - const 403 === statusCode - const (Just "code-authentication-failed") === errorLabel - Util.setTeamFeatureLockStatus @Public.SndFactorPasswordChallengeConfig galley tid Public.LockStatusUnlocked Util.setTeamSndFactorPasswordChallenge galley tid Public.FeatureStatusEnabled Util.generateVerificationCode brig (Public.SendVerificationCode Public.Login email) let wrongCode = Code.Value $ unsafeRange (fromRight undefined (validate "123456")) - checkLoginFails $ + checkLoginFails brig $ PasswordLogin $ PasswordLoginData (LoginByEmail email) @@ -489,21 +510,19 @@ testLoginVerify6DigitMissingCodeFails :: Brig -> Galley -> Http () testLoginVerify6DigitMissingCodeFails brig galley = do (u, tid) <- createUserWithTeam' brig let Just email = userEmail u - let checkLoginFails body = - login brig body PersistentCookie !!! do - const 403 === statusCode - const (Just "code-authentication-required") === errorLabel - Util.setTeamFeatureLockStatus @Public.SndFactorPasswordChallengeConfig galley tid Public.LockStatusUnlocked Util.setTeamSndFactorPasswordChallenge galley tid Public.FeatureStatusEnabled Util.generateVerificationCode brig (Public.SendVerificationCode Public.Login email) - checkLoginFails $ - PasswordLogin $ - PasswordLoginData - (LoginByEmail email) - defPassword - (Just defCookieLabel) - Nothing + let body = + PasswordLogin $ + PasswordLoginData + (LoginByEmail email) + defPassword + (Just defCookieLabel) + Nothing + login brig body PersistentCookie !!! do + const 403 === statusCode + const (Just "code-authentication-required") === errorLabel -- @END @@ -514,11 +533,6 @@ testLoginVerify6DigitExpiredCodeFails :: Brig -> Galley -> DB.ClientState -> Htt testLoginVerify6DigitExpiredCodeFails brig galley db = do (u, tid) <- createUserWithTeam' brig let Just email = userEmail u - let checkLoginFails body = - login brig body PersistentCookie !!! 
do - const 403 === statusCode - const (Just "code-authentication-failed") === errorLabel - Util.setTeamFeatureLockStatus @Public.SndFactorPasswordChallengeConfig galley tid Public.LockStatusUnlocked Util.setTeamSndFactorPasswordChallenge galley tid Public.FeatureStatusEnabled Util.generateVerificationCode brig (Public.SendVerificationCode Public.Login email) @@ -526,7 +540,7 @@ testLoginVerify6DigitExpiredCodeFails brig galley db = do Just vcode <- Util.lookupCode db key Code.AccountLogin -- wait > 5 sec for the code to expire (assumption: setVerificationTimeout in brig.integration.yaml is set to <= 5 sec) threadDelay $ (5 * 1000000) + 600000 - checkLoginFails $ + checkLoginFails brig $ PasswordLogin $ PasswordLoginData (LoginByEmail email) @@ -1465,3 +1479,9 @@ remJson p l ids = wait :: MonadIO m => m () wait = liftIO $ threadDelay 1000000 + +checkLoginFails :: (MonadHttp m, MonadIO m, MonadCatch m) => Brig -> Login -> m () +checkLoginFails brig body = do + login brig body PersistentCookie !!! 
do + const 403 === statusCode + const (Just "code-authentication-failed") === errorLabel diff --git a/services/brig/test/integration/API/Version.hs b/services/brig/test/integration/API/Version.hs index 4995d8754e..bd4945e879 100644 --- a/services/brig/test/integration/API/Version.hs +++ b/services/brig/test/integration/API/Version.hs @@ -20,13 +20,17 @@ module API.Version (tests) where import Bilge import Bilge.Assert import Brig.Options +import qualified Brig.Options as Opt import Control.Lens ((?~)) +import Control.Monad.Catch (MonadCatch) +import qualified Data.Set as Set import Imports import qualified Network.Wai.Utilities.Error as Wai import Test.Tasty import Test.Tasty.HUnit import Util import Wire.API.Routes.Version +import Wire.API.User tests :: Manager -> Opts -> Brig -> TestTree tests p opts brig = @@ -36,7 +40,10 @@ tests p opts brig = test p "GET /v1/api-version" $ testVersionV1 brig, test p "GET /api-version (with dev)" $ testDevVersion opts brig, test p "GET /v500/api-version" $ testUnsupportedVersion brig, - test p "GET /api-version (federation info)" $ testFederationDomain opts brig + test p "GET /api-version (federation info)" $ testFederationDomain opts brig, + test p "Disabled version is unsupported" $ testDisabledVersionIsUnsupported opts brig, + test p "Disabled version is not advertised" $ testVersionDisabledSupportedVersion opts brig, + test p "Disabled dev version is not advertised" $ testVersionDisabledDevelopmentVersion opts brig ] testVersion :: Brig -> Http () @@ -86,3 +93,68 @@ testFederationDomain opts brig = do liftIO $ do vinfoFederation vinfo @?= True vinfoDomain vinfo @?= domain + +testDisabledVersionIsUnsupported :: Opts -> Brig -> Http () +testDisabledVersionIsUnsupported opts brig = do + uid <- userId <$> randomUser brig + + get (apiVersion "v2" . brig . path "/self" . zUser uid) + !!! const 200 === statusCode + + withSettingsOverrides + ( opts + & Opt.optionSettings + . 
Opt.disabledAPIVersions + ?~ Set.fromList [V2] + ) + $ do + err <- + responseJsonError + =<< get (apiVersion "v2" . brig . path "/self" . zUser uid) + liftIO $ Wai.label err @?= "unsupported-version" + +testVersionDisabledSupportedVersion :: Opts -> Brig -> Http () +testVersionDisabledSupportedVersion opts brig = do + vinfo <- getVersionInfo brig + liftIO $ filter (== V2) (vinfoSupported vinfo) @?= [V2] + disabledVersionIsNotAdvertised opts brig V2 + +testVersionDisabledDevelopmentVersion :: Opts -> Brig -> Http () +testVersionDisabledDevelopmentVersion opts brig = do + vinfo <- getVersionInfo brig + for_ (listToMaybe (vinfoDevelopment vinfo)) $ \devVersion -> do + liftIO $ filter (== devVersion) (vinfoDevelopment vinfo) @?= [devVersion] + disabledVersionIsNotAdvertised opts brig devVersion + +disabledVersionIsNotAdvertised :: Opts -> Brig -> Version -> Http () +disabledVersionIsNotAdvertised opts brig version = + withSettingsOverrides + ( opts + & Opt.optionSettings + . Opt.disabledAPIVersions + ?~ Set.fromList [version] + ) + $ do + vinfo <- getVersionInfo brig + liftIO $ filter (== version) (vinfoSupported vinfo) @?= [] + liftIO $ filter (== version) (vinfoDevelopment vinfo) @?= [] + +getVersionInfo :: + (MonadIO m, MonadCatch m, MonadFail m, MonadHttp m, HasCallStack) => + Brig -> + m VersionInfo +getVersionInfo brig = + responseJsonError + =<< get (unversioned . brig . path "/api-version") FedClient comp -> @@ -1297,7 +1297,7 @@ toServantResponse res = createWaiTestFedClient :: forall (name :: Symbol) comp api.
- ( HasFedEndpoint comp api name, + ( HasUnsafeFedEndpoint comp api name, Servant.HasClient WaiTestFedClient api ) => Servant.Client WaiTestFedClient api diff --git a/services/cannon/src/Cannon/Options.hs b/services/cannon/src/Cannon/Options.hs index e2117ee8c3..bce5cba50c 100644 --- a/services/cannon/src/Cannon/Options.hs +++ b/services/cannon/src/Cannon/Options.hs @@ -34,6 +34,7 @@ module Cannon.Options gracePeriodSeconds, millisecondsBetweenBatches, minBatchSize, + disabledAPIVersions, DrainOpts, ) where @@ -42,6 +43,7 @@ import Control.Lens (makeFields) import Data.Aeson.APIFieldJsonTH import Imports import System.Logger.Extended (Level, LogFormat) +import Wire.API.Routes.Version data Cannon = Cannon { _cannonHost :: !String, @@ -88,7 +90,8 @@ data Opts = Opts _optsLogLevel :: !Level, _optsLogNetStrings :: !(Maybe (Last Bool)), _optsLogFormat :: !(Maybe (Last LogFormat)), - _optsDrainOpts :: DrainOpts + _optsDrainOpts :: DrainOpts, + _optsDisabledAPIVersions :: Maybe (Set Version) } deriving (Eq, Show, Generic) diff --git a/services/cannon/src/Cannon/Run.hs b/services/cannon/src/Cannon/Run.hs index c5a30103ab..8fb26e7e4f 100644 --- a/services/cannon/src/Cannon/Run.hs +++ b/services/cannon/src/Cannon/Run.hs @@ -77,7 +77,7 @@ run o = do s <- newSettings $ Server (o ^. cannon . host) (o ^. cannon . port) (applog e) m (Just idleTimeout) let middleware :: Wai.Middleware middleware = - versionMiddleware + versionMiddleware (fold (o ^. disabledAPIVersions)) . servantPrometheusMiddleware (Proxy @CombinedAPI) . Gzip.gzip Gzip.def . 
catchErrors g [Right m] diff --git a/services/cargohold/src/CargoHold/API/Public.hs b/services/cargohold/src/CargoHold/API/Public.hs index bbbbb5091d..67821235c9 100644 --- a/services/cargohold/src/CargoHold/API/Public.hs +++ b/services/cargohold/src/CargoHold/API/Public.hs @@ -35,6 +35,7 @@ import Servant.API import Servant.Server hiding (Handler) import URI.ByteString import Wire.API.Asset +import Wire.API.Federation.API import Wire.API.Routes.AssetBody import Wire.API.Routes.Internal.Cargohold import Wire.API.Routes.Public.Cargohold @@ -57,12 +58,14 @@ servantSitemap = providerAPI :: forall tag. tag ~ 'ProviderPrincipalTag => ServerT (BaseAPIv3 tag) Handler providerAPI = uploadAssetV3 @tag :<|> downloadAssetV3 @tag :<|> deleteAssetV3 @tag legacyAPI = legacyDownloadPlain :<|> legacyDownloadPlain :<|> legacyDownloadOtr - qualifiedAPI = downloadAssetV4 :<|> deleteAssetV4 + qualifiedAPI :: ServerT QualifiedAPI Handler + qualifiedAPI = callsFed downloadAssetV4 :<|> deleteAssetV4 + mainAPI :: ServerT MainAPI Handler mainAPI = renewTokenV3 :<|> deleteTokenV3 :<|> uploadAssetV3 @'UserPrincipalTag - :<|> downloadAssetV4 + :<|> callsFed downloadAssetV4 :<|> deleteAssetV4 internalSitemap :: ServerT InternalAPI Handler @@ -147,6 +150,7 @@ downloadAssetV3 usr key tok1 tok2 = do AssetLocation <$$> V3.download (mkPrincipal usr) key (tok1 <|> tok2) downloadAssetV4 :: + (CallsFed 'Cargohold "get-asset", CallsFed 'Cargohold "stream-asset") => Local UserId -> Qualified AssetKey -> Maybe AssetToken -> diff --git a/services/cargohold/src/CargoHold/AWS.hs b/services/cargohold/src/CargoHold/AWS.hs index f7d0fd2df4..e00063457a 100644 --- a/services/cargohold/src/CargoHold/AWS.hs +++ b/services/cargohold/src/CargoHold/AWS.hs @@ -70,9 +70,10 @@ data Env = Env makeLenses ''Env -- | Override the endpoint in the '_amazonkaEnv' with '_amazonkaDownloadEndpoint'. 
+-- TODO: Choose the correct s3 addressing style amazonkaEnvWithDownloadEndpoint :: Env -> AWS.Env amazonkaEnvWithDownloadEndpoint e = - AWS.override (setAWSEndpoint (e ^. amazonkaDownloadEndpoint)) (e ^. amazonkaEnv) + AWS.overrideService (setAWSEndpoint (e ^. amazonkaDownloadEndpoint)) (e ^. amazonkaEnv) setAWSEndpoint :: AWSEndpoint -> AWS.Service -> AWS.Service setAWSEndpoint e = AWS.setEndpoint (_awsSecure e) (_awsHost e) (_awsPort e) @@ -100,6 +101,7 @@ mkEnv :: Logger -> -- | S3 endpoint AWSEndpoint -> + AWS.S3AddressingStyle -> -- | Endpoint for downloading assets (for the external world) AWSEndpoint -> -- | Bucket @@ -107,9 +109,9 @@ mkEnv :: Maybe CloudFrontOpts -> Manager -> IO Env -mkEnv lgr s3End s3Download bucket cfOpts mgr = do +mkEnv lgr s3End s3AddrStyle s3Download bucket cfOpts mgr = do let g = Logger.clone (Just "aws.cargohold") lgr - e <- mkAwsEnv g (setAWSEndpoint s3End S3.defaultService) + e <- mkAwsEnv g (setAWSEndpoint s3End (S3.defaultService & AWS.service_s3AddressingStyle .~ s3AddrStyle)) cf <- mkCfEnv cfOpts pure (Env g bucket e s3Download cf) where @@ -118,11 +120,11 @@ mkEnv lgr s3End s3Download bucket cfOpts mgr = do mkAwsEnv g s3 = do baseEnv <- AWS.newEnv AWS.discover - <&> AWS.configure s3 + <&> AWS.configureService s3 pure $ baseEnv - { AWS.envLogger = awsLogger g, - AWS.envManager = mgr + { AWS.logger = awsLogger g, + AWS.manager = mgr } awsLogger g l = Logger.log g (mapLevel l) . Log.msg . toLazyByteString mapLevel AWS.Info = Logger.Info @@ -222,7 +224,7 @@ canRetry :: MonadIO m => Either AWS.Error a -> m Bool canRetry (Right _) = pure False canRetry (Left e) = case e of AWS.TransportError (HttpExceptionRequest _ ResponseTimeout) -> pure True - AWS.ServiceError se | se ^. AWS.serviceCode == AWS.ErrorCode "RequestThrottled" -> pure True + AWS.ServiceError se | se ^. 
AWS.serviceError_code == AWS.ErrorCode "RequestThrottled" -> pure True _ -> pure False retry5x :: (Monad m) => RetryPolicyM m diff --git a/services/cargohold/src/CargoHold/App.hs b/services/cargohold/src/CargoHold/App.hs index 83226c45cf..b123ed739f 100644 --- a/services/cargohold/src/CargoHold/App.hs +++ b/services/cargohold/src/CargoHold/App.hs @@ -46,6 +46,7 @@ module CargoHold.App ) where +import Amazonka (S3AddressingStyle (S3AddressingStylePath)) import Bilge (Manager, MonadHttp, RequestId (..), newManager, withResponse) import qualified Bilge import Bilge.RPC (HasRequestId (..)) @@ -97,9 +98,10 @@ newEnv o = do pure $ Env ama met lgr mgr def o loc initAws :: AWSOpts -> Logger -> Manager -> IO AWS.Env -initAws o l = AWS.mkEnv l (o ^. awsS3Endpoint) downloadEndpoint (o ^. awsS3Bucket) (o ^. awsCloudFront) +initAws o l = AWS.mkEnv l (o ^. awsS3Endpoint) addrStyle downloadEndpoint (o ^. awsS3Bucket) (o ^. awsCloudFront) where downloadEndpoint = fromMaybe (o ^. awsS3Endpoint) (o ^. awsS3DownloadEndpoint) + addrStyle = maybe S3AddressingStylePath unwrapS3AddressingStyle (o ^. awsS3AddressingStyle) initHttpManager :: Maybe S3Compatibility -> IO Manager initHttpManager s3Compat = diff --git a/services/cargohold/src/CargoHold/Federation.hs b/services/cargohold/src/CargoHold/Federation.hs index 6949929ea8..94a8bebc7e 100644 --- a/services/cargohold/src/CargoHold/Federation.hs +++ b/services/cargohold/src/CargoHold/Federation.hs @@ -48,6 +48,7 @@ import Wire.API.Federation.Error -- is streamed back through our outward federator, as well as the remote one. 
downloadRemoteAsset :: + (CallsFed 'Cargohold "get-asset", CallsFed 'Cargohold "stream-asset") => Local UserId -> Remote AssetKey -> Maybe AssetToken -> diff --git a/services/cargohold/src/CargoHold/Options.hs b/services/cargohold/src/CargoHold/Options.hs index 3f709c1454..c6c7076e99 100644 --- a/services/cargohold/src/CargoHold/Options.hs +++ b/services/cargohold/src/CargoHold/Options.hs @@ -20,6 +20,7 @@ module CargoHold.Options where +import Amazonka (S3AddressingStyle (..)) import qualified CargoHold.CloudFront as CF import Control.Lens hiding (Level) import Data.Aeson (FromJSON (..), withText) @@ -29,6 +30,7 @@ import Imports import System.Logger.Extended (Level, LogFormat) import Util.Options import Util.Options.Common +import Wire.API.Routes.Version -- | AWS CloudFront settings. data CloudFrontOpts = CloudFrontOpts @@ -45,8 +47,48 @@ deriveFromJSON toOptionFieldName ''CloudFrontOpts makeLenses ''CloudFrontOpts +newtype OptS3AddressingStyle = OptS3AddressingStyle + { unwrapS3AddressingStyle :: S3AddressingStyle + } + deriving (Show) + +instance FromJSON OptS3AddressingStyle where + parseJSON = + withText "S3AddressingStyle" $ + fmap OptS3AddressingStyle . \case + "auto" -> pure S3AddressingStyleAuto + "path" -> pure S3AddressingStylePath + "virtual" -> pure S3AddressingStyleVirtual + other -> fail $ "invalid S3AddressingStyle: " <> show other + data AWSOpts = AWSOpts { _awsS3Endpoint :: !AWSEndpoint, + -- | S3 can either be addressed in path style, i.e. + -- https://<s3-host>/<bucket-name>/<object-key>, or vhost style, i.e. + -- https://<bucket-name>.<s3-host>/<object-key>. AWS's S3 offering has + -- deprecated path style addressing for S3 and completely disabled it for + -- buckets created after 30 Sep 2020: + -- https://aws.amazon.com/blogs/aws/amazon-s3-path-deprecation-plan-the-rest-of-the-story/ + -- + -- However, other object storage providers (especially self-deployed ones like + -- MinIO) may not support vhost style addressing yet (or ever?). Users of + -- such buckets should configure this option to "path".
+ -- + -- Installations using the S3 service provided by AWS should use "auto"; this + -- option ensures that vhost style is only used when it is possible to + -- construct a valid hostname from the bucket name and the bucket name + -- doesn't contain a '.'. Having a '.' in the bucket name causes TLS + -- validation to fail, hence it is not used by default. + -- + -- Using "virtual" as an option is only useful in situations where vhost + -- style addressing must be used even if it is not possible to construct a + -- valid hostname from the bucket name, or the S3 service provider can ensure + -- that the correct certificate is issued for buckets which contain one or more '.'s + -- in the name. + -- + -- When this option is unspecified, we default to path style addressing to + -- ensure a smooth transition for older deployments. + _awsS3AddressingStyle :: !(Maybe OptS3AddressingStyle), -- | S3 endpoint for generating download links. Useful if Cargohold is configured to use -- an S3 replacement running inside the internal network (in which case internally we -- would use one hostname for S3, and when generating an asset link for a client app, we @@ -91,7 +133,8 @@ data Settings = Settings -- Remember to keep it the same in Galley and in Brig. -- This is referred to as the 'backend domain' in the public documentation; See -- https://docs.wire.com/how-to/install/configure-federation.html#choose-a-backend-domain-name - _setFederationDomain :: !Domain + _setFederationDomain :: !Domain, + _setDisabledAPIVersions :: !(Maybe (Set Version)) } deriving (Show, Generic) diff --git a/services/cargohold/src/CargoHold/Run.hs b/services/cargohold/src/CargoHold/Run.hs index 09677b898e..ae393ced1c 100644 --- a/services/cargohold/src/CargoHold/Run.hs +++ b/services/cargohold/src/CargoHold/Run.hs @@ -78,7 +78,7 @@ mkApp o = Codensity $ \k -> where middleware :: Env -> Wai.Middleware middleware e = - versionMiddleware + versionMiddleware (fold (o ^. optSettings . setDisabledAPIVersions)) .
servantPrometheusMiddleware (Proxy @CombinedAPI) . GZip.gzip GZip.def . catchErrors (e ^. appLogger) [Right $ e ^. metrics] diff --git a/services/cargohold/src/CargoHold/S3.hs b/services/cargohold/src/CargoHold/S3.hs index 404abb79e8..29137efe3a 100644 --- a/services/cargohold/src/CargoHold/S3.hs +++ b/services/cargohold/src/CargoHold/S3.hs @@ -36,7 +36,7 @@ module CargoHold.S3 ) where -import Amazonka hiding (Error, ToByteString, (.=)) +import Amazonka hiding (Error) import Amazonka.S3 import Amazonka.S3.Lens import CargoHold.API.Error @@ -145,7 +145,7 @@ downloadV3 :: ExceptT Error App (ConduitM () ByteString (ResourceT IO) ()) downloadV3 (s3Key . mkKey -> key) = do env <- view aws - pure . flattenResourceT $ _streamBody . view getObjectResponse_body <$> AWS.execStream env req + pure . flattenResourceT $ view (getObjectResponse_body . _ResponseBody) <$> AWS.execStream env req where req :: Text -> GetObject req b = diff --git a/services/cargohold/test/integration/API/Federation.hs b/services/cargohold/test/integration/API/Federation.hs index 686adc1aa8..6e25283ea0 100644 --- a/services/cargohold/test/integration/API/Federation.hs +++ b/services/cargohold/test/integration/API/Federation.hs @@ -83,7 +83,7 @@ testGetAssetAvailable isPublicAsset = do } ok <- withFederationClient $ - gaAvailable <$> runFederationClient (fedClientIn @'Cargohold @"get-asset" ga) + gaAvailable <$> runFederationClient (unsafeFedClientIn @'Cargohold @"get-asset" ga) -- check that asset is available liftIO $ ok @?= True @@ -103,7 +103,7 @@ testGetAssetNotAvailable = do } ok <- withFederationClient $ - gaAvailable <$> runFederationClient (fedClientIn @'Cargohold @"get-asset" ga) + gaAvailable <$> runFederationClient (unsafeFedClientIn @'Cargohold @"get-asset" ga) -- check that asset is not available liftIO $ ok @?= False @@ -130,7 +130,7 @@ testGetAssetWrongToken = do } ok <- withFederationClient $ - gaAvailable <$> runFederationClient (fedClientIn @'Cargohold @"get-asset" ga) + gaAvailable 
<$> runFederationClient (unsafeFedClientIn @'Cargohold @"get-asset" ga) -- check that asset is not available liftIO $ ok @?= False @@ -161,7 +161,7 @@ testLargeAsset = do gaKey = qUnqualified key } chunks <- withFederationClient $ do - source <- getAssetSource <$> runFederationClient (fedClientIn @'Cargohold @"stream-asset" ga) + source <- getAssetSource <$> runFederationClient (unsafeFedClientIn @'Cargohold @"stream-asset" ga) liftIO . runResourceT $ connect source sinkList liftIO $ do let minNumChunks = 8 @@ -193,7 +193,7 @@ testStreamAsset = do gaKey = qUnqualified key } respBody <- withFederationClient $ do - source <- getAssetSource <$> runFederationClient (fedClientIn @'Cargohold @"stream-asset" ga) + source <- getAssetSource <$> runFederationClient (unsafeFedClientIn @'Cargohold @"stream-asset" ga) liftIO . runResourceT $ connect source sinkLazy liftIO $ respBody @?= "Hello World" @@ -211,7 +211,7 @@ testStreamAssetNotAvailable = do gaKey = key } err <- withFederationError $ do - runFederationClient (fedClientIn @'Cargohold @"stream-asset" ga) + runFederationClient (unsafeFedClientIn @'Cargohold @"stream-asset" ga) liftIO $ do Wai.code err @?= HTTP.notFound404 Wai.label err @?= "not-found" @@ -237,7 +237,7 @@ testStreamAssetWrongToken = do gaKey = qUnqualified key } err <- withFederationError $ do - runFederationClient (fedClientIn @'Cargohold @"stream-asset" ga) + runFederationClient (unsafeFedClientIn @'Cargohold @"stream-asset" ga) liftIO $ do Wai.code err @?= HTTP.notFound404 Wai.label err @?= "not-found" diff --git a/services/federator/test/unit/Test/Federator/Client.hs b/services/federator/test/unit/Test/Federator/Client.hs index 61825a7e17..0a99e08f15 100644 --- a/services/federator/test/unit/Test/Federator/Client.hs +++ b/services/federator/test/unit/Test/Federator/Client.hs @@ -14,6 +14,7 @@ -- -- You should have received a copy of the GNU Affero General Public License along -- with this program. If not, see <https://www.gnu.org/licenses/>.
+{-# OPTIONS_GHC -Wno-orphans #-} module Test.Federator.Client (tests) where @@ -50,6 +51,8 @@ import Wire.API.Federation.Component import Wire.API.Federation.Error import Wire.API.User (UserProfile) +instance CallsFed comp name + targetDomain :: Domain targetDomain = Domain "target.example.com" diff --git a/services/galley/default.nix b/services/galley/default.nix index cf8abcd206..6b48dee10b 100644 --- a/services/galley/default.nix +++ b/services/galley/default.nix @@ -348,6 +348,8 @@ mkDerivation { http-types imports lens + polysemy + polysemy-wire-zoo QuickCheck raw-strings-qq safe diff --git a/services/galley/galley.cabal b/services/galley/galley.cabal index 69cb442338..955c3ac159 100644 --- a/services/galley/galley.cabal +++ b/services/galley/galley.cabal @@ -104,7 +104,6 @@ library Galley.Effects.MemberStore Galley.Effects.ProposalStore Galley.Effects.Queue - Galley.Effects.RemoteConversationListStore Galley.Effects.SearchVisibilityStore Galley.Effects.ServiceStore Galley.Effects.SparAccess @@ -831,6 +830,8 @@ test-suite galley-tests , http-types , imports , lens + , polysemy + , polysemy-wire-zoo , QuickCheck , raw-strings-qq >=1.0 , safe >=0.3 diff --git a/services/galley/src/Galley/API/Action.hs b/services/galley/src/Galley/API/Action.hs index eb5f17d2f0..03a344533b 100644 --- a/services/galley/src/Galley/API/Action.hs +++ b/services/galley/src/Galley/API/Action.hs @@ -86,7 +86,7 @@ import Wire.API.Conversation.Role import Wire.API.Error import Wire.API.Error.Galley import Wire.API.Event.Conversation -import Wire.API.Federation.API (Component (Galley), fedClient) +import Wire.API.Federation.API (CallsFed, Component (Galley), fedClient) import Wire.API.Federation.API.Galley import Wire.API.Federation.Error import Wire.API.Team.LegalHold @@ -276,7 +276,11 @@ ensureAllowed tag loc action conv origUser = do -- and also returns the (possible modified) action that was performed performAction :: forall tag r. 
- (HasConversationActionEffects tag r) => + ( HasConversationActionEffects tag r, + CallsFed 'Galley "on-mls-message-sent", + CallsFed 'Galley "on-conversation-updated", + CallsFed 'Galley "on-new-remote-conversation" + ) => Sing tag -> Qualified UserId -> Local Conversation -> @@ -344,7 +348,11 @@ performAction tag origUser lconv action = do pure (bm, act) performConversationJoin :: - (HasConversationActionEffects 'ConversationJoinTag r) => + ( HasConversationActionEffects 'ConversationJoinTag r, + CallsFed 'Galley "on-mls-message-sent", + CallsFed 'Galley "on-conversation-updated", + CallsFed 'Galley "on-new-remote-conversation" + ) => Qualified UserId -> Local Conversation -> ConversationJoin -> @@ -470,7 +478,11 @@ performConversationJoin qusr lconv (ConversationJoin invited role) = do checkLHPolicyConflictsRemote _remotes = pure () performConversationAccessData :: - (HasConversationActionEffects 'ConversationAccessDataTag r) => + ( HasConversationActionEffects 'ConversationAccessDataTag r, + CallsFed 'Galley "on-mls-message-sent", + CallsFed 'Galley "on-conversation-updated", + CallsFed 'Galley "on-new-remote-conversation" + ) => Qualified UserId -> Local Conversation -> ConversationAccessData -> @@ -568,7 +580,10 @@ updateLocalConversation :: ] r, HasConversationActionEffects tag r, - SingI tag + SingI tag, + CallsFed 'Galley "on-mls-message-sent", + CallsFed 'Galley "on-new-remote-conversation", + CallsFed 'Galley "on-conversation-updated" ) => Local ConvId -> Qualified UserId -> @@ -605,7 +620,10 @@ updateLocalConversationUnchecked :: Member FederatorAccess r, Member GundeckAccess r, Member (Input UTCTime) r, - HasConversationActionEffects tag r + HasConversationActionEffects tag r, + CallsFed 'Galley "on-mls-message-sent", + CallsFed 'Galley "on-new-remote-conversation", + CallsFed 'Galley "on-conversation-updated" ) => Local Conversation -> Qualified UserId -> @@ -681,7 +699,10 @@ addMembersToLocalConversation lcnv users role = do notifyConversationAction 
:: forall tag r. - Members '[FederatorAccess, ExternalAccess, GundeckAccess, Input UTCTime] r => + ( Members '[FederatorAccess, ExternalAccess, GundeckAccess, Input UTCTime] r, + CallsFed 'Galley "on-new-remote-conversation", + CallsFed 'Galley "on-conversation-updated" + ) => Sing tag -> Qualified UserId -> Bool -> @@ -797,7 +818,10 @@ kickMember :: Member (Input UTCTime) r, Member (Input Env) r, Member MemberStore r, - Member TinyLog r + Member TinyLog r, + CallsFed 'Galley "on-mls-message-sent", + CallsFed 'Galley "on-conversation-updated", + CallsFed 'Galley "on-new-remote-conversation" ) => Qualified UserId -> Local Conversation -> diff --git a/services/galley/src/Galley/API/Clients.hs b/services/galley/src/Galley/API/Clients.hs index 95aacd8275..d671b33621 100644 --- a/services/galley/src/Galley/API/Clients.hs +++ b/services/galley/src/Galley/API/Clients.hs @@ -104,7 +104,9 @@ rmClientH :: ProposalStore, P.TinyLog ] - r + r, + CallsFed 'Galley "on-client-removed", + CallsFed 'Galley "on-mls-message-sent" ) => UserId ::: ClientId -> Sem r Response diff --git a/services/galley/src/Galley/API/Create.hs b/services/galley/src/Galley/API/Create.hs index 99b00d9c70..2995ca3f07 100644 --- a/services/galley/src/Galley/API/Create.hs +++ b/services/galley/src/Galley/API/Create.hs @@ -71,6 +71,7 @@ import Wire.API.Conversation.Protocol import Wire.API.Error import Wire.API.Error.Galley import Wire.API.Event.Conversation +import Wire.API.Federation.API import Wire.API.Federation.Error import Wire.API.Routes.Public.Galley.Conversation import Wire.API.Routes.Public.Util @@ -84,29 +85,31 @@ import Wire.API.Team.Permission hiding (self) -- | The public-facing endpoint for creating group conversations. 
createGroupConversation :: - Members - '[ BrigAccess, - ConversationStore, - MemberStore, - ErrorS 'ConvAccessDenied, - Error InternalError, - Error InvalidInput, - ErrorS 'NotATeamMember, - ErrorS OperationDenied, - ErrorS 'NotConnected, - ErrorS 'MLSNotEnabled, - ErrorS 'MLSNonEmptyMemberList, - ErrorS 'MissingLegalholdConsent, - FederatorAccess, - GundeckAccess, - Input Env, - Input Opts, - Input UTCTime, - LegalHoldStore, - TeamStore, - P.TinyLog - ] - r => + ( Members + '[ BrigAccess, + ConversationStore, + MemberStore, + ErrorS 'ConvAccessDenied, + Error InternalError, + Error InvalidInput, + ErrorS 'NotATeamMember, + ErrorS OperationDenied, + ErrorS 'NotConnected, + ErrorS 'MLSNotEnabled, + ErrorS 'MLSNonEmptyMemberList, + ErrorS 'MissingLegalholdConsent, + FederatorAccess, + GundeckAccess, + Input Env, + Input Opts, + Input UTCTime, + LegalHoldStore, + TeamStore, + P.TinyLog + ] + r, + CallsFed 'Galley "on-conversation-created" + ) => Local UserId -> ConnId -> NewConv -> @@ -218,29 +221,31 @@ createProteusSelfConversation lusr = do createOne2OneConversation :: forall r. 
- Members - '[ BrigAccess, - ConversationStore, - ErrorS 'ConvAccessDenied, - Error FederationError, - Error InternalError, - Error InvalidInput, - ErrorS 'ConvAccessDenied, - ErrorS 'NotATeamMember, - ErrorS 'NonBindingTeam, - ErrorS 'NoBindingTeamMembers, - ErrorS OperationDenied, - ErrorS 'TeamNotFound, - ErrorS 'InvalidOperation, - ErrorS 'NotConnected, - ErrorS 'MissingLegalholdConsent, - FederatorAccess, - GundeckAccess, - Input UTCTime, - TeamStore, - P.TinyLog - ] - r => + ( Members + '[ BrigAccess, + ConversationStore, + ErrorS 'ConvAccessDenied, + Error FederationError, + Error InternalError, + Error InvalidInput, + ErrorS 'ConvAccessDenied, + ErrorS 'NotATeamMember, + ErrorS 'NonBindingTeam, + ErrorS 'NoBindingTeamMembers, + ErrorS OperationDenied, + ErrorS 'TeamNotFound, + ErrorS 'InvalidOperation, + ErrorS 'NotConnected, + ErrorS 'MissingLegalholdConsent, + FederatorAccess, + GundeckAccess, + Input UTCTime, + TeamStore, + P.TinyLog + ] + r, + CallsFed 'Galley "on-conversation-created" + ) => Local UserId -> ConnId -> NewConv -> @@ -285,16 +290,18 @@ createOne2OneConversation lusr zcon j = do Nothing -> throwS @'TeamNotFound createLegacyOne2OneConversationUnchecked :: - Members - '[ ConversationStore, - Error InternalError, - Error InvalidInput, - FederatorAccess, - GundeckAccess, - Input UTCTime, - P.TinyLog - ] - r => + ( Members + '[ ConversationStore, + Error InternalError, + Error InvalidInput, + FederatorAccess, + GundeckAccess, + Input UTCTime, + P.TinyLog + ] + r, + CallsFed 'Galley "on-conversation-created" + ) => Local UserId -> ConnId -> Maybe (Range 1 256 Text) -> @@ -324,17 +331,19 @@ createLegacyOne2OneConversationUnchecked self zcon name mtid other = do conversationCreated self c createOne2OneConversationUnchecked :: - Members - '[ ConversationStore, - Error FederationError, - Error InternalError, - ErrorS 'MissingLegalholdConsent, - FederatorAccess, - GundeckAccess, - Input UTCTime, - P.TinyLog - ] - r => + ( Members + '[ 
ConversationStore, + Error FederationError, + Error InternalError, + ErrorS 'MissingLegalholdConsent, + FederatorAccess, + GundeckAccess, + Input UTCTime, + P.TinyLog + ] + r, + CallsFed 'Galley "on-conversation-created" + ) => Local UserId -> ConnId -> Maybe (Range 1 256 Text) -> @@ -350,16 +359,18 @@ createOne2OneConversationUnchecked self zcon name mtid other = do create (one2OneConvId (tUntagged self) other) self zcon name mtid other createOne2OneConversationLocally :: - Members - '[ ConversationStore, - Error InternalError, - ErrorS 'MissingLegalholdConsent, - FederatorAccess, - GundeckAccess, - Input UTCTime, - P.TinyLog - ] - r => + ( Members + '[ ConversationStore, + Error InternalError, + ErrorS 'MissingLegalholdConsent, + FederatorAccess, + GundeckAccess, + Input UTCTime, + P.TinyLog + ] + r, + CallsFed 'Galley "on-conversation-created" + ) => Local ConvId -> Local UserId -> ConnId -> @@ -401,21 +412,23 @@ createOne2OneConversationRemotely _ _ _ _ _ _ = throw FederationNotImplemented createConnectConversation :: - Members - '[ ConversationStore, - ErrorS 'ConvNotFound, - Error FederationError, - Error InternalError, - Error InvalidInput, - ErrorS 'InvalidOperation, - ErrorS 'NotConnected, - FederatorAccess, - GundeckAccess, - Input UTCTime, - MemberStore, - P.TinyLog - ] - r => + ( Members + '[ ConversationStore, + ErrorS 'ConvNotFound, + Error FederationError, + Error InternalError, + Error InvalidInput, + ErrorS 'InvalidOperation, + ErrorS 'NotConnected, + FederatorAccess, + GundeckAccess, + Input UTCTime, + MemberStore, + P.TinyLog + ] + r, + CallsFed 'Galley "on-conversation-created" + ) => Local UserId -> Maybe ConnId -> Connect -> @@ -538,7 +551,9 @@ conversationCreated :: conversationCreated lusr cnv = Created <$> conversationView lusr cnv notifyCreatedConversation :: - Members '[Error InternalError, FederatorAccess, GundeckAccess, Input UTCTime, P.TinyLog] r => + ( Members '[Error InternalError, FederatorAccess, GundeckAccess, Input UTCTime, 
P.TinyLog] r, + CallsFed 'Galley "on-conversation-created" + ) => Maybe UTCTime -> Local UserId -> Maybe ConnId -> @@ -553,14 +568,16 @@ notifyCreatedConversation dtime lusr conn c = do -- of being added to a conversation registerRemoteConversationMemberships now (tDomain lusr) c -- Notify local users - E.push =<< mapM (toPush now) (Data.convLocalMembers c) + let remoteOthers = map remoteMemberToOther $ Data.convRemoteMembers c + localOthers = map (localMemberToOther (tDomain lusr)) $ Data.convLocalMembers c + E.push =<< mapM (toPush now remoteOthers localOthers) (Data.convLocalMembers c) where route | Data.convType c == RegularConv = RouteAny | otherwise = RouteDirect - toPush t m = do + toPush t remoteOthers localOthers m = do let lconv = qualifyAs lusr (Data.convId c) - c' <- conversationView (qualifyAs lusr (lmId m)) c + c' <- conversationViewWithCachedOthers remoteOthers localOthers c (qualifyAs lusr (lmId m)) let e = Event (tUntagged lconv) Nothing (tUntagged lusr) t (EdConversation c') pure $ newPushLocal1 ListComplete (tUnqualified lusr) (ConvEvent e) (list1 (recipient m) []) diff --git a/services/galley/src/Galley/API/Error.hs b/services/galley/src/Galley/API/Error.hs index e43688e0c1..0beb260031 100644 --- a/services/galley/src/Galley/API/Error.hs +++ b/services/galley/src/Galley/API/Error.hs @@ -45,6 +45,7 @@ data InternalError | NoPrekeyForUser | CannotCreateManagedConv | InternalErrorWithDescription LText + deriving (Eq) internalErrorDescription :: InternalError -> LText internalErrorDescription = message . toWai diff --git a/services/galley/src/Galley/API/Federation.hs b/services/galley/src/Galley/API/Federation.hs index 8664eb988e..0572690474 100644 --- a/services/galley/src/Galley/API/Federation.hs +++ b/services/galley/src/Galley/API/Federation.hs @@ -100,23 +100,24 @@ import Wire.API.ServantProto type FederationAPI = "federation" :> FedApi 'Galley -- | Convert a polysemy handler to an 'API' value. 
-federationSitemap :: ServerT FederationAPI (Sem GalleyEffects) +federationSitemap :: + ServerT FederationAPI (Sem GalleyEffects) federationSitemap = Named @"on-conversation-created" onConversationCreated :<|> Named @"on-new-remote-conversation" onNewRemoteConversation :<|> Named @"get-conversations" getConversations :<|> Named @"on-conversation-updated" onConversationUpdated - :<|> Named @"leave-conversation" leaveConversation + :<|> Named @"leave-conversation" (callsFed leaveConversation) :<|> Named @"on-message-sent" onMessageSent - :<|> Named @"send-message" sendMessage - :<|> Named @"on-user-deleted-conversations" onUserDeleted - :<|> Named @"update-conversation" updateConversation + :<|> Named @"send-message" (callsFed sendMessage) + :<|> Named @"on-user-deleted-conversations" (callsFed onUserDeleted) + :<|> Named @"update-conversation" (callsFed updateConversation) :<|> Named @"mls-welcome" mlsSendWelcome :<|> Named @"on-mls-message-sent" onMLSMessageSent - :<|> Named @"send-mls-message" sendMLSMessage - :<|> Named @"send-mls-commit-bundle" sendMLSCommitBundle + :<|> Named @"send-mls-message" (callsFed sendMLSMessage) + :<|> Named @"send-mls-commit-bundle" (callsFed sendMLSCommitBundle) :<|> Named @"query-group-info" queryGroupInfo - :<|> Named @"on-client-removed" onClientRemoved + :<|> Named @"on-client-removed" (callsFed onClientRemoved) :<|> Named @"on-typing-indicator-updated" onTypingIndicatorUpdated onClientRemoved :: @@ -133,7 +134,8 @@ onClientRemoved :: ProposalStore, TinyLog ] - r + r, + CallsFed 'Galley "on-mls-message-sent" ) => Domain -> ClientRemovedRequest -> @@ -330,21 +332,25 @@ addLocalUsersToRemoteConv remoteConvId qAdder localUsers = do -- as of now this will not generate the necessary events on the leaver's domain leaveConversation :: - Members - '[ ConversationStore, - Error InternalError, - Error InvalidInput, - ExternalAccess, - FederatorAccess, - GundeckAccess, - Input Env, - Input (Local ()), - Input UTCTime, - MemberStore, - 
ProposalStore, - TinyLog - ] - r => + ( Members + '[ ConversationStore, + Error InternalError, + Error InvalidInput, + ExternalAccess, + FederatorAccess, + GundeckAccess, + Input Env, + Input (Local ()), + Input UTCTime, + MemberStore, + ProposalStore, + TinyLog + ] + r, + CallsFed 'Galley "on-conversation-updated", + CallsFed 'Galley "on-mls-message-sent", + CallsFed 'Galley "on-new-remote-conversation" + ) => Domain -> F.LeaveConversationRequest -> Sem r F.LeaveConversationResponse @@ -433,22 +439,25 @@ onMessageSent domain rmUnqualified = do (Map.filterWithKey (\(uid, _) _ -> Set.member uid members) msgs) sendMessage :: - Members - '[ BrigAccess, - ClientStore, - ConversationStore, - Error InvalidInput, - FederatorAccess, - GundeckAccess, - Input (Local ()), - Input Opts, - Input UTCTime, - ExternalAccess, - MemberStore, - TeamStore, - P.TinyLog - ] - r => + ( Members + '[ BrigAccess, + ClientStore, + ConversationStore, + Error InvalidInput, + FederatorAccess, + GundeckAccess, + Input (Local ()), + Input Opts, + Input UTCTime, + ExternalAccess, + MemberStore, + TeamStore, + P.TinyLog + ] + r, + CallsFed 'Galley "on-message-sent", + CallsFed 'Brig "get-user-clients" + ) => Domain -> F.ProteusMessageSendRequest -> Sem r F.MessageSendResponse @@ -461,21 +470,25 @@ sendMessage originDomain msr = do throwErr = throw . InvalidPayload . 
LT.pack onUserDeleted :: - Members - '[ ConversationStore, - FederatorAccess, - FireAndForget, - ExternalAccess, - GundeckAccess, - Error InternalError, - Input (Local ()), - Input UTCTime, - Input Env, - MemberStore, - ProposalStore, - TinyLog - ] - r => + ( Members + '[ ConversationStore, + FederatorAccess, + FireAndForget, + ExternalAccess, + GundeckAccess, + Error InternalError, + Input (Local ()), + Input UTCTime, + Input Env, + MemberStore, + ProposalStore, + TinyLog + ] + r, + CallsFed 'Galley "on-mls-message-sent", + CallsFed 'Galley "on-conversation-updated", + CallsFed 'Galley "on-new-remote-conversation" + ) => Domain -> F.UserDeletedConversationsNotification -> Sem r EmptyResponse @@ -538,7 +551,10 @@ updateConversation :: ConversationStore, Input (Local ()) ] - r + r, + CallsFed 'Galley "on-conversation-updated", + CallsFed 'Galley "on-mls-message-sent", + CallsFed 'Galley "on-new-remote-conversation" ) => Domain -> F.ConversationUpdateRequest -> @@ -620,7 +636,13 @@ sendMLSCommitBundle :: P.TinyLog, ProposalStore ] - r + r, + CallsFed 'Galley "mls-welcome", + CallsFed 'Galley "on-conversation-updated", + CallsFed 'Galley "on-mls-message-sent", + CallsFed 'Galley "on-new-remote-conversation", + CallsFed 'Galley "send-mls-commit-bundle", + CallsFed 'Brig "get-mls-clients" ) => Domain -> F.MLSMessageSendRequest -> @@ -664,7 +686,12 @@ sendMLSMessage :: P.TinyLog, ProposalStore ] - r + r, + CallsFed 'Galley "on-conversation-updated", + CallsFed 'Galley "on-mls-message-sent", + CallsFed 'Galley "on-new-remote-conversation", + CallsFed 'Galley "send-mls-message", + CallsFed 'Brig "get-mls-clients" ) => Domain -> F.MLSMessageSendRequest -> diff --git a/services/galley/src/Galley/API/Internal.hs b/services/galley/src/Galley/API/Internal.hs index 6b2e9c7177..972248483c 100644 --- a/services/galley/src/Galley/API/Internal.hs +++ b/services/galley/src/Galley/API/Internal.hs @@ -86,6 +86,7 @@ import Servant hiding (JSON, WithStatus) import qualified Servant 
hiding (WithStatus) import System.Logger.Class hiding (Path, name) import qualified System.Logger.Class as Log +import Wire.API.ApplyMods import Wire.API.Conversation hiding (Member) import Wire.API.Conversation.Action import Wire.API.Conversation.Role @@ -129,78 +130,86 @@ type LegalHoldFeatureStatusChangeErrors = ) ) +type LegalHoldFeaturesStatusChangeFederatedCalls = + '[ MakesFederatedCall 'Galley "on-conversation-updated", + MakesFederatedCall 'Galley "on-mls-message-sent", + MakesFederatedCall 'Galley "on-new-remote-conversation" + ] + type IFeatureAPI = -- SSOConfig IFeatureStatusGet SSOConfig - :<|> IFeatureStatusPut '() SSOConfig - :<|> IFeatureStatusPatch '() SSOConfig + :<|> IFeatureStatusPut '[] '() SSOConfig + :<|> IFeatureStatusPatch '[] '() SSOConfig -- LegalholdConfig :<|> IFeatureStatusGet LegalholdConfig :<|> IFeatureStatusPut + LegalHoldFeaturesStatusChangeFederatedCalls LegalHoldFeatureStatusChangeErrors LegalholdConfig :<|> IFeatureStatusPatch + LegalHoldFeaturesStatusChangeFederatedCalls LegalHoldFeatureStatusChangeErrors LegalholdConfig -- SearchVisibilityAvailableConfig :<|> IFeatureStatusGet SearchVisibilityAvailableConfig - :<|> IFeatureStatusPut '() SearchVisibilityAvailableConfig - :<|> IFeatureStatusPatch '() SearchVisibilityAvailableConfig + :<|> IFeatureStatusPut '[] '() SearchVisibilityAvailableConfig + :<|> IFeatureStatusPatch '[] '() SearchVisibilityAvailableConfig -- ValidateSAMLEmailsConfig :<|> IFeatureStatusGet ValidateSAMLEmailsConfig - :<|> IFeatureStatusPut '() ValidateSAMLEmailsConfig - :<|> IFeatureStatusPatch '() ValidateSAMLEmailsConfig + :<|> IFeatureStatusPut '[] '() ValidateSAMLEmailsConfig + :<|> IFeatureStatusPatch '[] '() ValidateSAMLEmailsConfig -- DigitalSignaturesConfig :<|> IFeatureStatusGet DigitalSignaturesConfig - :<|> IFeatureStatusPut '() DigitalSignaturesConfig - :<|> IFeatureStatusPatch '() DigitalSignaturesConfig + :<|> IFeatureStatusPut '[] '() DigitalSignaturesConfig + :<|> IFeatureStatusPatch '[] '() 
DigitalSignaturesConfig -- AppLockConfig :<|> IFeatureStatusGet AppLockConfig - :<|> IFeatureStatusPut '() AppLockConfig - :<|> IFeatureStatusPatch '() AppLockConfig + :<|> IFeatureStatusPut '[] '() AppLockConfig + :<|> IFeatureStatusPatch '[] '() AppLockConfig -- FileSharingConfig :<|> IFeatureStatusGet FileSharingConfig - :<|> IFeatureStatusPut '() FileSharingConfig + :<|> IFeatureStatusPut '[] '() FileSharingConfig :<|> IFeatureStatusLockStatusPut FileSharingConfig - :<|> IFeatureStatusPatch '() FileSharingConfig + :<|> IFeatureStatusPatch '[] '() FileSharingConfig -- ConferenceCallingConfig :<|> IFeatureStatusGet ConferenceCallingConfig - :<|> IFeatureStatusPut '() ConferenceCallingConfig - :<|> IFeatureStatusPatch '() ConferenceCallingConfig + :<|> IFeatureStatusPut '[] '() ConferenceCallingConfig + :<|> IFeatureStatusPatch '[] '() ConferenceCallingConfig -- SelfDeletingMessagesConfig :<|> IFeatureStatusGet SelfDeletingMessagesConfig - :<|> IFeatureStatusPut '() SelfDeletingMessagesConfig + :<|> IFeatureStatusPut '[] '() SelfDeletingMessagesConfig :<|> IFeatureStatusLockStatusPut SelfDeletingMessagesConfig - :<|> IFeatureStatusPatch '() SelfDeletingMessagesConfig + :<|> IFeatureStatusPatch '[] '() SelfDeletingMessagesConfig -- GuestLinksConfig :<|> IFeatureStatusGet GuestLinksConfig - :<|> IFeatureStatusPut '() GuestLinksConfig + :<|> IFeatureStatusPut '[] '() GuestLinksConfig :<|> IFeatureStatusLockStatusPut GuestLinksConfig - :<|> IFeatureStatusPatch '() GuestLinksConfig + :<|> IFeatureStatusPatch '[] '() GuestLinksConfig -- SndFactorPasswordChallengeConfig :<|> IFeatureStatusGet SndFactorPasswordChallengeConfig - :<|> IFeatureStatusPut '() SndFactorPasswordChallengeConfig + :<|> IFeatureStatusPut '[] '() SndFactorPasswordChallengeConfig :<|> IFeatureStatusLockStatusPut SndFactorPasswordChallengeConfig - :<|> IFeatureStatusPatch '() SndFactorPasswordChallengeConfig + :<|> IFeatureStatusPatch '[] '() SndFactorPasswordChallengeConfig -- 
SearchVisibilityInboundConfig :<|> IFeatureStatusGet SearchVisibilityInboundConfig - :<|> IFeatureStatusPut '() SearchVisibilityInboundConfig - :<|> IFeatureStatusPatch '() SearchVisibilityInboundConfig + :<|> IFeatureStatusPut '[] '() SearchVisibilityInboundConfig + :<|> IFeatureStatusPatch '[] '() SearchVisibilityInboundConfig :<|> IFeatureNoConfigMultiGet SearchVisibilityInboundConfig -- ClassifiedDomainsConfig :<|> IFeatureStatusGet ClassifiedDomainsConfig -- MLSConfig :<|> IFeatureStatusGet MLSConfig - :<|> IFeatureStatusPut '() MLSConfig - :<|> IFeatureStatusPatch '() MLSConfig + :<|> IFeatureStatusPut '[] '() MLSConfig + :<|> IFeatureStatusPatch '[] '() MLSConfig -- ExposeInvitationURLsToTeamAdminConfig :<|> IFeatureStatusGet ExposeInvitationURLsToTeamAdminConfig - :<|> IFeatureStatusPut '() ExposeInvitationURLsToTeamAdminConfig - :<|> IFeatureStatusPatch '() ExposeInvitationURLsToTeamAdminConfig + :<|> IFeatureStatusPut '[] '() ExposeInvitationURLsToTeamAdminConfig + :<|> IFeatureStatusPatch '[] '() ExposeInvitationURLsToTeamAdminConfig -- SearchVisibilityInboundConfig :<|> IFeatureStatusGet SearchVisibilityInboundConfig - :<|> IFeatureStatusPut '() SearchVisibilityInboundConfig - :<|> IFeatureStatusPatch '() SearchVisibilityInboundConfig + :<|> IFeatureStatusPut '[] '() SearchVisibilityInboundConfig + :<|> IFeatureStatusPatch '[] '() SearchVisibilityInboundConfig -- all feature configs :<|> Named "feature-configs-internal" @@ -232,6 +241,9 @@ type InternalAPIBase = "delete-user" ( Summary "Remove a user from their teams and conversations and erase their clients" + :> MakesFederatedCall 'Galley "on-conversation-updated" + :> MakesFederatedCall 'Galley "on-user-deleted-conversations" + :> MakesFederatedCall 'Galley "on-mls-message-sent" :> ZLocalUser :> ZOptConn :> "user" @@ -243,6 +255,7 @@ type InternalAPIBase = :<|> Named "connect" ( Summary "Create a connect conversation (deprecated)" + :> MakesFederatedCall 'Galley "on-conversation-created" :> CanThrow 
'ConvNotFound :> CanThrow 'InvalidOperation :> CanThrow 'NotConnected @@ -393,9 +406,9 @@ type ITeamsAPIBase = type IFeatureStatusGet f = Named '("iget", f) (FeatureStatusBaseGet f) -type IFeatureStatusPut errs f = Named '("iput", f) (FeatureStatusBasePutInternal errs f) +type IFeatureStatusPut calls errs f = Named '("iput", f) (ApplyMods calls (FeatureStatusBasePutInternal errs f)) -type IFeatureStatusPatch errs f = Named '("ipatch", f) (FeatureStatusBasePatchInternal errs f) +type IFeatureStatusPatch calls errs f = Named '("ipatch", f) (ApplyMods calls (FeatureStatusBasePatchInternal errs f)) type FeatureStatusBasePutInternal errs featureConfig = FeatureStatusBaseInternal @@ -459,8 +472,8 @@ internalAPI :: API InternalAPI GalleyEffects internalAPI = hoistAPI @InternalAPIBase id $ mkNamedAPI @"status" (pure ()) - <@> mkNamedAPI @"delete-user" rmUser - <@> mkNamedAPI @"connect" Create.createConnectConversation + <@> mkNamedAPI @"delete-user" (callsFed rmUser) + <@> mkNamedAPI @"connect" (callsFed Create.createConnectConversation) <@> mkNamedAPI @"guard-legalhold-policy-conflicts" guardLegalholdPolicyConflictsH <@> legalholdWhitelistedTeamsAPI <@> iTeamsAPI @@ -511,8 +524,8 @@ featureAPI = <@> mkNamedAPI @'("iput", SSOConfig) (setFeatureStatusInternal @Cassandra) <@> mkNamedAPI @'("ipatch", SSOConfig) (patchFeatureStatusInternal @Cassandra) <@> mkNamedAPI @'("iget", LegalholdConfig) (getFeatureStatus @Cassandra DontDoAuth) - <@> mkNamedAPI @'("iput", LegalholdConfig) (setFeatureStatusInternal @Cassandra) - <@> mkNamedAPI @'("ipatch", LegalholdConfig) (patchFeatureStatusInternal @Cassandra) + <@> mkNamedAPI @'("iput", LegalholdConfig) (callsFed (setFeatureStatusInternal @Cassandra)) + <@> mkNamedAPI @'("ipatch", LegalholdConfig) (callsFed (patchFeatureStatusInternal @Cassandra)) <@> mkNamedAPI @'("iget", SearchVisibilityAvailableConfig) (getFeatureStatus @Cassandra DontDoAuth) <@> mkNamedAPI @'("iput", SearchVisibilityAvailableConfig) (setFeatureStatusInternal 
@Cassandra) <@> mkNamedAPI @'("ipatch", SearchVisibilityAvailableConfig) (patchFeatureStatusInternal @Cassandra) @@ -561,7 +574,7 @@ featureAPI = <@> mkNamedAPI @"feature-configs-internal" (maybe (getAllFeatureConfigsForServer @Cassandra) (getAllFeatureConfigsForUser @Cassandra)) internalSitemap :: Routes a (Sem GalleyEffects) () -internalSitemap = do +internalSitemap = unsafeCallsFed @'Galley @"on-client-removed" $ unsafeCallsFed @'Galley @"on-mls-message-sent" $ do -- Conversation API (internal) ---------------------------------------- put "/i/conversations/:cnv/channel" (continue $ const (pure empty)) $ zauthUserId @@ -668,7 +681,10 @@ rmUser :: P.TinyLog, TeamStore ] - r + r, + CallsFed 'Galley "on-conversation-updated", + CallsFed 'Galley "on-user-deleted-conversations", + CallsFed 'Galley "on-mls-message-sent" ) => Local UserId -> Maybe ConnId -> diff --git a/services/galley/src/Galley/API/LegalHold.hs b/services/galley/src/Galley/API/LegalHold.hs index 6f595da9bf..cfb7e9cbae 100644 --- a/services/galley/src/Galley/API/LegalHold.hs +++ b/services/galley/src/Galley/API/LegalHold.hs @@ -72,6 +72,7 @@ import Wire.API.Conversation (ConvType (..)) import Wire.API.Conversation.Role import Wire.API.Error import Wire.API.Error.Galley +import Wire.API.Federation.API import Wire.API.Provider.Service import Wire.API.Routes.Internal.Brig.Connection import Wire.API.Routes.Public.Galley.LegalHold @@ -184,40 +185,44 @@ getSettings lzusr tid = do removeSettingsInternalPaging :: forall db r. 
- Members - '[ BotAccess, - BrigAccess, - CodeStore, - ConversationStore, - Error AuthenticationError, - Error InternalError, - ErrorS ('ActionDenied 'RemoveConversationMember), - ErrorS 'InvalidOperation, - ErrorS 'LegalHoldCouldNotBlockConnections, - ErrorS 'LegalHoldDisableUnimplemented, - ErrorS 'LegalHoldNotEnabled, - ErrorS 'LegalHoldServiceNotRegistered, - ErrorS 'NotATeamMember, - ErrorS OperationDenied, - ErrorS 'UserLegalHoldIllegalOperation, - ExternalAccess, - FederatorAccess, - FireAndForget, - GundeckAccess, - Input Env, - Input (Local ()), - Input UTCTime, - LegalHoldStore, - ListItems LegacyPaging ConvId, - MemberStore, - ProposalStore, - P.TinyLog, - TeamFeatureStore db, - TeamMemberStore InternalPaging, - TeamStore, - WaiRoutes - ] - r => + ( Members + '[ BotAccess, + BrigAccess, + CodeStore, + ConversationStore, + Error AuthenticationError, + Error InternalError, + ErrorS ('ActionDenied 'RemoveConversationMember), + ErrorS 'InvalidOperation, + ErrorS 'LegalHoldCouldNotBlockConnections, + ErrorS 'LegalHoldDisableUnimplemented, + ErrorS 'LegalHoldNotEnabled, + ErrorS 'LegalHoldServiceNotRegistered, + ErrorS 'NotATeamMember, + ErrorS OperationDenied, + ErrorS 'UserLegalHoldIllegalOperation, + ExternalAccess, + FederatorAccess, + FireAndForget, + GundeckAccess, + Input Env, + Input (Local ()), + Input UTCTime, + LegalHoldStore, + ListItems LegacyPaging ConvId, + MemberStore, + ProposalStore, + P.TinyLog, + TeamFeatureStore db, + TeamMemberStore InternalPaging, + TeamStore, + WaiRoutes + ] + r, + CallsFed 'Galley "on-conversation-updated", + CallsFed 'Galley "on-mls-message-sent", + CallsFed 'Galley "on-new-remote-conversation" + ) => TeamFeatures.FeaturePersistentConstraint db Public.LegalholdConfig => Local UserId -> TeamId -> @@ -261,7 +266,10 @@ removeSettings :: TeamMemberStore p, TeamStore ] - r + r, + CallsFed 'Galley "on-conversation-updated", + CallsFed 'Galley "on-mls-message-sent", + CallsFed 'Galley "on-new-remote-conversation" ) => 
TeamFeatures.FeaturePersistentConstraint db Public.LegalholdConfig => UserId -> @@ -320,7 +328,10 @@ removeSettings' :: ProposalStore, P.TinyLog ] - r + r, + CallsFed 'Galley "on-conversation-updated", + CallsFed 'Galley "on-mls-message-sent", + CallsFed 'Galley "on-new-remote-conversation" ) => TeamId -> Sem r () @@ -388,28 +399,32 @@ getUserStatus _lzusr tid uid = do -- @withdrawExplicitConsentH@ (lots of corner cases we'd have to implement for that to pan -- out). grantConsent :: - Members - '[ BrigAccess, - ConversationStore, - Error InternalError, - ErrorS ('ActionDenied 'RemoveConversationMember), - ErrorS 'InvalidOperation, - ErrorS 'LegalHoldCouldNotBlockConnections, - ErrorS 'TeamMemberNotFound, - ErrorS 'UserLegalHoldIllegalOperation, - ExternalAccess, - FederatorAccess, - GundeckAccess, - Input Env, - Input UTCTime, - LegalHoldStore, - ListItems LegacyPaging ConvId, - MemberStore, - ProposalStore, - P.TinyLog, - TeamStore - ] - r => + ( Members + '[ BrigAccess, + ConversationStore, + Error InternalError, + ErrorS ('ActionDenied 'RemoveConversationMember), + ErrorS 'InvalidOperation, + ErrorS 'LegalHoldCouldNotBlockConnections, + ErrorS 'TeamMemberNotFound, + ErrorS 'UserLegalHoldIllegalOperation, + ExternalAccess, + FederatorAccess, + GundeckAccess, + Input Env, + Input UTCTime, + LegalHoldStore, + ListItems LegacyPaging ConvId, + MemberStore, + ProposalStore, + P.TinyLog, + TeamStore + ] + r, + CallsFed 'Galley "on-conversation-updated", + CallsFed 'Galley "on-mls-message-sent", + CallsFed 'Galley "on-new-remote-conversation" + ) => Local UserId -> TeamId -> Sem r GrantConsentResult @@ -427,36 +442,40 @@ grantConsent lusr tid = do -- | Request to provision a device on the legal hold service for a user requestDevice :: forall db r. 
- Members - '[ BrigAccess, - ConversationStore, - Error InternalError, - ErrorS ('ActionDenied 'RemoveConversationMember), - ErrorS 'LegalHoldCouldNotBlockConnections, - ErrorS 'LegalHoldNotEnabled, - ErrorS 'LegalHoldServiceBadResponse, - ErrorS 'LegalHoldServiceNotRegistered, - ErrorS 'NotATeamMember, - ErrorS 'NoUserLegalHoldConsent, - ErrorS OperationDenied, - ErrorS 'TeamMemberNotFound, - ErrorS 'UserLegalHoldAlreadyEnabled, - ErrorS 'UserLegalHoldIllegalOperation, - ExternalAccess, - FederatorAccess, - GundeckAccess, - Input (Local ()), - Input Env, - Input UTCTime, - LegalHoldStore, - ListItems LegacyPaging ConvId, - MemberStore, - ProposalStore, - P.TinyLog, - TeamFeatureStore db, - TeamStore - ] - r => + ( Members + '[ BrigAccess, + ConversationStore, + Error InternalError, + ErrorS ('ActionDenied 'RemoveConversationMember), + ErrorS 'LegalHoldCouldNotBlockConnections, + ErrorS 'LegalHoldNotEnabled, + ErrorS 'LegalHoldServiceBadResponse, + ErrorS 'LegalHoldServiceNotRegistered, + ErrorS 'NotATeamMember, + ErrorS 'NoUserLegalHoldConsent, + ErrorS OperationDenied, + ErrorS 'TeamMemberNotFound, + ErrorS 'UserLegalHoldAlreadyEnabled, + ErrorS 'UserLegalHoldIllegalOperation, + ExternalAccess, + FederatorAccess, + GundeckAccess, + Input (Local ()), + Input Env, + Input UTCTime, + LegalHoldStore, + ListItems LegacyPaging ConvId, + MemberStore, + ProposalStore, + P.TinyLog, + TeamFeatureStore db, + TeamStore + ] + r, + CallsFed 'Galley "on-conversation-updated", + CallsFed 'Galley "on-mls-message-sent", + CallsFed 'Galley "on-new-remote-conversation" + ) => TeamFeatures.FeaturePersistentConstraint db Public.LegalholdConfig => Local UserId -> TeamId -> @@ -507,36 +526,40 @@ requestDevice lzusr tid uid = do -- since they are replaced if needed when registering new LH devices. approveDevice :: forall db r. 
- Members - '[ BrigAccess, - ConversationStore, - Error AuthenticationError, - Error InternalError, - ErrorS 'AccessDenied, - ErrorS ('ActionDenied 'RemoveConversationMember), - ErrorS 'LegalHoldCouldNotBlockConnections, - ErrorS 'LegalHoldNotEnabled, - ErrorS 'LegalHoldServiceNotRegistered, - ErrorS 'NoLegalHoldDeviceAllocated, - ErrorS 'NotATeamMember, - ErrorS 'UserLegalHoldAlreadyEnabled, - ErrorS 'UserLegalHoldIllegalOperation, - ErrorS 'UserLegalHoldNotPending, - ExternalAccess, - FederatorAccess, - GundeckAccess, - Input (Local ()), - Input Env, - Input UTCTime, - LegalHoldStore, - ListItems LegacyPaging ConvId, - MemberStore, - ProposalStore, - P.TinyLog, - TeamFeatureStore db, - TeamStore - ] - r => + ( Members + '[ BrigAccess, + ConversationStore, + Error AuthenticationError, + Error InternalError, + ErrorS 'AccessDenied, + ErrorS ('ActionDenied 'RemoveConversationMember), + ErrorS 'LegalHoldCouldNotBlockConnections, + ErrorS 'LegalHoldNotEnabled, + ErrorS 'LegalHoldServiceNotRegistered, + ErrorS 'NoLegalHoldDeviceAllocated, + ErrorS 'NotATeamMember, + ErrorS 'UserLegalHoldAlreadyEnabled, + ErrorS 'UserLegalHoldIllegalOperation, + ErrorS 'UserLegalHoldNotPending, + ExternalAccess, + FederatorAccess, + GundeckAccess, + Input (Local ()), + Input Env, + Input UTCTime, + LegalHoldStore, + ListItems LegacyPaging ConvId, + MemberStore, + ProposalStore, + P.TinyLog, + TeamFeatureStore db, + TeamStore + ] + r, + CallsFed 'Galley "on-conversation-updated", + CallsFed 'Galley "on-mls-message-sent", + CallsFed 'Galley "on-new-remote-conversation" + ) => TeamFeatures.FeaturePersistentConstraint db Public.LegalholdConfig => Local UserId -> ConnId -> @@ -588,31 +611,35 @@ approveDevice lzusr connId tid uid (Public.ApproveLegalHoldForUserRequest mPassw disableForUser :: forall r. 
- Members - '[ BrigAccess, - ConversationStore, - Error AuthenticationError, - Error InternalError, - ErrorS ('ActionDenied 'RemoveConversationMember), - ErrorS 'LegalHoldCouldNotBlockConnections, - ErrorS 'LegalHoldServiceNotRegistered, - ErrorS 'NotATeamMember, - ErrorS OperationDenied, - ErrorS 'UserLegalHoldIllegalOperation, - ExternalAccess, - FederatorAccess, - GundeckAccess, - Input Env, - Input (Local ()), - Input UTCTime, - LegalHoldStore, - ListItems LegacyPaging ConvId, - MemberStore, - ProposalStore, - P.TinyLog, - TeamStore - ] - r => + ( Members + '[ BrigAccess, + ConversationStore, + Error AuthenticationError, + Error InternalError, + ErrorS ('ActionDenied 'RemoveConversationMember), + ErrorS 'LegalHoldCouldNotBlockConnections, + ErrorS 'LegalHoldServiceNotRegistered, + ErrorS 'NotATeamMember, + ErrorS OperationDenied, + ErrorS 'UserLegalHoldIllegalOperation, + ExternalAccess, + FederatorAccess, + GundeckAccess, + Input Env, + Input (Local ()), + Input UTCTime, + LegalHoldStore, + ListItems LegacyPaging ConvId, + MemberStore, + ProposalStore, + P.TinyLog, + TeamStore + ] + r, + CallsFed 'Galley "on-conversation-updated", + CallsFed 'Galley "on-mls-message-sent", + CallsFed 'Galley "on-new-remote-conversation" + ) => Local UserId -> TeamId -> UserId -> @@ -646,26 +673,30 @@ disableForUser lzusr tid uid (Public.DisableLegalHoldForUserRequest mPassword) = -- or disabled, make sure the affected connections are screened for policy conflict (anybody -- with no-consent), and put those connections in the appropriate blocked state. 
changeLegalholdStatus :: - Members - '[ BrigAccess, - ConversationStore, - Error InternalError, - ErrorS ('ActionDenied 'RemoveConversationMember), - ErrorS 'LegalHoldCouldNotBlockConnections, - ErrorS 'UserLegalHoldIllegalOperation, - ExternalAccess, - FederatorAccess, - GundeckAccess, - Input Env, - Input UTCTime, - LegalHoldStore, - ListItems LegacyPaging ConvId, - MemberStore, - TeamStore, - ProposalStore, - P.TinyLog - ] - r => + ( Members + '[ BrigAccess, + ConversationStore, + Error InternalError, + ErrorS ('ActionDenied 'RemoveConversationMember), + ErrorS 'LegalHoldCouldNotBlockConnections, + ErrorS 'UserLegalHoldIllegalOperation, + ExternalAccess, + FederatorAccess, + GundeckAccess, + Input Env, + Input UTCTime, + LegalHoldStore, + ListItems LegacyPaging ConvId, + MemberStore, + TeamStore, + ProposalStore, + P.TinyLog + ] + r, + CallsFed 'Galley "on-conversation-updated", + CallsFed 'Galley "on-mls-message-sent", + CallsFed 'Galley "on-new-remote-conversation" + ) => TeamId -> Local UserId -> UserLegalHoldStatus -> @@ -766,22 +797,26 @@ unsetTeamLegalholdWhitelistedH tid = do -- contains the hypothetical new LH status of `uid`'s so it can be consulted instead of the -- one from the database. 
handleGroupConvPolicyConflicts :: - Members - '[ ConversationStore, - Error InternalError, - ErrorS ('ActionDenied 'RemoveConversationMember), - ExternalAccess, - FederatorAccess, - GundeckAccess, - Input Env, - Input UTCTime, - ListItems LegacyPaging ConvId, - MemberStore, - ProposalStore, - P.TinyLog, - TeamStore - ] - r => + ( Members + '[ ConversationStore, + Error InternalError, + ErrorS ('ActionDenied 'RemoveConversationMember), + ExternalAccess, + FederatorAccess, + GundeckAccess, + Input Env, + Input UTCTime, + ListItems LegacyPaging ConvId, + MemberStore, + ProposalStore, + P.TinyLog, + TeamStore + ] + r, + CallsFed 'Galley "on-conversation-updated", + CallsFed 'Galley "on-mls-message-sent", + CallsFed 'Galley "on-new-remote-conversation" + ) => Local UserId -> UserLegalHoldStatus -> Sem r () diff --git a/services/galley/src/Galley/API/MLS/GroupInfo.hs b/services/galley/src/Galley/API/MLS/GroupInfo.hs index ea2b16c78d..46a6f530f7 100644 --- a/services/galley/src/Galley/API/MLS/GroupInfo.hs +++ b/services/galley/src/Galley/API/MLS/GroupInfo.hs @@ -45,14 +45,16 @@ type MLSGroupInfoStaticErrors = ] getGroupInfo :: - Members - '[ ConversationStore, - Error FederationError, - FederatorAccess, - Input Env, - MemberStore - ] - r => + ( Members + '[ ConversationStore, + Error FederationError, + FederatorAccess, + Input Env, + MemberStore + ] + r, + CallsFed 'Galley "query-group-info" + ) => Members MLSGroupInfoStaticErrors r => Local UserId -> Qualified ConvId -> @@ -81,7 +83,7 @@ getGroupInfoFromLocalConv qusr lcnvId = do >>= noteS @'MLSMissingGroupInfo getGroupInfoFromRemoteConv :: - Members '[Error FederationError, FederatorAccess] r => + (Members '[Error FederationError, FederatorAccess] r, CallsFed 'Galley "query-group-info") => Members MLSGroupInfoStaticErrors r => Local UserId -> Remote ConvId -> diff --git a/services/galley/src/Galley/API/MLS/Message.hs b/services/galley/src/Galley/API/MLS/Message.hs index 16fb3e71a4..b939e3ab08 100644 --- 
a/services/galley/src/Galley/API/MLS/Message.hs +++ b/services/galley/src/Galley/API/MLS/Message.hs @@ -140,7 +140,12 @@ postMLSMessageFromLocalUserV1 :: Resource, TinyLog ] - r + r, + CallsFed 'Galley "on-mls-message-sent", + CallsFed 'Galley "send-mls-message", + CallsFed 'Galley "on-conversation-updated", + CallsFed 'Galley "on-new-remote-conversation", + CallsFed 'Brig "get-mls-clients" ) => Local UserId -> Maybe ClientId -> @@ -178,7 +183,12 @@ postMLSMessageFromLocalUser :: Resource, TinyLog ] - r + r, + CallsFed 'Galley "on-mls-message-sent", + CallsFed 'Galley "send-mls-message", + CallsFed 'Galley "on-conversation-updated", + CallsFed 'Galley "on-new-remote-conversation", + CallsFed 'Brig "get-mls-clients" ) => Local UserId -> Maybe ClientId -> @@ -209,7 +219,13 @@ postMLSCommitBundle :: Resource, TinyLog ] - r + r, + CallsFed 'Galley "on-mls-message-sent", + CallsFed 'Galley "mls-welcome", + CallsFed 'Galley "send-mls-commit-bundle", + CallsFed 'Galley "on-conversation-updated", + CallsFed 'Galley "on-new-remote-conversation", + CallsFed 'Brig "get-mls-clients" ) => Local x -> Qualified UserId -> @@ -241,7 +257,13 @@ postMLSCommitBundleFromLocalUser :: Resource, TinyLog ] - r + r, + CallsFed 'Galley "on-mls-message-sent", + CallsFed 'Galley "mls-welcome", + CallsFed 'Galley "send-mls-commit-bundle", + CallsFed 'Galley "on-conversation-updated", + CallsFed 'Galley "on-new-remote-conversation", + CallsFed 'Brig "get-mls-clients" ) => Local UserId -> Maybe ClientId -> @@ -272,7 +294,12 @@ postMLSCommitBundleToLocalConv :: Resource, TinyLog ] - r + r, + CallsFed 'Galley "on-mls-message-sent", + CallsFed 'Galley "mls-welcome", + CallsFed 'Galley "on-conversation-updated", + CallsFed 'Galley "on-new-remote-conversation", + CallsFed 'Brig "get-mls-clients" ) => Qualified UserId -> Maybe ClientId -> @@ -337,7 +364,8 @@ postMLSCommitBundleToRemoteConv :: MemberStore, TinyLog ] - r + r, + CallsFed 'Galley "send-mls-commit-bundle" ) => Local x -> Qualified UserId -> 
@@ -392,7 +420,12 @@ postMLSMessage :: Resource, TinyLog ] - r + r, + CallsFed 'Galley "on-mls-message-sent", + CallsFed 'Galley "send-mls-message", + CallsFed 'Galley "on-conversation-updated", + CallsFed 'Galley "on-new-remote-conversation", + CallsFed 'Brig "get-mls-clients" ) => Local x -> Qualified UserId -> @@ -478,7 +511,11 @@ postMLSMessageToLocalConv :: Resource, TinyLog ] - r + r, + CallsFed 'Galley "on-mls-message-sent", + CallsFed 'Galley "on-conversation-updated", + CallsFed 'Galley "on-new-remote-conversation", + CallsFed 'Brig "get-mls-clients" ) => Qualified UserId -> Maybe ClientId -> @@ -517,7 +554,8 @@ postMLSMessageToLocalConv qusr senderClient con smsg lcnv = case rmValue smsg of postMLSMessageToRemoteConv :: ( Members MLSMessageStaticErrors r, Members '[Error FederationError, TinyLog] r, - HasProposalEffects r + HasProposalEffects r, + CallsFed 'Galley "send-mls-message" ) => Local x -> Qualified UserId -> @@ -642,7 +680,11 @@ processCommit :: Member (Input (Local ())) r, Member ProposalStore r, Member BrigAccess r, - Member Resource r + Member Resource r, + CallsFed 'Galley "on-mls-message-sent", + CallsFed 'Galley "on-conversation-updated", + CallsFed 'Galley "on-new-remote-conversation", + CallsFed 'Brig "get-mls-clients" ) => Qualified UserId -> Maybe ClientId -> @@ -660,26 +702,28 @@ processCommit qusr senderClient con lconv mlsMeta cm epoch sender commit = do processExternalCommit :: forall r. 
- Members - '[ BrigAccess, - ConversationStore, - Error MLSProtocolError, - ErrorS 'ConvNotFound, - ErrorS 'MLSClientSenderUserMismatch, - ErrorS 'MLSKeyPackageRefNotFound, - ErrorS 'MLSStaleMessage, - ErrorS 'MLSMissingSenderClient, - ExternalAccess, - FederatorAccess, - GundeckAccess, - Input Env, - Input UTCTime, - MemberStore, - ProposalStore, - Resource, - TinyLog - ] - r => + ( Members + '[ BrigAccess, + ConversationStore, + Error MLSProtocolError, + ErrorS 'ConvNotFound, + ErrorS 'MLSClientSenderUserMismatch, + ErrorS 'MLSKeyPackageRefNotFound, + ErrorS 'MLSStaleMessage, + ErrorS 'MLSMissingSenderClient, + ExternalAccess, + FederatorAccess, + GundeckAccess, + Input Env, + Input UTCTime, + MemberStore, + ProposalStore, + Resource, + TinyLog + ] + r, + CallsFed 'Galley "on-mls-message-sent" + ) => Qualified UserId -> Maybe ClientId -> Local Data.Conversation -> @@ -784,7 +828,11 @@ processCommitWithAction :: Member (Input (Local ())) r, Member ProposalStore r, Member BrigAccess r, - Member Resource r + Member Resource r, + CallsFed 'Galley "on-mls-message-sent", + CallsFed 'Galley "on-conversation-updated", + CallsFed 'Galley "on-new-remote-conversation", + CallsFed 'Brig "get-mls-clients" ) => Qualified UserId -> Maybe ClientId -> @@ -819,7 +867,11 @@ processInternalCommit :: Member (Input (Local ())) r, Member ProposalStore r, Member BrigAccess r, - Member Resource r + Member Resource r, + CallsFed 'Galley "on-conversation-updated", + CallsFed 'Galley "on-mls-message-sent", + CallsFed 'Galley "on-new-remote-conversation", + CallsFed 'Brig "get-mls-clients" ) => Qualified UserId -> Maybe ClientId -> @@ -1153,7 +1205,11 @@ executeProposalAction :: Member MemberStore r, Member ProposalStore r, Member TeamStore r, - Member TinyLog r + Member TinyLog r, + CallsFed 'Galley "on-conversation-updated", + CallsFed 'Galley "on-mls-message-sent", + CallsFed 'Galley "on-new-remote-conversation", + CallsFed 'Brig "get-mls-clients" ) => Qualified UserId -> Maybe ConnId -> 
@@ -1292,7 +1348,9 @@ handleNoChanges :: Monoid a => Sem (Error NoChanges ': r) a -> Sem r a handleNoChanges = fmap fold . runError getClientInfo :: - Members '[BrigAccess, FederatorAccess] r => + ( Members '[BrigAccess, FederatorAccess] r, + CallsFed 'Brig "get-mls-clients" + ) => Local x -> Qualified UserId -> SignatureSchemeTag -> @@ -1300,7 +1358,9 @@ getClientInfo :: getClientInfo loc = foldQualified loc getLocalMLSClients getRemoteMLSClients getRemoteMLSClients :: - Member FederatorAccess r => + ( Member FederatorAccess r, + CallsFed 'Brig "get-mls-clients" + ) => Remote UserId -> SignatureSchemeTag -> Sem r (Set ClientInfo) diff --git a/services/galley/src/Galley/API/MLS/Propagate.hs b/services/galley/src/Galley/API/MLS/Propagate.hs index 74fbf8f608..22ca2d9d5e 100644 --- a/services/galley/src/Galley/API/MLS/Propagate.hs +++ b/services/galley/src/Galley/API/MLS/Propagate.hs @@ -52,7 +52,8 @@ propagateMessage :: Member FederatorAccess r, Member GundeckAccess r, Member (Input UTCTime) r, - Member TinyLog r + Member TinyLog r, + CallsFed 'Galley "on-mls-message-sent" ) => Qualified UserId -> Local Data.Conversation -> diff --git a/services/galley/src/Galley/API/MLS/Removal.hs b/services/galley/src/Galley/API/MLS/Removal.hs index f16edf2bd2..d06971da27 100644 --- a/services/galley/src/Galley/API/MLS/Removal.hs +++ b/services/galley/src/Galley/API/MLS/Removal.hs @@ -44,6 +44,7 @@ import Polysemy.Input import Polysemy.TinyLog import qualified System.Logger as Log import Wire.API.Conversation.Protocol +import Wire.API.Federation.API import Wire.API.MLS.KeyPackage import Wire.API.MLS.Message import Wire.API.MLS.Proposal @@ -61,7 +62,8 @@ removeClientsWithClientMap :: Input Env ] r, - Traversable t + Traversable t, + CallsFed 'Galley "on-mls-message-sent" ) => Local Data.Conversation -> t KeyPackageRef -> @@ -102,7 +104,8 @@ removeClient :: ProposalStore, TinyLog ] - r + r, + CallsFed 'Galley "on-mls-message-sent" ) => Local Data.Conversation -> Qualified UserId -> 
@@ -128,7 +131,8 @@ removeUserWithClientMap :: ProposalStore, Input Env ] - r + r, + CallsFed 'Galley "on-mls-message-sent" ) => Local Data.Conversation -> ClientMap -> @@ -150,7 +154,8 @@ removeUser :: ProposalStore, TinyLog ] - r + r, + CallsFed 'Galley "on-mls-message-sent" ) => Local Data.Conversation -> Qualified UserId -> diff --git a/services/galley/src/Galley/API/MLS/Welcome.hs b/services/galley/src/Galley/API/MLS/Welcome.hs index 7e508e8f98..84c67b31f9 100644 --- a/services/galley/src/Galley/API/MLS/Welcome.hs +++ b/services/galley/src/Galley/API/MLS/Welcome.hs @@ -55,15 +55,17 @@ import Wire.API.MLS.Welcome import Wire.API.Message postMLSWelcome :: - Members - '[ BrigAccess, - FederatorAccess, - GundeckAccess, - ErrorS 'MLSKeyPackageRefNotFound, - Input UTCTime, - P.TinyLog - ] - r => + ( Members + '[ BrigAccess, + FederatorAccess, + GundeckAccess, + ErrorS 'MLSKeyPackageRefNotFound, + Input UTCTime, + P.TinyLog + ] + r, + CallsFed 'Galley "mls-welcome" + ) => Local x -> Maybe ConnId -> RawMLS Welcome -> @@ -76,17 +78,19 @@ postMLSWelcome loc con wel = do sendRemoteWelcomes (rmRaw wel) remotes postMLSWelcomeFromLocalUser :: - Members - '[ BrigAccess, - FederatorAccess, - GundeckAccess, - ErrorS 'MLSKeyPackageRefNotFound, - ErrorS 'MLSNotEnabled, - Input UTCTime, - Input Env, - P.TinyLog - ] - r => + ( Members + '[ BrigAccess, + FederatorAccess, + GundeckAccess, + ErrorS 'MLSKeyPackageRefNotFound, + ErrorS 'MLSNotEnabled, + Input UTCTime, + Input Env, + P.TinyLog + ] + r, + CallsFed 'Galley "mls-welcome" + ) => Local x -> ConnId -> RawMLS Welcome -> @@ -131,11 +135,13 @@ sendLocalWelcomes con now rawWelcome lclients = do in newMessagePush lclients mempty con defMessageMetadata (u, c) e sendRemoteWelcomes :: - Members - '[ FederatorAccess, - P.TinyLog - ] - r => + ( Members + '[ FederatorAccess, + P.TinyLog + ] + r, + CallsFed 'Galley "mls-welcome" + ) => ByteString -> [Remote (UserId, ClientId)] -> Sem r () diff --git 
a/services/galley/src/Galley/API/Mapping.hs b/services/galley/src/Galley/API/Mapping.hs index 530d421eb4..d6bd3d4392 100644 --- a/services/galley/src/Galley/API/Mapping.hs +++ b/services/galley/src/Galley/API/Mapping.hs @@ -17,7 +17,7 @@ module Galley.API.Mapping ( conversationView, - conversationViewMaybe, + conversationViewWithCachedOthers, remoteConversationView, conversationToRemote, localMemberToSelf, @@ -42,14 +42,29 @@ import Wire.API.Federation.API.Galley -- | View for a given user of a stored conversation. -- --- Throws "bad-state" when the user is not part of the conversation. +-- Throws @BadMemberState@ when the user is not part of the conversation. conversationView :: Members '[Error InternalError, P.TinyLog] r => Local UserId -> Data.Conversation -> Sem r Conversation conversationView luid conv = do - let mbConv = conversationViewMaybe luid conv + let remoteOthers = map remoteMemberToOther $ Data.convRemoteMembers conv + localOthers = map (localMemberToOther (tDomain luid)) $ Data.convLocalMembers conv + conversationViewWithCachedOthers remoteOthers localOthers conv luid + +-- | Like 'conversationView' but optimized for situations which could benefit +-- from pre-computing the list of @OtherMember@s in the conversation. For +-- instance, creating a @ConversationView@ for more than one member of the same conversation. +conversationViewWithCachedOthers :: + Members '[Error InternalError, P.TinyLog] r => + [OtherMember] -> + [OtherMember] -> + Data.Conversation -> + Local UserId -> + Sem r Conversation +conversationViewWithCachedOthers remoteOthers localOthers conv luid = do + let mbConv = conversationViewMaybe luid remoteOthers localOthers conv maybe memberNotFound pure mbConv where memberNotFound = do @@ -63,14 +78,11 @@ conversationView luid conv = do -- | View for a given user of a stored conversation. -- -- Returns 'Nothing' if the user is not part of the conversation.
-conversationViewMaybe :: Local UserId -> Data.Conversation -> Maybe Conversation -conversationViewMaybe luid conv = do - let (selfs, lothers) = partition ((tUnqualified luid ==) . lmId) (Data.convLocalMembers conv) - rothers = Data.convRemoteMembers conv +conversationViewMaybe :: Local UserId -> [OtherMember] -> [OtherMember] -> Data.Conversation -> Maybe Conversation +conversationViewMaybe luid remoteOthers localOthers conv = do + let selfs = filter ((tUnqualified luid ==) . lmId) (Data.convLocalMembers conv) self <- localMemberToSelf luid <$> listToMaybe selfs - let others = - map (localMemberToOther (tDomain luid)) lothers - <> map remoteMemberToOther rothers + let others = filter (\oth -> tUntagged luid /= omQualifiedId oth) localOthers <> remoteOthers pure $ Conversation (tUntagged . qualifyAs luid . convId $ conv) diff --git a/services/galley/src/Galley/API/Message.hs b/services/galley/src/Galley/API/Message.hs index 94d6809ce5..8f0f2bb178 100644 --- a/services/galley/src/Galley/API/Message.hs +++ b/services/galley/src/Galley/API/Message.hs @@ -214,7 +214,7 @@ checkMessageClients sender participantMap recipientMap mismatchStrat = ) getRemoteClients :: - Member FederatorAccess r => + (Member FederatorAccess r, CallsFed 'Brig "get-user-clients") => [RemoteMember] -> Sem r (Map (Domain, UserId) (Set ClientId)) getRemoteClients remoteMembers = @@ -228,7 +228,7 @@ getRemoteClients remoteMembers = -- FUTUREWORK: sender should be Local UserId postRemoteOtrMessage :: - Members '[FederatorAccess] r => + (Members '[FederatorAccess] r, CallsFed 'Galley "send-message") => Qualified UserId -> Remote ConvId -> ByteString -> @@ -357,21 +357,24 @@ postBroadcast lusr con msg = runError $ do pure (mems ^. 
teamMembers) postQualifiedOtrMessage :: - Members - '[ BrigAccess, - ClientStore, - ConversationStore, - FederatorAccess, - GundeckAccess, - ExternalAccess, - Input (Local ()), -- FUTUREWORK: remove this - Input Opts, - Input UTCTime, - MemberStore, - TeamStore, - P.TinyLog - ] - r => + ( Members + '[ BrigAccess, + ClientStore, + ConversationStore, + FederatorAccess, + GundeckAccess, + ExternalAccess, + Input (Local ()), -- FUTUREWORK: remove this + Input Opts, + Input UTCTime, + MemberStore, + TeamStore, + P.TinyLog + ] + r, + CallsFed 'Galley "on-message-sent", + CallsFed 'Brig "get-user-clients" + ) => UserType -> Qualified UserId -> Maybe ConnId -> @@ -473,7 +476,8 @@ makeUserMap keys = (<> Map.fromSet (const mempty) keys) sendMessages :: forall t r. ( t ~ 'NormalMessage, - Members '[GundeckAccess, ExternalAccess, FederatorAccess, P.TinyLog] r + Members '[GundeckAccess, ExternalAccess, FederatorAccess, P.TinyLog] r, + CallsFed 'Galley "on-message-sent" ) => UTCTime -> Qualified UserId -> @@ -551,7 +555,7 @@ sendLocalMessages loc now sender senderClient mconn qcnv botMap metadata localMe sendRemoteMessages :: forall r x. 
- Members '[FederatorAccess, P.TinyLog] r => + (Members '[FederatorAccess, P.TinyLog] r, CallsFed 'Galley "on-message-sent") => Remote x -> UTCTime -> Qualified UserId -> diff --git a/services/galley/src/Galley/API/Public/Bot.hs b/services/galley/src/Galley/API/Public/Bot.hs index 8c75ddbdee..06ea1f89fa 100644 --- a/services/galley/src/Galley/API/Public/Bot.hs +++ b/services/galley/src/Galley/API/Public/Bot.hs @@ -19,8 +19,9 @@ module Galley.API.Public.Bot where import Galley.API.Update import Galley.App +import Wire.API.Federation.API import Wire.API.Routes.API import Wire.API.Routes.Public.Galley.Bot botAPI :: API BotAPI GalleyEffects -botAPI = mkNamedAPI @"post-bot-message-unqualified" postBotMessageUnqualified +botAPI = mkNamedAPI @"post-bot-message-unqualified" (callsFed (callsFed postBotMessageUnqualified)) diff --git a/services/galley/src/Galley/API/Public/Conversation.hs b/services/galley/src/Galley/API/Public/Conversation.hs index f14ee73397..c080d83b04 100644 --- a/services/galley/src/Galley/API/Public/Conversation.hs +++ b/services/galley/src/Galley/API/Public/Conversation.hs @@ -24,6 +24,7 @@ import Galley.API.Query import Galley.API.Update import Galley.App import Galley.Cassandra.TeamFeatures +import Wire.API.Federation.API import Wire.API.Routes.API import Wire.API.Routes.Public.Galley.Conversation @@ -31,50 +32,50 @@ conversationAPI :: API ConversationAPI GalleyEffects conversationAPI = mkNamedAPI @"get-unqualified-conversation" getUnqualifiedConversation <@> mkNamedAPI @"get-unqualified-conversation-legalhold-alias" getUnqualifiedConversation - <@> mkNamedAPI @"get-conversation" getConversation + <@> mkNamedAPI @"get-conversation" (callsFed getConversation) <@> mkNamedAPI @"get-conversation-roles" getConversationRoles - <@> mkNamedAPI @"get-group-info" getGroupInfo + <@> mkNamedAPI @"get-group-info" (callsFed getGroupInfo) <@> mkNamedAPI @"list-conversation-ids-unqualified" conversationIdsPageFromUnqualified <@> mkNamedAPI 
       @"list-conversation-ids-v2" (conversationIdsPageFromV2 DoNotListGlobalSelf)
     <@> mkNamedAPI @"list-conversation-ids" conversationIdsPageFrom
     <@> mkNamedAPI @"get-conversations" getConversations
-    <@> mkNamedAPI @"list-conversations@v1" listConversations
-    <@> mkNamedAPI @"list-conversations@v2" listConversations
-    <@> mkNamedAPI @"list-conversations" listConversations
+    <@> mkNamedAPI @"list-conversations@v1" (callsFed listConversations)
+    <@> mkNamedAPI @"list-conversations@v2" (callsFed listConversations)
+    <@> mkNamedAPI @"list-conversations" (callsFed listConversations)
     <@> mkNamedAPI @"get-conversation-by-reusable-code" (getConversationByReusableCode @Cassandra)
-    <@> mkNamedAPI @"create-group-conversation@v2" createGroupConversation
-    <@> mkNamedAPI @"create-group-conversation" createGroupConversation
+    <@> mkNamedAPI @"create-group-conversation@v2" (callsFed createGroupConversation)
+    <@> mkNamedAPI @"create-group-conversation" (callsFed createGroupConversation)
     <@> mkNamedAPI @"create-self-conversation@v2" createProteusSelfConversation
     <@> mkNamedAPI @"create-self-conversation" createProteusSelfConversation
     <@> mkNamedAPI @"get-mls-self-conversation" getMLSSelfConversationWithError
-    <@> mkNamedAPI @"create-one-to-one-conversation@v2" createOne2OneConversation
-    <@> mkNamedAPI @"create-one-to-one-conversation" createOne2OneConversation
-    <@> mkNamedAPI @"add-members-to-conversation-unqualified" addMembersUnqualified
-    <@> mkNamedAPI @"add-members-to-conversation-unqualified2" addMembersUnqualifiedV2
-    <@> mkNamedAPI @"add-members-to-conversation" addMembers
-    <@> mkNamedAPI @"join-conversation-by-id-unqualified" (joinConversationById @Cassandra)
-    <@> mkNamedAPI @"join-conversation-by-code-unqualified" (joinConversationByReusableCode @Cassandra)
+    <@> mkNamedAPI @"create-one-to-one-conversation@v2" (callsFed createOne2OneConversation)
+    <@> mkNamedAPI @"create-one-to-one-conversation" (callsFed createOne2OneConversation)
+    <@> mkNamedAPI
       @"add-members-to-conversation-unqualified" (callsFed addMembersUnqualified)
+    <@> mkNamedAPI @"add-members-to-conversation-unqualified2" (callsFed addMembersUnqualifiedV2)
+    <@> mkNamedAPI @"add-members-to-conversation" (callsFed addMembers)
+    <@> mkNamedAPI @"join-conversation-by-id-unqualified" (callsFed (joinConversationById @Cassandra))
+    <@> mkNamedAPI @"join-conversation-by-code-unqualified" (callsFed (joinConversationByReusableCode @Cassandra))
     <@> mkNamedAPI @"code-check" (checkReusableCode @Cassandra)
     <@> mkNamedAPI @"create-conversation-code-unqualified" (addCodeUnqualified @Cassandra)
     <@> mkNamedAPI @"get-conversation-guest-links-status" (getConversationGuestLinksStatus @Cassandra)
     <@> mkNamedAPI @"remove-code-unqualified" rmCodeUnqualified
     <@> mkNamedAPI @"get-code" (getCode @Cassandra)
     <@> mkNamedAPI @"member-typing-unqualified" isTypingUnqualified
-    <@> mkNamedAPI @"member-typing-qualified" isTypingQualified
-    <@> mkNamedAPI @"remove-member-unqualified" removeMemberUnqualified
-    <@> mkNamedAPI @"remove-member" removeMemberQualified
-    <@> mkNamedAPI @"update-other-member-unqualified" updateOtherMemberUnqualified
-    <@> mkNamedAPI @"update-other-member" updateOtherMember
-    <@> mkNamedAPI @"update-conversation-name-deprecated" updateUnqualifiedConversationName
-    <@> mkNamedAPI @"update-conversation-name-unqualified" updateUnqualifiedConversationName
-    <@> mkNamedAPI @"update-conversation-name" updateConversationName
-    <@> mkNamedAPI @"update-conversation-message-timer-unqualified" updateConversationMessageTimerUnqualified
-    <@> mkNamedAPI @"update-conversation-message-timer" updateConversationMessageTimer
-    <@> mkNamedAPI @"update-conversation-receipt-mode-unqualified" updateConversationReceiptModeUnqualified
-    <@> mkNamedAPI @"update-conversation-receipt-mode" updateConversationReceiptMode
-    <@> mkNamedAPI @"update-conversation-access-unqualified" updateConversationAccessUnqualified
-    <@> mkNamedAPI @"update-conversation-access@v2"
       updateConversationAccess
-    <@> mkNamedAPI @"update-conversation-access" updateConversationAccess
+    <@> mkNamedAPI @"member-typing-qualified" (callsFed isTypingQualified)
+    <@> mkNamedAPI @"remove-member-unqualified" (callsFed removeMemberUnqualified)
+    <@> mkNamedAPI @"remove-member" (callsFed removeMemberQualified)
+    <@> mkNamedAPI @"update-other-member-unqualified" (callsFed updateOtherMemberUnqualified)
+    <@> mkNamedAPI @"update-other-member" (callsFed updateOtherMember)
+    <@> mkNamedAPI @"update-conversation-name-deprecated" (callsFed updateUnqualifiedConversationName)
+    <@> mkNamedAPI @"update-conversation-name-unqualified" (callsFed updateUnqualifiedConversationName)
+    <@> mkNamedAPI @"update-conversation-name" (callsFed updateConversationName)
+    <@> mkNamedAPI @"update-conversation-message-timer-unqualified" (callsFed updateConversationMessageTimerUnqualified)
+    <@> mkNamedAPI @"update-conversation-message-timer" (callsFed updateConversationMessageTimer)
+    <@> mkNamedAPI @"update-conversation-receipt-mode-unqualified" (callsFed updateConversationReceiptModeUnqualified)
+    <@> mkNamedAPI @"update-conversation-receipt-mode" (callsFed updateConversationReceiptMode)
+    <@> mkNamedAPI @"update-conversation-access-unqualified" (callsFed updateConversationAccessUnqualified)
+    <@> mkNamedAPI @"update-conversation-access@v2" (callsFed updateConversationAccess)
+    <@> mkNamedAPI @"update-conversation-access" (callsFed updateConversationAccess)
     <@> mkNamedAPI @"get-conversation-self-unqualified" getLocalSelf
     <@> mkNamedAPI @"update-conversation-self-unqualified" updateUnqualifiedSelfMember
     <@> mkNamedAPI @"update-conversation-self" updateSelfMember
diff --git a/services/galley/src/Galley/API/Public/Feature.hs b/services/galley/src/Galley/API/Public/Feature.hs
index 2d4f06ea85..4dbc810de6 100644
--- a/services/galley/src/Galley/API/Public/Feature.hs
+++ b/services/galley/src/Galley/API/Public/Feature.hs
@@ -22,6 +22,7 @@ import Galley.API.Teams.Features
 import Galley.App
 import Galley.Cassandra.TeamFeatures
 import Imports
+import Wire.API.Federation.API
 import Wire.API.Routes.API
 import Wire.API.Routes.Public.Galley.Feature
 import Wire.API.Team.Feature
@@ -30,7 +31,7 @@ featureAPI :: API FeatureAPI GalleyEffects
 featureAPI =
   mkNamedAPI @'("get", SSOConfig) (getFeatureStatus @Cassandra . DoAuth)
     <@> mkNamedAPI @'("get", LegalholdConfig) (getFeatureStatus @Cassandra . DoAuth)
-    <@> mkNamedAPI @'("put", LegalholdConfig) (setFeatureStatus @Cassandra . DoAuth)
+    <@> mkNamedAPI @'("put", LegalholdConfig) (callsFed (setFeatureStatus @Cassandra . DoAuth))
     <@> mkNamedAPI @'("get", SearchVisibilityAvailableConfig) (getFeatureStatus @Cassandra . DoAuth)
     <@> mkNamedAPI @'("put", SearchVisibilityAvailableConfig) (setFeatureStatus @Cassandra . DoAuth)
     <@> mkNamedAPI @'("get-deprecated", SearchVisibilityAvailableConfig) (getFeatureStatus @Cassandra . DoAuth)
diff --git a/services/galley/src/Galley/API/Public/LegalHold.hs b/services/galley/src/Galley/API/Public/LegalHold.hs
index 21d658d217..405d3ca61a 100644
--- a/services/galley/src/Galley/API/Public/LegalHold.hs
+++ b/services/galley/src/Galley/API/Public/LegalHold.hs
@@ -20,6 +20,7 @@ module Galley.API.Public.LegalHold where
 import Galley.API.LegalHold
 import Galley.App
 import Galley.Cassandra.TeamFeatures
+import Wire.API.Federation.API
 import Wire.API.Routes.API
 import Wire.API.Routes.Public.Galley.LegalHold
@@ -27,9 +28,9 @@ legalHoldAPI :: API LegalHoldAPI GalleyEffects
 legalHoldAPI =
   mkNamedAPI @"create-legal-hold-settings" (createSettings @Cassandra)
     <@> mkNamedAPI @"get-legal-hold-settings" (getSettings @Cassandra)
-    <@> mkNamedAPI @"delete-legal-hold-settings" (removeSettingsInternalPaging @Cassandra)
+    <@> mkNamedAPI @"delete-legal-hold-settings" (callsFed (callsFed (callsFed (removeSettingsInternalPaging @Cassandra))))
     <@> mkNamedAPI @"get-legal-hold" getUserStatus
-    <@> mkNamedAPI @"consent-to-legal-hold" grantConsent
-    <@> mkNamedAPI @"request-legal-hold-device" (requestDevice
       @Cassandra)
-    <@> mkNamedAPI @"disable-legal-hold-for-user" disableForUser
-    <@> mkNamedAPI @"approve-legal-hold-device" (approveDevice @Cassandra)
+    <@> mkNamedAPI @"consent-to-legal-hold" (callsFed (callsFed (callsFed grantConsent)))
+    <@> mkNamedAPI @"request-legal-hold-device" (callsFed (callsFed (callsFed (requestDevice @Cassandra))))
+    <@> mkNamedAPI @"disable-legal-hold-for-user" (callsFed (callsFed (callsFed disableForUser)))
+    <@> mkNamedAPI @"approve-legal-hold-device" (callsFed (callsFed (callsFed (approveDevice @Cassandra))))
diff --git a/services/galley/src/Galley/API/Public/MLS.hs b/services/galley/src/Galley/API/Public/MLS.hs
index 93bd240b77..7581908ccf 100644
--- a/services/galley/src/Galley/API/Public/MLS.hs
+++ b/services/galley/src/Galley/API/Public/MLS.hs
@@ -19,13 +19,14 @@ module Galley.API.Public.MLS where
 import Galley.API.MLS
 import Galley.App
+import Wire.API.Federation.API
 import Wire.API.Routes.API
 import Wire.API.Routes.Public.Galley.MLS
 mlsAPI :: API MLSAPI GalleyEffects
 mlsAPI =
-  mkNamedAPI @"mls-welcome-message" postMLSWelcomeFromLocalUser
-    <@> mkNamedAPI @"mls-message-v1" postMLSMessageFromLocalUserV1
-    <@> mkNamedAPI @"mls-message" postMLSMessageFromLocalUser
-    <@> mkNamedAPI @"mls-commit-bundle" postMLSCommitBundleFromLocalUser
+  mkNamedAPI @"mls-welcome-message" (callsFed postMLSWelcomeFromLocalUser)
+    <@> mkNamedAPI @"mls-message-v1" (callsFed postMLSMessageFromLocalUserV1)
+    <@> mkNamedAPI @"mls-message" (callsFed postMLSMessageFromLocalUser)
+    <@> mkNamedAPI @"mls-commit-bundle" (callsFed postMLSCommitBundleFromLocalUser)
     <@> mkNamedAPI @"mls-public-keys" getMLSPublicKeys
diff --git a/services/galley/src/Galley/API/Public/Messaging.hs b/services/galley/src/Galley/API/Public/Messaging.hs
index 806484ae90..ae5a3248d9 100644
--- a/services/galley/src/Galley/API/Public/Messaging.hs
+++ b/services/galley/src/Galley/API/Public/Messaging.hs
@@ -19,12 +19,13 @@ module Galley.API.Public.Messaging where
 import Galley.API.Update
 import Galley.App
+import Wire.API.Federation.API
 import Wire.API.Routes.API
 import Wire.API.Routes.Public.Galley.Messaging
 messagingAPI :: API MessagingAPI GalleyEffects
 messagingAPI =
-  mkNamedAPI @"post-otr-message-unqualified" postOtrMessageUnqualified
+  mkNamedAPI @"post-otr-message-unqualified" (callsFed postOtrMessageUnqualified)
     <@> mkNamedAPI @"post-otr-broadcast-unqualified" postOtrBroadcastUnqualified
-    <@> mkNamedAPI @"post-proteus-message" postProteusMessage
+    <@> mkNamedAPI @"post-proteus-message" (callsFed postProteusMessage)
     <@> mkNamedAPI @"post-proteus-broadcast" postProteusBroadcast
diff --git a/services/galley/src/Galley/API/Public/TeamConversation.hs b/services/galley/src/Galley/API/Public/TeamConversation.hs
index 359c69f1db..6aad651f3b 100644
--- a/services/galley/src/Galley/API/Public/TeamConversation.hs
+++ b/services/galley/src/Galley/API/Public/TeamConversation.hs
@@ -19,6 +19,7 @@ module Galley.API.Public.TeamConversation where
 import Galley.API.Teams
 import Galley.App
+import Wire.API.Federation.API
 import Wire.API.Routes.API
 import Wire.API.Routes.Public.Galley.TeamConversation
@@ -27,4 +28,4 @@ teamConversationAPI =
   mkNamedAPI @"get-team-conversation-roles" getTeamConversationRoles
     <@> mkNamedAPI @"get-team-conversations" getTeamConversations
     <@> mkNamedAPI @"get-team-conversation" getTeamConversation
-    <@> mkNamedAPI @"delete-team-conversation" deleteTeamConversation
+    <@> mkNamedAPI @"delete-team-conversation" (callsFed deleteTeamConversation)
diff --git a/services/galley/src/Galley/API/Query.hs b/services/galley/src/Galley/API/Query.hs
index 99fdf91f88..9d3be75637 100644
--- a/services/galley/src/Galley/API/Query.hs
+++ b/services/galley/src/Galley/API/Query.hs
@@ -141,16 +141,18 @@ getUnqualifiedConversation lusr cnv = do
 getConversation ::
   forall r.
-  Members
-    '[ ConversationStore,
-       ErrorS 'ConvNotFound,
-       ErrorS 'ConvAccessDenied,
-       Error FederationError,
-       Error InternalError,
-       FederatorAccess,
-       P.TinyLog
-     ]
-    r =>
+  ( Members
+      '[ ConversationStore,
+         ErrorS 'ConvNotFound,
+         ErrorS 'ConvAccessDenied,
+         Error FederationError,
+         Error InternalError,
+         FederatorAccess,
+         P.TinyLog
+       ]
+      r,
+    CallsFed 'Galley "get-conversations"
+  ) =>
   Local UserId ->
   Qualified ConvId ->
   Sem r Public.Conversation
@@ -171,14 +173,16 @@ getConversation lusr cnv = do
     _convs -> throw $ FederationUnexpectedBody "expected one conversation, got multiple"
 getRemoteConversations ::
-  Members
-    '[ ConversationStore,
-       Error FederationError,
-       ErrorS 'ConvNotFound,
-       FederatorAccess,
-       P.TinyLog
-     ]
-    r =>
+  ( Members
+      '[ ConversationStore,
+         Error FederationError,
+         ErrorS 'ConvNotFound,
+         FederatorAccess,
+         P.TinyLog
+       ]
+      r,
+    CallsFed 'Galley "get-conversations"
+  ) =>
   Local UserId ->
   [Remote ConvId] ->
   Sem r [Public.Conversation]
@@ -224,7 +228,9 @@ partitionGetConversationFailures = bimap concat concat . partitionEithers .
 map split (FailedGetConversation convs (FailedGetConversationRemotely _)) = Right convs
 getRemoteConversationsWithFailures ::
-  Members '[ConversationStore, FederatorAccess, P.TinyLog] r =>
+  ( Members '[ConversationStore, FederatorAccess, P.TinyLog] r,
+    CallsFed 'Galley "get-conversations"
+  ) =>
   Local UserId ->
   [Remote ConvId] ->
   Sem r ([FailedGetConversation], [Public.Conversation])
@@ -476,7 +482,7 @@ getConversationsInternal luser mids mstart msize = do
       | otherwise = pure True
 listConversations ::
-  Members '[ConversationStore, Error InternalError, FederatorAccess, P.TinyLog] r =>
+  (Members '[ConversationStore, Error InternalError, FederatorAccess, P.TinyLog] r, CallsFed 'Galley "get-conversations") =>
   Local UserId ->
   Public.ListConversations ->
   Sem r Public.ConversationsResponse
diff --git a/services/galley/src/Galley/API/Teams.hs b/services/galley/src/Galley/API/Teams.hs
index d4c7da58ed..20197ae4fc 100644
--- a/services/galley/src/Galley/API/Teams.hs
+++ b/services/galley/src/Galley/API/Teams.hs
@@ -132,6 +132,7 @@ import Wire.API.Error
 import Wire.API.Error.Galley
 import qualified Wire.API.Event.Conversation as Conv
 import Wire.API.Event.Team
+import Wire.API.Federation.API
 import Wire.API.Federation.Error
 import qualified Wire.API.Message as Conv
 import qualified Wire.API.Notification as Public
@@ -1101,23 +1102,27 @@ getTeamConversation zusr tid cid = do
     >>= noteS @'ConvNotFound
 deleteTeamConversation ::
-  Members
-    '[ CodeStore,
-       ConversationStore,
-       Error FederationError,
-       Error InvalidInput,
-       ErrorS 'ConvNotFound,
-       ErrorS 'InvalidOperation,
-       ErrorS 'NotATeamMember,
-       ErrorS ('ActionDenied 'DeleteConversation),
-       ExternalAccess,
-       FederatorAccess,
-       GundeckAccess,
-       Input Env,
-       Input UTCTime,
-       TeamStore
-     ]
-    r =>
+  ( Members
+      '[ CodeStore,
+         ConversationStore,
+         Error FederationError,
+         Error InvalidInput,
+         ErrorS 'ConvNotFound,
+         ErrorS 'InvalidOperation,
+         ErrorS 'NotATeamMember,
+         ErrorS ('ActionDenied 'DeleteConversation),
+
         ExternalAccess,
+         FederatorAccess,
+         GundeckAccess,
+         Input Env,
+         Input UTCTime,
+         TeamStore
+       ]
+      r,
+    CallsFed 'Galley "on-conversation-updated",
+    CallsFed 'Galley "on-mls-message-sent",
+    CallsFed 'Galley "on-new-remote-conversation"
+  ) =>
   Local UserId ->
   ConnId ->
   TeamId ->
diff --git a/services/galley/src/Galley/API/Teams/Features.hs b/services/galley/src/Galley/API/Teams/Features.hs
index 4556f935d7..be179c9c99 100644
--- a/services/galley/src/Galley/API/Teams/Features.hs
+++ b/services/galley/src/Galley/API/Teams/Features.hs
@@ -76,6 +76,7 @@ import Wire.API.Conversation.Role (Action (RemoveConversationMember))
 import Wire.API.Error (ErrorS, throwS)
 import Wire.API.Error.Galley
 import qualified Wire.API.Event.FeatureConfig as Event
+import Wire.API.Federation.API
 import qualified Wire.API.Routes.Internal.Galley.TeamFeatureNoConfigMulti as Multi
 import Wire.API.Team.Feature
 import Wire.API.Team.Member
@@ -707,7 +708,13 @@ instance GetFeatureConfig db LegalholdConfig where
         False -> FeatureStatusDisabled
     pure $ setStatus status defFeatureStatus
-instance SetFeatureConfig db LegalholdConfig where
+instance
+  ( CallsFed 'Galley "on-conversation-updated",
+    CallsFed 'Galley "on-mls-message-sent",
+    CallsFed 'Galley "on-new-remote-conversation"
+  ) =>
+  SetFeatureConfig db LegalholdConfig
+  where
   type
     SetConfigForTeamConstraints db LegalholdConfig (r :: EffectRow) =
       ( Bounded (PagingBounds InternalPaging TeamMember),
diff --git a/services/galley/src/Galley/API/Update.hs b/services/galley/src/Galley/API/Update.hs
index ce1f36980b..1065f4f92e 100644
--- a/services/galley/src/Galley/API/Update.hs
+++ b/services/galley/src/Galley/API/Update.hs
@@ -290,7 +290,11 @@ type UpdateConversationAccessEffects =
   ]
 updateConversationAccess ::
-  Members UpdateConversationAccessEffects r =>
+  ( Members UpdateConversationAccessEffects r,
+    CallsFed 'Galley "on-mls-message-sent",
+    CallsFed 'Galley "on-new-remote-conversation",
+    CallsFed 'Galley "on-conversation-updated"
+  ) =>
   Local UserId ->
   ConnId ->
   Qualified ConvId ->
@@ -302,7 +306,11 @@ updateConversationAccess lusr con qcnv update = do
     updateLocalConversation @'ConversationAccessDataTag lcnv (tUntagged lusr) (Just con) update
 updateConversationAccessUnqualified ::
-  Members UpdateConversationAccessEffects r =>
+  ( Members UpdateConversationAccessEffects r,
+    CallsFed 'Galley "on-mls-message-sent",
+    CallsFed 'Galley "on-new-remote-conversation",
+    CallsFed 'Galley "on-conversation-updated"
+  ) =>
   Local UserId ->
   ConnId ->
   ConvId ->
@@ -317,23 +325,28 @@ updateConversationAccessUnqualified lusr con cnv update =
       update
 updateConversationReceiptMode ::
-  Members
-    '[ BrigAccess,
-       ConversationStore,
-       Error FederationError,
-       ErrorS ('ActionDenied 'ModifyConversationReceiptMode),
-       ErrorS 'ConvNotFound,
-       ErrorS 'InvalidOperation,
-       ExternalAccess,
-       FederatorAccess,
-       GundeckAccess,
-       Input (Local ()),
-       Input Env,
-       Input UTCTime,
-       MemberStore,
-       TinyLog
-     ]
-    r =>
+  ( Members
+      '[ BrigAccess,
+         ConversationStore,
+         Error FederationError,
+         ErrorS ('ActionDenied 'ModifyConversationReceiptMode),
+         ErrorS 'ConvNotFound,
+         ErrorS 'InvalidOperation,
+         ExternalAccess,
+         FederatorAccess,
+         GundeckAccess,
+         Input (Local ()),
+         Input Env,
+         Input UTCTime,
+         MemberStore,
+         TinyLog
+       ]
+      r,
+    CallsFed 'Galley "on-conversation-updated",
+    CallsFed 'Galley "on-mls-message-sent",
+    CallsFed 'Galley "on-new-remote-conversation",
+    CallsFed 'Galley "update-conversation"
+  ) =>
   Local UserId ->
   ConnId ->
   Qualified ConvId ->
@@ -369,7 +382,8 @@ updateRemoteConversation ::
       r,
     Members (HasConversationActionGalleyErrors tag) r,
     RethrowErrors (HasConversationActionGalleyErrors tag) (Error NoChanges : r),
-    SingI tag
+    SingI tag,
+    CallsFed 'Galley "update-conversation"
   ) =>
   Remote ConvId ->
   Local UserId ->
@@ -393,23 +407,28 @@ updateRemoteConversation rcnv lusr conn action = getUpdateResult $ do
     notifyRemoteConversationAction lusr (qualifyAs rcnv convUpdate) (Just conn)
 updateConversationReceiptModeUnqualified ::
-  Members
-    '[ BrigAccess,
-       ConversationStore,
-       Error FederationError,
-       ErrorS ('ActionDenied 'ModifyConversationReceiptMode),
-       ErrorS 'ConvNotFound,
-       ErrorS 'InvalidOperation,
-       ExternalAccess,
-       FederatorAccess,
-       GundeckAccess,
-       Input (Local ()),
-       Input Env,
-       Input UTCTime,
-       MemberStore,
-       TinyLog
-     ]
-    r =>
+  ( Members
+      '[ BrigAccess,
+         ConversationStore,
+         Error FederationError,
+         ErrorS ('ActionDenied 'ModifyConversationReceiptMode),
+         ErrorS 'ConvNotFound,
+         ErrorS 'InvalidOperation,
+         ExternalAccess,
+         FederatorAccess,
+         GundeckAccess,
+         Input (Local ()),
+         Input Env,
+         Input UTCTime,
+         MemberStore,
+         TinyLog
+       ]
+      r,
+    CallsFed 'Galley "on-conversation-updated",
+    CallsFed 'Galley "on-mls-message-sent",
+    CallsFed 'Galley "on-new-remote-conversation",
+    CallsFed 'Galley "update-conversation"
+  ) =>
   Local UserId ->
   ConnId ->
   ConvId ->
@@ -418,19 +437,23 @@ updateConversationReceiptModeUnqualified ::
 updateConversationReceiptModeUnqualified lusr zcon cnv =
   updateConversationReceiptMode lusr zcon (tUntagged (qualifyAs lusr cnv))
 updateConversationMessageTimer ::
-  Members
-    '[ ConversationStore,
-       ErrorS ('ActionDenied 'ModifyConversationMessageTimer),
-       ErrorS 'ConvNotFound,
-       ErrorS 'InvalidOperation,
-       Error FederationError,
-       ExternalAccess,
-       FederatorAccess,
-       GundeckAccess,
-       Input Env,
-       Input UTCTime
-     ]
-    r =>
+  ( Members
+      '[ ConversationStore,
+         ErrorS ('ActionDenied 'ModifyConversationMessageTimer),
+         ErrorS 'ConvNotFound,
+         ErrorS 'InvalidOperation,
+         Error FederationError,
+         ExternalAccess,
+         FederatorAccess,
+         GundeckAccess,
+         Input Env,
+         Input UTCTime
+       ]
+      r,
+    CallsFed 'Galley "on-conversation-updated",
+    CallsFed 'Galley "on-mls-message-sent",
+    CallsFed 'Galley "on-new-remote-conversation"
+  ) =>
   Local UserId ->
   ConnId ->
   Qualified ConvId ->
@@ -453,19 +476,23 @@ updateConversationMessageTimer lusr zcon qcnv update =
       qcnv
 updateConversationMessageTimerUnqualified ::
-  Members
-    '[
       ConversationStore,
-       ErrorS ('ActionDenied 'ModifyConversationMessageTimer),
-       ErrorS 'ConvNotFound,
-       ErrorS 'InvalidOperation,
-       Error FederationError,
-       ExternalAccess,
-       FederatorAccess,
-       GundeckAccess,
-       Input Env,
-       Input UTCTime
-     ]
-    r =>
+  ( Members
+      '[ ConversationStore,
+         ErrorS ('ActionDenied 'ModifyConversationMessageTimer),
+         ErrorS 'ConvNotFound,
+         ErrorS 'InvalidOperation,
+         Error FederationError,
+         ExternalAccess,
+         FederatorAccess,
+         GundeckAccess,
+         Input Env,
+         Input UTCTime
+       ]
+      r,
+    CallsFed 'Galley "on-conversation-updated",
+    CallsFed 'Galley "on-mls-message-sent",
+    CallsFed 'Galley "on-new-remote-conversation"
+  ) =>
   Local UserId ->
   ConnId ->
   ConvId ->
@@ -474,22 +501,26 @@ updateConversationMessageTimerUnqualified lusr zcon cnv =
   updateConversationMessageTimer lusr zcon (tUntagged (qualifyAs lusr cnv))
 deleteLocalConversation ::
-  Members
-    '[ CodeStore,
-       ConversationStore,
-       Error FederationError,
-       ErrorS 'NotATeamMember,
-       ErrorS ('ActionDenied 'DeleteConversation),
-       ErrorS 'ConvNotFound,
-       ErrorS 'InvalidOperation,
-       ExternalAccess,
-       FederatorAccess,
-       GundeckAccess,
-       Input Env,
-       Input UTCTime,
-       TeamStore
-     ]
-    r =>
+  ( Members
+      '[ CodeStore,
+         ConversationStore,
+         Error FederationError,
+         ErrorS 'NotATeamMember,
+         ErrorS ('ActionDenied 'DeleteConversation),
+         ErrorS 'ConvNotFound,
+         ErrorS 'InvalidOperation,
+         ExternalAccess,
+         FederatorAccess,
+         GundeckAccess,
+         Input Env,
+         Input UTCTime,
+         TeamStore
+       ]
+      r,
+    CallsFed 'Galley "on-conversation-updated",
+    CallsFed 'Galley "on-mls-message-sent",
+    CallsFed 'Galley "on-new-remote-conversation"
+  ) =>
   Local UserId ->
   ConnId ->
   Local ConvId ->
@@ -697,7 +728,9 @@ joinConversationByReusableCode ::
          TeamFeatureStore db
        ]
       r,
-    FeaturePersistentConstraint db GuestLinksConfig
+    FeaturePersistentConstraint db GuestLinksConfig,
+    CallsFed 'Galley "on-conversation-updated",
+    CallsFed 'Galley "on-new-remote-conversation"
   ) =>
   Local UserId
 ->
   ConnId ->
@@ -728,7 +761,9 @@ joinConversationById ::
          TeamStore,
          TeamFeatureStore db
        ]
-      r
+      r,
+    CallsFed 'Galley "on-conversation-updated",
+    CallsFed 'Galley "on-new-remote-conversation"
   ) =>
   Local UserId ->
   ConnId ->
@@ -739,23 +774,26 @@ joinConversationById lusr zcon cnv = do
   joinConversation @db lusr zcon conv LinkAccess
 joinConversation ::
-  Members
-    '[ BrigAccess,
-       ConversationStore,
-       FederatorAccess,
-       ErrorS 'ConvAccessDenied,
-       ErrorS 'InvalidOperation,
-       ErrorS 'NotATeamMember,
-       ErrorS 'TooManyMembers,
-       ExternalAccess,
-       GundeckAccess,
-       Input Opts,
-       Input UTCTime,
-       MemberStore,
-       TeamStore,
-       TeamFeatureStore db
-     ]
-    r =>
+  ( Members
+      '[ BrigAccess,
+         ConversationStore,
+         FederatorAccess,
+         ErrorS 'ConvAccessDenied,
+         ErrorS 'InvalidOperation,
+         ErrorS 'NotATeamMember,
+         ErrorS 'TooManyMembers,
+         ExternalAccess,
+         GundeckAccess,
+         Input Opts,
+         Input UTCTime,
+         MemberStore,
+         TeamStore,
+         TeamFeatureStore db
+       ]
+      r,
+    CallsFed 'Galley "on-conversation-updated",
+    CallsFed 'Galley "on-new-remote-conversation"
+  ) =>
   Local UserId ->
   ConnId ->
   Data.Conversation ->
@@ -785,33 +823,37 @@ joinConversation lusr zcon conv access = do
       action
 addMembers ::
-  Members
-    '[ BrigAccess,
-       ConversationStore,
-       Error FederationError,
-       Error InternalError,
-       ErrorS ('ActionDenied 'AddConversationMember),
-       ErrorS ('ActionDenied 'LeaveConversation),
-       ErrorS 'ConvAccessDenied,
-       ErrorS 'ConvNotFound,
-       ErrorS 'InvalidOperation,
-       ErrorS 'NotConnected,
-       ErrorS 'NotATeamMember,
-       ErrorS 'TooManyMembers,
-       ErrorS 'MissingLegalholdConsent,
-       ExternalAccess,
-       FederatorAccess,
-       GundeckAccess,
-       Input Env,
-       Input Opts,
-       Input UTCTime,
-       LegalHoldStore,
-       MemberStore,
-       ProposalStore,
-       TeamStore,
-       TinyLog
-     ]
-    r =>
+  ( Members
+      '[ BrigAccess,
+         ConversationStore,
+         Error FederationError,
+         Error InternalError,
+         ErrorS ('ActionDenied 'AddConversationMember),
+         ErrorS ('ActionDenied 'LeaveConversation),
+         ErrorS 'ConvAccessDenied,
+         ErrorS 'ConvNotFound,
+         ErrorS 'InvalidOperation,
+         ErrorS 'NotConnected,
+         ErrorS 'NotATeamMember,
+         ErrorS 'TooManyMembers,
+         ErrorS 'MissingLegalholdConsent,
+         ExternalAccess,
+         FederatorAccess,
+         GundeckAccess,
+         Input Env,
+         Input Opts,
+         Input UTCTime,
+         LegalHoldStore,
+         MemberStore,
+         ProposalStore,
+         TeamStore,
+         TinyLog
+       ]
+      r,
+    CallsFed 'Galley "on-conversation-updated",
+    CallsFed 'Galley "on-mls-message-sent",
+    CallsFed 'Galley "on-new-remote-conversation"
+  ) =>
   Local UserId ->
   ConnId ->
   Qualified ConvId ->
@@ -824,33 +866,37 @@ addMembers lusr zcon qcnv (InviteQualified users role) = do
     ConversationJoin users role
 addMembersUnqualifiedV2 ::
-  Members
-    '[ BrigAccess,
-       ConversationStore,
-       Error FederationError,
-       Error InternalError,
-       ErrorS ('ActionDenied 'AddConversationMember),
-       ErrorS ('ActionDenied 'LeaveConversation),
-       ErrorS 'ConvAccessDenied,
-       ErrorS 'ConvNotFound,
-       ErrorS 'InvalidOperation,
-       ErrorS 'NotConnected,
-       ErrorS 'NotATeamMember,
-       ErrorS 'TooManyMembers,
-       ErrorS 'MissingLegalholdConsent,
-       ExternalAccess,
-       FederatorAccess,
-       GundeckAccess,
-       Input Env,
-       Input Opts,
-       Input UTCTime,
-       LegalHoldStore,
-       MemberStore,
-       ProposalStore,
-       TeamStore,
-       TinyLog
-     ]
-    r =>
+  ( Members
+      '[ BrigAccess,
+         ConversationStore,
+         Error FederationError,
+         Error InternalError,
+         ErrorS ('ActionDenied 'AddConversationMember),
+         ErrorS ('ActionDenied 'LeaveConversation),
+         ErrorS 'ConvAccessDenied,
+         ErrorS 'ConvNotFound,
+         ErrorS 'InvalidOperation,
+         ErrorS 'NotConnected,
+         ErrorS 'NotATeamMember,
+         ErrorS 'TooManyMembers,
+         ErrorS 'MissingLegalholdConsent,
+         ExternalAccess,
+         FederatorAccess,
+         GundeckAccess,
+         Input Env,
+         Input Opts,
+         Input UTCTime,
+         LegalHoldStore,
+         MemberStore,
+         ProposalStore,
+         TeamStore,
+         TinyLog
+       ]
+      r,
+    CallsFed 'Galley "on-conversation-updated",
+    CallsFed 'Galley "on-mls-message-sent",
+    CallsFed 'Galley "on-new-remote-conversation"
+  ) =>
   Local UserId ->
   ConnId ->
   ConvId ->
@@ -863,33 +909,37 @@ addMembersUnqualifiedV2
 lusr zcon cnv (InviteQualified users role) = do
     ConversationJoin users role
 addMembersUnqualified ::
-  Members
-    '[ BrigAccess,
-       ConversationStore,
-       Error FederationError,
-       Error InternalError,
-       ErrorS ('ActionDenied 'AddConversationMember),
-       ErrorS ('ActionDenied 'LeaveConversation),
-       ErrorS 'ConvAccessDenied,
-       ErrorS 'ConvNotFound,
-       ErrorS 'InvalidOperation,
-       ErrorS 'NotConnected,
-       ErrorS 'NotATeamMember,
-       ErrorS 'TooManyMembers,
-       ErrorS 'MissingLegalholdConsent,
-       ExternalAccess,
-       FederatorAccess,
-       GundeckAccess,
-       Input Env,
-       Input Opts,
-       Input UTCTime,
-       LegalHoldStore,
-       MemberStore,
-       ProposalStore,
-       TeamStore,
-       TinyLog
-     ]
-    r =>
+  ( Members
+      '[ BrigAccess,
+         ConversationStore,
+         Error FederationError,
+         Error InternalError,
+         ErrorS ('ActionDenied 'AddConversationMember),
+         ErrorS ('ActionDenied 'LeaveConversation),
+         ErrorS 'ConvAccessDenied,
+         ErrorS 'ConvNotFound,
+         ErrorS 'InvalidOperation,
+         ErrorS 'NotConnected,
+         ErrorS 'NotATeamMember,
+         ErrorS 'TooManyMembers,
+         ErrorS 'MissingLegalholdConsent,
+         ExternalAccess,
+         FederatorAccess,
+         GundeckAccess,
+         Input Env,
+         Input Opts,
+         Input UTCTime,
+         LegalHoldStore,
+         MemberStore,
+         ProposalStore,
+         TeamStore,
+         TinyLog
+       ]
+      r,
+    CallsFed 'Galley "on-conversation-updated",
+    CallsFed 'Galley "on-mls-message-sent",
+    CallsFed 'Galley "on-new-remote-conversation"
+  ) =>
   Local UserId ->
   ConnId ->
   ConvId ->
@@ -968,21 +1018,25 @@ updateUnqualifiedSelfMember lusr zcon cnv update = do
   updateSelfMember lusr zcon (tUntagged lcnv) update
 updateOtherMemberLocalConv ::
-  Members
-    '[ ConversationStore,
-       ErrorS ('ActionDenied 'ModifyOtherConversationMember),
-       ErrorS 'InvalidTarget,
-       ErrorS 'InvalidOperation,
-       ErrorS 'ConvNotFound,
-       ErrorS 'ConvMemberNotFound,
-       ExternalAccess,
-       FederatorAccess,
-       GundeckAccess,
-       Input Env,
-       Input UTCTime,
-       MemberStore
-     ]
-    r =>
+  ( Members
+      '[ ConversationStore,
+         ErrorS ('ActionDenied 'ModifyOtherConversationMember),
+         ErrorS 'InvalidTarget,
+         ErrorS
 'InvalidOperation,
+         ErrorS 'ConvNotFound,
+         ErrorS 'ConvMemberNotFound,
+         ExternalAccess,
+         FederatorAccess,
+         GundeckAccess,
+         Input Env,
+         Input UTCTime,
+         MemberStore
+       ]
+      r,
+    CallsFed 'Galley "on-conversation-updated",
+    CallsFed 'Galley "on-mls-message-sent",
+    CallsFed 'Galley "on-new-remote-conversation"
+  ) =>
   Local ConvId ->
   Local UserId ->
   ConnId ->
@@ -996,21 +1050,25 @@ updateOtherMemberLocalConv lcnv lusr con qvictim update = void . getUpdateResult
       ConversationMemberUpdate qvictim update
 updateOtherMemberUnqualified ::
-  Members
-    '[ ConversationStore,
-       ErrorS ('ActionDenied 'ModifyOtherConversationMember),
-       ErrorS 'InvalidTarget,
-       ErrorS 'InvalidOperation,
-       ErrorS 'ConvNotFound,
-       ErrorS 'ConvMemberNotFound,
-       ExternalAccess,
-       FederatorAccess,
-       GundeckAccess,
-       Input Env,
-       Input UTCTime,
-       MemberStore
-     ]
-    r =>
+  ( Members
+      '[ ConversationStore,
+         ErrorS ('ActionDenied 'ModifyOtherConversationMember),
+         ErrorS 'InvalidTarget,
+         ErrorS 'InvalidOperation,
+         ErrorS 'ConvNotFound,
+         ErrorS 'ConvMemberNotFound,
+         ExternalAccess,
+         FederatorAccess,
+         GundeckAccess,
+         Input Env,
+         Input UTCTime,
+         MemberStore
+       ]
+      r,
+    CallsFed 'Galley "on-conversation-updated",
+    CallsFed 'Galley "on-mls-message-sent",
+    CallsFed 'Galley "on-new-remote-conversation"
+  ) =>
   Local UserId ->
   ConnId ->
   ConvId ->
@@ -1023,22 +1081,26 @@ updateOtherMemberUnqualified lusr zcon cnv victim update = do
   updateOtherMemberLocalConv lcnv lusr zcon (tUntagged lvictim) update
 updateOtherMember ::
-  Members
-    '[ ConversationStore,
-       Error FederationError,
-       ErrorS ('ActionDenied 'ModifyOtherConversationMember),
-       ErrorS 'InvalidTarget,
-       ErrorS 'InvalidOperation,
-       ErrorS 'ConvNotFound,
-       ErrorS 'ConvMemberNotFound,
-       ExternalAccess,
-       FederatorAccess,
-       GundeckAccess,
-       Input Env,
-       Input UTCTime,
-       MemberStore
-     ]
-    r =>
+  ( Members
+      '[ ConversationStore,
+         Error FederationError,
+         ErrorS ('ActionDenied 'ModifyOtherConversationMember),
+         ErrorS 'InvalidTarget,
+         ErrorS
 'InvalidOperation,
+         ErrorS 'ConvNotFound,
+         ErrorS 'ConvMemberNotFound,
+         ExternalAccess,
+         FederatorAccess,
+         GundeckAccess,
+         Input Env,
+         Input UTCTime,
+         MemberStore
+       ]
+      r,
+    CallsFed 'Galley "on-conversation-updated",
+    CallsFed 'Galley "on-mls-message-sent",
+    CallsFed 'Galley "on-new-remote-conversation"
+  ) =>
   Local UserId ->
   ConnId ->
   Qualified ConvId ->
@@ -1060,22 +1122,27 @@ updateOtherMemberRemoteConv ::
 updateOtherMemberRemoteConv _ _ _ _ _ = throw FederationNotImplemented
 removeMemberUnqualified ::
-  Members
-    '[ ConversationStore,
-       Error InternalError,
-       ErrorS ('ActionDenied 'RemoveConversationMember),
-       ErrorS 'ConvNotFound,
-       ErrorS 'InvalidOperation,
-       ExternalAccess,
-       FederatorAccess,
-       GundeckAccess,
-       Input Env,
-       Input UTCTime,
-       MemberStore,
-       ProposalStore,
-       TinyLog
-     ]
-    r =>
+  ( Members
+      '[ ConversationStore,
+         Error InternalError,
+         ErrorS ('ActionDenied 'RemoveConversationMember),
+         ErrorS 'ConvNotFound,
+         ErrorS 'InvalidOperation,
+         ExternalAccess,
+         FederatorAccess,
+         GundeckAccess,
+         Input Env,
+         Input UTCTime,
+         MemberStore,
+         ProposalStore,
+         TinyLog
+       ]
+      r,
+    CallsFed 'Galley "leave-conversation",
+    CallsFed 'Galley "on-conversation-updated",
+    CallsFed 'Galley "on-mls-message-sent",
+    CallsFed 'Galley "on-new-remote-conversation"
+  ) =>
   Local UserId ->
   ConnId ->
   ConvId ->
@@ -1087,22 +1154,27 @@ removeMemberUnqualified lusr con cnv victim = do
   removeMemberQualified lusr con (tUntagged lcnv) (tUntagged lvictim)
 removeMemberQualified ::
-  Members
-    '[ ConversationStore,
-       Error InternalError,
-       ErrorS ('ActionDenied 'RemoveConversationMember),
-       ErrorS 'ConvNotFound,
-       ErrorS 'InvalidOperation,
-       ExternalAccess,
-       FederatorAccess,
-       GundeckAccess,
-       Input Env,
-       Input UTCTime,
-       MemberStore,
-       ProposalStore,
-       TinyLog
-     ]
-    r =>
+  ( Members
+      '[ ConversationStore,
+         Error InternalError,
+         ErrorS ('ActionDenied 'RemoveConversationMember),
+         ErrorS 'ConvNotFound,
+         ErrorS 'InvalidOperation,
+         ExternalAccess,
+
FederatorAccess, + GundeckAccess, + Input Env, + Input UTCTime, + MemberStore, + ProposalStore, + TinyLog + ] + r, + CallsFed 'Galley "leave-conversation", + CallsFed 'Galley "on-conversation-updated", + CallsFed 'Galley "on-mls-message-sent", + CallsFed 'Galley "on-new-remote-conversation" + ) => Local UserId -> ConnId -> Qualified ConvId -> @@ -1118,13 +1190,15 @@ removeMemberQualified lusr con qcnv victim = victim removeMemberFromRemoteConv :: - Members - '[ FederatorAccess, - ErrorS ('ActionDenied 'RemoveConversationMember), - ErrorS 'ConvNotFound, - Input UTCTime - ] - r => + ( Members + '[ FederatorAccess, + ErrorS ('ActionDenied 'RemoveConversationMember), + ErrorS 'ConvNotFound, + Input UTCTime + ] + r, + CallsFed 'Galley "leave-conversation" + ) => Remote ConvId -> Local UserId -> Qualified UserId -> @@ -1155,23 +1229,27 @@ removeMemberFromRemoteConv cnv lusr victim -- | Remove a member from a local conversation. removeMemberFromLocalConv :: - Members - '[ ConversationStore, - Error InternalError, - ErrorS ('ActionDenied 'LeaveConversation), - ErrorS ('ActionDenied 'RemoveConversationMember), - ErrorS 'ConvNotFound, - ErrorS 'InvalidOperation, - ExternalAccess, - FederatorAccess, - GundeckAccess, - Input Env, - Input UTCTime, - MemberStore, - ProposalStore, - TinyLog - ] - r => + ( Members + '[ ConversationStore, + Error InternalError, + ErrorS ('ActionDenied 'LeaveConversation), + ErrorS ('ActionDenied 'RemoveConversationMember), + ErrorS 'ConvNotFound, + ErrorS 'InvalidOperation, + ExternalAccess, + FederatorAccess, + GundeckAccess, + Input Env, + Input UTCTime, + MemberStore, + ProposalStore, + TinyLog + ] + r, + CallsFed 'Galley "on-conversation-updated", + CallsFed 'Galley "on-mls-message-sent", + CallsFed 'Galley "on-new-remote-conversation" + ) => Local ConvId -> Local UserId -> Maybe ConnId -> @@ -1193,21 +1271,25 @@ removeMemberFromLocalConv lcnv lusr con victim -- OTR postProteusMessage :: - Members - '[ BotAccess, - BrigAccess, - ClientStore, - 
ConversationStore, - FederatorAccess, - GundeckAccess, - ExternalAccess, - Input Opts, - Input UTCTime, - MemberStore, - TeamStore, - TinyLog - ] - r => + ( Members + '[ BotAccess, + BrigAccess, + ClientStore, + ConversationStore, + FederatorAccess, + GundeckAccess, + ExternalAccess, + Input Opts, + Input UTCTime, + MemberStore, + TeamStore, + TinyLog + ] + r, + CallsFed 'Brig "get-user-clients", + CallsFed 'Galley "on-message-sent", + CallsFed 'Galley "send-message" + ) => Local UserId -> ConnId -> Qualified ConvId -> @@ -1292,7 +1374,9 @@ postBotMessageUnqualified :: TinyLog, Input UTCTime ] - r + r, + CallsFed 'Galley "on-message-sent", + CallsFed 'Brig "get-user-clients" ) => BotId -> ConvId -> @@ -1336,21 +1420,24 @@ postOtrBroadcastUnqualified sender zcon = (postBroadcast sender (Just zcon)) postOtrMessageUnqualified :: - Members - '[ BotAccess, - BrigAccess, - ClientStore, - ConversationStore, - FederatorAccess, - GundeckAccess, - ExternalAccess, - MemberStore, - Input Opts, - Input UTCTime, - TeamStore, - TinyLog - ] - r => + ( Members + '[ BotAccess, + BrigAccess, + ClientStore, + ConversationStore, + FederatorAccess, + GundeckAccess, + ExternalAccess, + MemberStore, + Input Opts, + Input UTCTime, + TeamStore, + TinyLog + ] + r, + CallsFed 'Galley "on-message-sent", + CallsFed 'Brig "get-user-clients" + ) => Local UserId -> ConnId -> ConvId -> @@ -1365,20 +1452,24 @@ postOtrMessageUnqualified sender zcon cnv = (runLocalInput sender . 
postQualifiedOtrMessage User (tUntagged sender) (Just zcon) lcnv) updateConversationName :: - Members - '[ ConversationStore, - Error FederationError, - Error InvalidInput, - ErrorS ('ActionDenied 'ModifyConversationName), - ErrorS 'ConvNotFound, - ErrorS 'InvalidOperation, - ExternalAccess, - FederatorAccess, - GundeckAccess, - Input Env, - Input UTCTime - ] - r => + ( Members + '[ ConversationStore, + Error FederationError, + Error InvalidInput, + ErrorS ('ActionDenied 'ModifyConversationName), + ErrorS 'ConvNotFound, + ErrorS 'InvalidOperation, + ExternalAccess, + FederatorAccess, + GundeckAccess, + Input Env, + Input UTCTime + ] + r, + CallsFed 'Galley "on-conversation-updated", + CallsFed 'Galley "on-mls-message-sent", + CallsFed 'Galley "on-new-remote-conversation" + ) => Local UserId -> ConnId -> Qualified ConvId -> @@ -1393,19 +1484,23 @@ updateConversationName lusr zcon qcnv convRename = do convRename updateUnqualifiedConversationName :: - Members - '[ ConversationStore, - Error InvalidInput, - ErrorS ('ActionDenied 'ModifyConversationName), - ErrorS 'ConvNotFound, - ErrorS 'InvalidOperation, - ExternalAccess, - FederatorAccess, - GundeckAccess, - Input Env, - Input UTCTime - ] - r => + ( Members + '[ ConversationStore, + Error InvalidInput, + ErrorS ('ActionDenied 'ModifyConversationName), + ErrorS 'ConvNotFound, + ErrorS 'InvalidOperation, + ExternalAccess, + FederatorAccess, + GundeckAccess, + Input Env, + Input UTCTime + ] + r, + CallsFed 'Galley "on-conversation-updated", + CallsFed 'Galley "on-mls-message-sent", + CallsFed 'Galley "on-new-remote-conversation" + ) => Local UserId -> ConnId -> ConvId -> @@ -1416,19 +1511,23 @@ updateUnqualifiedConversationName lusr zcon cnv rename = do updateLocalConversationName lusr zcon lcnv rename updateLocalConversationName :: - Members - '[ ConversationStore, - Error InvalidInput, - ErrorS ('ActionDenied 'ModifyConversationName), - ErrorS 'ConvNotFound, - ErrorS 'InvalidOperation, - ExternalAccess, - 
FederatorAccess, - GundeckAccess, - Input Env, - Input UTCTime - ] - r => + ( Members + '[ ConversationStore, + Error InvalidInput, + ErrorS ('ActionDenied 'ModifyConversationName), + ErrorS 'ConvNotFound, + ErrorS 'InvalidOperation, + ExternalAccess, + FederatorAccess, + GundeckAccess, + Input Env, + Input UTCTime + ] + r, + CallsFed 'Galley "on-conversation-updated", + CallsFed 'Galley "on-mls-message-sent", + CallsFed 'Galley "on-new-remote-conversation" + ) => Local UserId -> ConnId -> Local ConvId -> @@ -1439,16 +1538,18 @@ updateLocalConversationName lusr zcon lcnv rename = updateLocalConversation @'ConversationRenameTag lcnv (tUntagged lusr) (Just zcon) rename isTypingQualified :: - Members - '[ GundeckAccess, - ErrorS 'ConvNotFound, - Input (Local ()), - Input UTCTime, - MemberStore, - FederatorAccess, - WaiRoutes - ] - r => + ( Members + '[ GundeckAccess, + ErrorS 'ConvNotFound, + Input (Local ()), + Input UTCTime, + MemberStore, + FederatorAccess, + WaiRoutes + ] + r, + CallsFed 'Galley "on-typing-indicator-updated" + ) => Local UserId -> ConnId -> Qualified ConvId -> diff --git a/services/galley/src/Galley/API/Util.hs b/services/galley/src/Galley/API/Util.hs index 3d16bd5d25..c31f30bc19 100644 --- a/services/galley/src/Galley/API/Util.hs +++ b/services/galley/src/Galley/API/Util.hs @@ -721,7 +721,7 @@ fromConversationCreated loc rc@ConversationCreated {..} = -- | Notify remote users of being added to a new conversation registerRemoteConversationMemberships :: - Member FederatorAccess r => + (Member FederatorAccess r, CallsFed 'Galley "on-conversation-created") => -- | The time stamp when the conversation was created UTCTime -> -- | The domain of the user that created the conversation diff --git a/services/galley/src/Galley/Aws.hs b/services/galley/src/Galley/Aws.hs index 1ef921a705..c1ee608b6f 100644 --- a/services/galley/src/Galley/Aws.hs +++ b/services/galley/src/Galley/Aws.hs @@ -109,12 +109,12 @@ mkEnv lgr mgr opts = do mkAwsEnv g = do baseEnv <- 
AWS.newEnv AWS.discover - <&> AWS.configure (sqs (opts ^. awsEndpoint)) + <&> AWS.configureService (sqs (opts ^. awsEndpoint)) pure $ baseEnv - { AWS.envLogger = awsLogger g, - AWS.envRetryCheck = retryCheck, - AWS.envManager = mgr + { AWS.logger = awsLogger g, + AWS.retryCheck = retryCheck, + AWS.manager = mgr } awsLogger g l = Logger.log g (mapLevel l) . Logger.msg . toLazyByteString mapLevel AWS.Info = Logger.Info @@ -183,5 +183,5 @@ canRetry :: MonadIO m => Either AWS.Error a -> m Bool canRetry (Right _) = pure False canRetry (Left e) = case e of AWS.TransportError (HttpExceptionRequest _ ResponseTimeout) -> pure True - AWS.ServiceError se | se ^. AWS.serviceCode == AWS.ErrorCode "RequestThrottled" -> pure True + AWS.ServiceError se | se ^. AWS.serviceError_code == AWS.ErrorCode "RequestThrottled" -> pure True _ -> pure False diff --git a/services/galley/src/Galley/Intra/Push/Internal.hs b/services/galley/src/Galley/Intra/Push/Internal.hs index bd0165787f..5232489a3c 100644 --- a/services/galley/src/Galley/Intra/Push/Internal.hs +++ b/services/galley/src/Galley/Intra/Push/Internal.hs @@ -1,4 +1,3 @@ -{-# LANGUAGE StrictData #-} {-# LANGUAGE TemplateHaskell #-} -- This file is part of the Wire Server implementation. 
@@ -25,7 +24,6 @@ import Control.Lens (makeLenses, set, view, (.~)) import Data.Aeson (Object) import Data.Id (ConnId, UserId) import Data.Json.Util -import Data.List.Extra (chunksOf) import Data.List.NonEmpty (NonEmpty, nonEmpty) import Data.List1 import Data.Qualified @@ -39,8 +37,7 @@ import Galley.Types.Conversations.Members import Gundeck.Types.Push.V2 (RecipientClients (..)) import qualified Gundeck.Types.Push.V2 as Gundeck import Imports hiding (forkIO) -import Safe (headDef, tailDef) -import UnliftIO.Async (mapConcurrently) +import UnliftIO.Async (mapConcurrently_) import Wire.API.Event.Conversation (Event (evtFrom)) import qualified Wire.API.Event.FeatureConfig as FeatureConfig import qualified Wire.API.Event.Team as Teams @@ -102,33 +99,51 @@ pushLocal ps = do let limit = currentFanoutLimit opts -- Do not fan out for very large teams let (asyncs, syncs) = partition _pushAsync (removeIfLargeFanout limit $ toList ps) - traverse_ (asyncCall Gundeck . json) (pushes asyncs) - void $ mapConcurrently (call Gundeck . json) (pushes syncs) + traverse_ (asyncCall Gundeck <=< jsonChunkedIO) (pushes asyncs) + mapConcurrently_ (call Gundeck <=< jsonChunkedIO) (pushes syncs) where - pushes = fst . foldr chunk ([], 0) - chunk p (pss, !n) = - let r = recipientList p - nr = length r - in if n + nr > maxRecipients - then - let pss' = map (pure . toPush p) (chunksOf maxRecipients r) - in (pss' ++ pss, 0) - else - let hd = headDef [] pss - tl = tailDef [] pss - in ((toPush p r : hd) : tl, n + nr) + pushes :: [PushTo UserId] -> [[Gundeck.Push]] + pushes = map (map (\p -> toPush p (recipientList p))) . 
chunk 0 [] + + chunk :: Int -> [PushTo a] -> [PushTo a] -> [[PushTo a]] + chunk _ acc [] = [acc] + chunk n acc (y : ys) + | n >= maxRecipients = acc : chunk 0 [] (y : ys) + | otherwise = + let totalLength = (n + length (_pushRecipients y)) + in if totalLength > maxRecipients + then + let (y1, y2) = splitPush (maxRecipients - n) y + in chunk maxRecipients (y1 : acc) (y2 : ys) + else chunk totalLength (y : acc) ys + + -- n must be strictly > 0 and < length (_pushRecipients p) + splitPush :: Int -> PushTo a -> (PushTo a, PushTo a) + splitPush n p = + let (r1, r2) = splitAt n (toList (_pushRecipients p)) + in (p {_pushRecipients = fromJust $ maybeList1 r1}, p {_pushRecipients = fromJust $ maybeList1 r2}) + + maxRecipients :: Int maxRecipients = 128 + + recipientList :: PushTo UserId -> [Gundeck.Recipient] recipientList p = map (toRecipient p) . toList $ _pushRecipients p + + toPush :: PushTo user -> [Gundeck.Recipient] -> Gundeck.Push toPush p r = let pload = Gundeck.singletonPayload (pushJson p) in Gundeck.newPush (pushOrigin p) (unsafeRange (Set.fromList r)) pload & Gundeck.pushOriginConnection .~ _pushConn p & Gundeck.pushTransient .~ _pushTransient p & maybe id (set Gundeck.pushNativePriority) (_pushNativePriority p) + + toRecipient :: PushTo user -> RecipientBy UserId -> Gundeck.Recipient toRecipient p r = Gundeck.recipient (_recipientUserId r) (_pushRoute p) & Gundeck.recipientClients .~ _recipientClients r + -- Ensure that under no circumstances we exceed the threshold + removeIfLargeFanout :: Integral a => Range n m a -> [PushTo user] -> [PushTo user] removeIfLargeFanout limit = filter ( \p -> diff --git a/services/galley/src/Galley/Options.hs b/services/galley/src/Galley/Options.hs index edb3850d29..844ca39064 100644 --- a/services/galley/src/Galley/Options.hs +++ b/services/galley/src/Galley/Options.hs @@ -25,6 +25,7 @@ module Galley.Options setExposeInvitationURLsTeamAllowlist, setMaxConvSize, setIntraListing, + setDisabledAPIVersions, 
setConversationCodeURI, setConcurrentDeletionEvents, setDeleteConvThrottleMillis, @@ -66,6 +67,7 @@ import Imports import System.Logger.Extended (Level, LogFormat) import Util.Options import Util.Options.Common +import Wire.API.Routes.Version import Wire.API.Team.Member data Settings = Settings @@ -113,7 +115,8 @@ data Settings = Settings _setEnableIndexedBillingTeamMembers :: !(Maybe Bool), _setMlsPrivateKeyPaths :: !(Maybe MLSPrivateKeyPaths), -- | FUTUREWORK: 'setFeatureFlags' should be renamed to 'setFeatureConfigs' in all types. - _setFeatureFlags :: !FeatureFlags + _setFeatureFlags :: !FeatureFlags, + _setDisabledAPIVersions :: Maybe (Set Version) } deriving (Show, Generic) diff --git a/services/galley/src/Galley/Run.hs b/services/galley/src/Galley/Run.hs index 81dfa216da..b528a6c054 100644 --- a/services/galley/src/Galley/Run.hs +++ b/services/galley/src/Galley/Run.hs @@ -93,7 +93,7 @@ mkApp opts = let logger = env ^. App.applog let middlewares = - versionMiddleware + versionMiddleware (opts ^. optSettings . setDisabledAPIVersions . traverse) . servantPlusWAIPrometheusMiddleware API.sitemap (Proxy @CombinedAPI) . GZip.gunzip . 
GZip.gzip GZip.def diff --git a/services/galley/test/integration/API.hs b/services/galley/test/integration/API.hs index 1f37559070..f193cbfee6 100644 --- a/services/galley/test/integration/API.hs +++ b/services/galley/test/integration/API.hs @@ -1141,8 +1141,10 @@ postMessageQualifiedRemoteOwningBackendFailure = do let brigApi _ = mkHandler @(FedApi 'Brig) EmptyAPI let galleyApi _ = mkHandler @(FedApi 'Galley) $ - Named @"send-message" $ \_ _ -> - throwError err503 {errBody = "Down for maintenance."} + Named @"send-message" $ + callsFed $ + callsFed $ \_ _ -> + throwError err503 {errBody = "Down for maintenance."} (resp2, _requests) <- postProteusMessageQualifiedWithMockFederator aliceUnqualified aliceClient convId [] "data" Message.MismatchReportAll brigApi galleyApi @@ -1181,8 +1183,10 @@ postMessageQualifiedRemoteOwningBackendSuccess = do message = [(bobOwningDomain, bobClient, "text-for-bob"), (deeRemote, deeClient, "text-for-dee")] brigApi _ = mkHandler @(FedApi 'Brig) EmptyAPI galleyApi _ = mkHandler @(FedApi 'Galley) $ - Named @"send-message" $ \_ _ -> - pure (F.MessageSendResponse (Right mss)) + Named @"send-message" $ + callsFed $ + callsFed $ \_ _ -> + pure (F.MessageSendResponse (Right mss)) (resp2, _requests) <- postProteusMessageQualifiedWithMockFederator aliceUnqualified aliceClient convId message "data" Message.MismatchReportAll brigApi galleyApi diff --git a/services/galley/test/integration/API/Federation/Util.hs b/services/galley/test/integration/API/Federation/Util.hs index 6bdd39e2f7..727a97c4f2 100644 --- a/services/galley/test/integration/API/Federation/Util.hs +++ b/services/galley/test/integration/API/Federation/Util.hs @@ -23,6 +23,7 @@ import GHC.TypeLits import Imports import Servant import Wire.API.Federation.Domain +import Wire.API.MakesFederatedCall import Wire.API.Routes.Named import Wire.API.VersionInfo @@ -38,6 +39,9 @@ instance HasTrivialHandler api => HasTrivialHandler ((path :: Symbol) :> api) wh instance HasTrivialHandler api => 
HasTrivialHandler (OriginDomainHeader :> api) where trivialHandler name _ = trivialHandler @api name +instance HasTrivialHandler api => HasTrivialHandler (MakesFederatedCall comp name :> api) where + trivialHandler name _ = trivialHandler @api name + instance HasTrivialHandler api => HasTrivialHandler (ReqBody cs a :> api) where trivialHandler name _ = trivialHandler @api name diff --git a/services/galley/test/integration/TestSetup.hs b/services/galley/test/integration/TestSetup.hs index e01fc52b14..9714e98fc4 100644 --- a/services/galley/test/integration/TestSetup.hs +++ b/services/galley/test/integration/TestSetup.hs @@ -130,7 +130,7 @@ instance MonadHttp TestM where runFedClient :: forall (name :: Symbol) comp m api. - ( HasFedEndpoint comp api name, + ( HasUnsafeFedEndpoint comp api name, Servant.HasClient Servant.ClientM api, MonadIO m ) => diff --git a/services/galley/test/unit/Test/Galley/Mapping.hs b/services/galley/test/unit/Test/Galley/Mapping.hs index 7bffa0c802..b1bd89808d 100644 --- a/services/galley/test/unit/Test/Galley/Mapping.hs +++ b/services/galley/test/unit/Test/Galley/Mapping.hs @@ -25,10 +25,15 @@ import Data.Domain import Data.Id import Data.Qualified import qualified Data.Set as Set +import Galley.API.Error (InternalError) import Galley.API.Mapping import qualified Galley.Data.Conversation as Data import Galley.Types.Conversations.Members import Imports +import Polysemy (Sem) +import qualified Polysemy as P +import qualified Polysemy.Error as P +import qualified Polysemy.TinyLog as P import Test.Tasty import Test.Tasty.QuickCheck import Wire.API.Conversation @@ -38,35 +43,39 @@ import Wire.API.Federation.API.Galley ( RemoteConvMembers (..), RemoteConversation (..), ) +import qualified Wire.Sem.Logger as P + +run :: Sem '[P.TinyLog, P.Error InternalError] a -> Either InternalError a +run = P.run . P.runError . 
P.discardLogs tests :: TestTree tests = testGroup "ConversationMapping" [ testProperty "conversation view for a valid user is non-empty" $ - \(ConvWithLocalUser c luid) -> isJust (conversationViewMaybe luid c), + \(ConvWithLocalUser c luid) -> isRight (run (conversationView luid c)), testProperty "self user in conversation view is correct" $ \(ConvWithLocalUser c luid) -> - fmap (memId . cmSelf . cnvMembers) (conversationViewMaybe luid c) - == Just (tUntagged luid), + fmap (memId . cmSelf . cnvMembers) (run (conversationView luid c)) + == Right (tUntagged luid), testProperty "conversation view metadata is correct" $ \(ConvWithLocalUser c luid) -> - fmap cnvMetadata (conversationViewMaybe luid c) - == Just (Data.convMetadata c), + fmap cnvMetadata (run (conversationView luid c)) + == Right (Data.convMetadata c), testProperty "other members in conversation view do not contain self" $ - \(ConvWithLocalUser c luid) -> case conversationViewMaybe luid c of - Nothing -> False - Just cnv -> + \(ConvWithLocalUser c luid) -> case run $ conversationView luid c of + Left _ -> False + Right cnv -> tUntagged luid `notElem` map omQualifiedId (cmOthers (cnvMembers cnv)), testProperty "conversation view contains all users" $ \(ConvWithLocalUser c luid) -> - fmap (sort . cnvUids) (conversationViewMaybe luid c) - == Just (sort (convUids (tDomain luid) c)), + fmap (sort . 
cnvUids) (run (conversationView luid c)) + == Right (sort (convUids (tDomain luid) c)), testProperty "conversation view for an invalid user is empty" $ \(RandomConversation c) luid -> notElem (tUnqualified luid) (map lmId (Data.convLocalMembers c)) ==> - isNothing (conversationViewMaybe luid c), + isLeft (run (conversationView luid c)), testProperty "remote conversation view for a valid user is non-empty" $ \(ConvWithRemoteUser c ruid) dom -> qDomain (tUntagged ruid) /= dom ==> diff --git a/services/gundeck/default.nix b/services/gundeck/default.nix index 3b4c5dc779..917012e8fe 100644 --- a/services/gundeck/default.nix +++ b/services/gundeck/default.nix @@ -6,6 +6,7 @@ , aeson , aeson-pretty , amazonka +, amazonka-core , amazonka-sns , amazonka-sqs , async @@ -99,6 +100,7 @@ mkDerivation { libraryHaskellDepends = [ aeson amazonka + amazonka-core amazonka-sns amazonka-sqs async diff --git a/services/gundeck/gundeck.cabal b/services/gundeck/gundeck.cabal index 82f74e893e..1a079e716f 100644 --- a/services/gundeck/gundeck.cabal +++ b/services/gundeck/gundeck.cabal @@ -98,9 +98,10 @@ library build-depends: aeson >=2.0.1.0 - , amazonka >=1.3.7 - , amazonka-sns >=1.3.7 - , amazonka-sqs >=1.3.7 + , amazonka >=2 + , amazonka-core >=2 + , amazonka-sns >=2 + , amazonka-sqs >=2 , async >=2.0 , attoparsec >=0.10 , auto-update >=0.1 diff --git a/services/gundeck/src/Gundeck/Aws.hs b/services/gundeck/src/Gundeck/Aws.hs index 54c944dc88..ab9f7e839d 100644 --- a/services/gundeck/src/Gundeck/Aws.hs +++ b/services/gundeck/src/Gundeck/Aws.hs @@ -54,8 +54,9 @@ module Gundeck.Aws ) where -import Amazonka (AWSRequest, AWSResponse, serviceAbbrev, serviceCode, serviceMessage, serviceStatus) +import Amazonka (AWSRequest, AWSResponse, serviceError_abbrev, serviceError_code, serviceError_message, serviceError_status) import qualified Amazonka as AWS +import qualified Amazonka.Data.Text as AWS import qualified Amazonka.SNS as SNS import qualified Amazonka.SNS.Lens as SNS import qualified 
Amazonka.SQS as SQS @@ -160,14 +161,14 @@ mkEnv lgr opts mgr = do mkAwsEnv g sqs sns = do baseEnv <- AWS.newEnv AWS.discover - <&> AWS.configure sqs - <&> AWS.configure (sns & set AWS.serviceTimeout (Just (AWS.Seconds 5))) + <&> AWS.configureService sqs + <&> AWS.configureService (sns & set AWS.service_timeout (Just (AWS.Seconds 5))) pure $ baseEnv - { AWS.envLogger = awsLogger g, - AWS.envRegion = opts ^. optAws . awsRegion, - AWS.envRetryCheck = retryCheck, - AWS.envManager = mgr + { AWS.logger = awsLogger g, + AWS.region = opts ^. optAws . awsRegion, + AWS.retryCheck = retryCheck, + AWS.manager = mgr } awsLogger g l = Logger.log g (mapLevel l) . Logger.msg . toLazyByteString @@ -240,8 +241,8 @@ updateEndpoint us tk arn = do Right _ -> pure () Left x@(AWS.ServiceError e) | is "SNS" 400 x - && AWS.newErrorCode "InvalidParameter" == e ^. serviceCode - && isMetadataLengthError (e ^. serviceMessage) -> + && AWS.newErrorCode "InvalidParameter" == e ^. serviceError_code + && isMetadataLengthError (e ^. serviceError_message) -> throwM $ InvalidCustomData arn Left x -> throwM $ @@ -303,16 +304,16 @@ createEndpoint u tr arnEnv app token = do Nothing -> throwM NoEndpointArn Just s -> Right <$> readArn s Left x@(AWS.ServiceError e) - | is "SNS" 400 x && AWS.newErrorCode "InvalidParameter" == e ^. serviceCode, - Just ep <- parseExistsError (e ^. serviceMessage) -> + | is "SNS" 400 x && AWS.newErrorCode "InvalidParameter" == e ^. serviceError_code, + Just ep <- parseExistsError (e ^. serviceError_message) -> pure (Left (EndpointInUse ep)) | is "SNS" 400 x - && AWS.newErrorCode "InvalidParameter" == e ^. serviceCode - && isLengthError (e ^. serviceMessage) -> + && AWS.newErrorCode "InvalidParameter" == e ^. serviceError_code + && isLengthError (e ^. serviceError_message) -> pure (Left (TokenTooLong $ tokenLength token)) | is "SNS" 400 x - && AWS.newErrorCode "InvalidParameter" == e ^. serviceCode - && isTokenError (e ^. 
serviceMessage) -> do + && AWS.newErrorCode "InvalidParameter" == e ^. serviceError_code + && isTokenError (e ^. serviceError_message) -> do debug $ msg @Text "InvalidParameter: InvalidToken" . field "response" (show x) @@ -409,19 +410,19 @@ publish arn txt attrs = do case res of Right _ -> pure (Right ()) Left x@(AWS.ServiceError e) - | is "SNS" 400 x && AWS.newErrorCode "EndpointDisabled" == e ^. serviceCode -> + | is "SNS" 400 x && AWS.newErrorCode "EndpointDisabled" == e ^. serviceError_code -> pure (Left (EndpointDisabled arn)) | is "SNS" 400 x - && AWS.newErrorCode "InvalidParameter" == e ^. serviceCode - && isProtocolSizeError (e ^. serviceMessage) -> + && AWS.newErrorCode "InvalidParameter" == e ^. serviceError_code + && isProtocolSizeError (e ^. serviceError_message) -> pure (Left (PayloadTooLarge arn)) | is "SNS" 400 x - && AWS.newErrorCode "InvalidParameter" == e ^. serviceCode - && isSnsSizeError (e ^. serviceMessage) -> + && AWS.newErrorCode "InvalidParameter" == e ^. serviceError_code + && isSnsSizeError (e ^. serviceError_message) -> pure (Left (PayloadTooLarge arn)) | is "SNS" 400 x - && AWS.newErrorCode "InvalidParameter" == e ^. serviceCode - && isArnError (e ^. serviceMessage) -> + && AWS.newErrorCode "InvalidParameter" == e ^. serviceError_code + && isArnError (e ^. serviceError_message) -> pure (Left (InvalidEndpoint arn)) Left x -> throwM (GeneralError x) where @@ -488,7 +489,7 @@ send :: AWSRequest r => AWS.Env -> r -> Amazon (AWSResponse r) send env r = either (throwM . GeneralError) pure =<< sendCatch env r is :: AWS.Abbrev -> Int -> AWS.Error -> Bool -is srv s (AWS.ServiceError e) = srv == e ^. serviceAbbrev && s == statusCode (e ^. serviceStatus) +is srv s (AWS.ServiceError e) = srv == e ^. serviceError_abbrev && s == statusCode (e ^. 
serviceError_status) is _ _ _ = False isTimeout :: MonadIO m => Either AWS.Error a -> m Bool diff --git a/services/gundeck/src/Gundeck/Options.hs b/services/gundeck/src/Gundeck/Options.hs index 3fa6d2044a..95eb235f41 100644 --- a/services/gundeck/src/Gundeck/Options.hs +++ b/services/gundeck/src/Gundeck/Options.hs @@ -28,6 +28,7 @@ import Imports import System.Logger.Extended (Level, LogFormat) import Util.Options import Util.Options.Common +import Wire.API.Routes.Version newtype NotificationTTL = NotificationTTL {notificationTTLSeconds :: Word32} @@ -73,7 +74,8 @@ data Settings = Settings -- ensures that there is only one request every 20 seconds. -- However, that parameter is not honoured when using fake-sqs -- (where throttling can thus make sense) - _setSqsThrottleMillis :: !(Maybe Int) + _setSqsThrottleMillis :: !(Maybe Int), + _setDisabledAPIVersions :: !(Maybe (Set Version)) } deriving (Show, Generic) diff --git a/services/gundeck/src/Gundeck/Run.hs b/services/gundeck/src/Gundeck/Run.hs index 012c7802d0..c8fc2eb908 100644 --- a/services/gundeck/src/Gundeck/Run.hs +++ b/services/gundeck/src/Gundeck/Run.hs @@ -80,7 +80,7 @@ run o = do where middleware :: Env -> Wai.Middleware middleware e = - versionMiddleware + versionMiddleware (fold (o ^. optSettings . setDisabledAPIVersions)) . waiPrometheusMiddleware sitemap . GZip.gunzip . 
GZip.gzip GZip.def diff --git a/services/proxy/src/Proxy/API.hs b/services/proxy/src/Proxy/API.hs index 80c3f943f6..4cb86ef558 100644 --- a/services/proxy/src/Proxy/API.hs +++ b/services/proxy/src/Proxy/API.hs @@ -33,6 +33,10 @@ sitemap e = do Public.sitemap e routesInternal +-- | IF YOU MODIFY THIS, BE AWARE OF: +-- +-- >>> /libs/wire-api/src/Wire/API/Routes/Public/Proxy.hs +-- >>> https://wearezeta.atlassian.net/browse/SQSERVICES-1647 routesInternal :: Routes a Proxy () routesInternal = do head "/i/status" (continue $ const (pure empty)) true diff --git a/services/proxy/src/Proxy/API/Public.hs b/services/proxy/src/Proxy/API/Public.hs index 4abd83367a..6c12b314e4 100644 --- a/services/proxy/src/Proxy/API/Public.hs +++ b/services/proxy/src/Proxy/API/Public.hs @@ -47,6 +47,10 @@ import Proxy.Proxy import System.Logger.Class hiding (Error, info, render) import qualified System.Logger.Class as Logger +-- | IF YOU MODIFY THIS, BE AWARE OF: +-- +-- >>> /libs/wire-api/src/Wire/API/Routes/Public/Proxy.hs +-- >>> https://wearezeta.atlassian.net/browse/SQSERVICES-1647 sitemap :: Env -> Routes a Proxy () sitemap e = do get diff --git a/services/proxy/src/Proxy/Options.hs b/services/proxy/src/Proxy/Options.hs index 2397fd0438..58259956ba 100644 --- a/services/proxy/src/Proxy/Options.hs +++ b/services/proxy/src/Proxy/Options.hs @@ -28,6 +28,7 @@ module Proxy.Options logNetStrings, logFormat, mockOpts, + disabledAPIVersions, ) where @@ -36,6 +37,7 @@ import Data.Aeson import Data.Aeson.TH import Imports import System.Logger.Extended (Level (Debug), LogFormat) +import Wire.API.Routes.Version data Opts = Opts { -- | Host to listen on @@ -54,7 +56,8 @@ data Opts = Opts -- | Use netstrings encoding _logNetStrings :: !(Maybe (Last Bool)), -- | choose Encoding - _logFormat :: !(Maybe (Last LogFormat)) + _logFormat :: !(Maybe (Last LogFormat)), + _disabledAPIVersions :: !(Maybe (Set Version)) } deriving (Show, Generic) @@ -73,5 +76,6 @@ mockOpts secrets = _maxConns = 0, _logLevel = 
Debug, _logNetStrings = pure $ pure $ True, - _logFormat = mempty + _logFormat = mempty, + _disabledAPIVersions = mempty } diff --git a/services/proxy/src/Proxy/Run.hs b/services/proxy/src/Proxy/Run.hs index 69b209b0bb..1eb6f1c1e9 100644 --- a/services/proxy/src/Proxy/Run.hs +++ b/services/proxy/src/Proxy/Run.hs @@ -40,7 +40,7 @@ run o = do let rtree = compile (sitemap e) let app r k = runProxy e r (route rtree r k) let middleware = - versionMiddleware + versionMiddleware (fold (o ^. disabledAPIVersions)) . waiPrometheusMiddleware (sitemap e) . catchErrors (e ^. applog) [Right m] runSettingsWithShutdown s (middleware app) Nothing `finally` destroyEnv e diff --git a/services/proxy/test/scripts/.gitignore b/services/proxy/test/scripts/.gitignore new file mode 100644 index 0000000000..577f4d21fe --- /dev/null +++ b/services/proxy/test/scripts/.gitignore @@ -0,0 +1 @@ +/proxy-test diff --git a/services/proxy/test/scripts/proxy-test.sh b/services/proxy/test/scripts/proxy-test.sh new file mode 100755 index 0000000000..3f8ee9ed3b --- /dev/null +++ b/services/proxy/test/scripts/proxy-test.sh @@ -0,0 +1,85 @@ +#!/bin/bash + +set -o pipefail +set -o errexit + +cd "$(dirname "${BASH_SOURCE[0]}")" + +echo " +run this script to test proxy on any running wire-server +instance. this replaces more thorough integration tests, since +integration tests for just proxy without the proxied services +installed is hard and inadequate. + +WIRE_BACKEND: $WIRE_BACKEND +WIRE_ADMIN: $WIRE_ADMIN +WIRE_PASSWD: +" + +set -x + +fail() { + printf "\e[31;1m%s\e[0m\n" "$*" >&2 + exit 1 +} + +check_login() { + echo "checking login..." + status_code=$(curl --write-out '%{http_code}' --silent --output /dev/null -I -X GET --header "Authorization: Bearer $BEARER" "$WIRE_BACKEND"/self) + + if [[ "$status_code" == 200 ]]; then + echo "login: OK" + else + echo "status code: $status_code" + echo "this may be because your password contains special characters that would need to be quoted better in this script." 
+ fail "login: FAIL" + fi +} + +check_url() { + export testnum=$1 + export verb=$2 + export uri=$3 + export status_want=$4 + + status_have=$(curl --write-out '%{http_code}' --silent --output "./proxy-test/$testnum.txt" -I -X "$verb" \ + --header "Authorization: Bearer $BEARER" \ + --header "Content-Type: application/json" \ + "$uri") + + curl -X "$verb" \ + --header "Authorization: Bearer $BEARER" \ + --header "Content-Type: application/json" \ + "$uri" > ./proxy-test/"$testnum".json + + if [[ "$status_have" == "$status_want" ]]; then + echo "proxy $uri: OK" + file ./proxy-test/"$testnum".json | grep -q '\(JSON\|PNG\)' || ( echo "received something weird!"; exit 1 ) + else + echo "expected status code: $status_want, but got $status_have" + fail "proxy $uri: FAIL (check ./proxy-test/$testnum.json for details)" + fi +} + +get_access_token() { + BEARER=$(curl -X POST --header 'Content-Type: application/json' --header 'Accept: application/json' \ + -d '{"email":"'"$WIRE_ADMIN"'","password":"'"$WIRE_PASSWD"'"}' \ + "$WIRE_BACKEND"/login'?persist=false' \ + | jq -r .access_token) +} + + +mkdir -p ./proxy-test + +get_access_token +check_login + +check_url "1" "GET" "$WIRE_BACKEND"/api/swagger.json 200 +check_url "2" "GET" "$WIRE_BACKEND"'/v2/proxy/giphy/v1/gifs/search?limit=100&offset=0&q=kitty' 200 +check_url "3" "GET" "$WIRE_BACKEND"'/v2/proxy/youtube/v3/search' 200 +check_url "4" "GET" "$WIRE_BACKEND"'/v2/proxy/googlemaps/api/staticmap?center=Berlin&zoom=14&size=400x400' 200 +check_url "5" "GET" "$WIRE_BACKEND"'/v2/proxy/googlemaps/maps/api/geocode/json?place_id=ChIJeRpOeF67j4AR9ydy_PIzPuM' 200 + +# manually: +# curl -XGET http://localhost:8080/i/status # from proxy pod +# curl -XHEAD http://localhost:8080/i/status # from proxy pod diff --git a/services/spar/src/Spar/API.hs b/services/spar/src/Spar/API.hs index 0a81b08c3a..e0a6cd861b 100644 --- a/services/spar/src/Spar/API.hs +++ b/services/spar/src/Spar/API.hs @@ -69,6 +69,7 @@ import Spar.App import 
Spar.CanonicalInterpreter import Spar.Error import qualified Spar.Intra.BrigApp as Brig +import Spar.Options import Spar.Orphans () import Spar.Scim import Spar.Sem.AReqIDStore (AReqIDStore) diff --git a/services/spar/src/Spar/App.hs b/services/spar/src/Spar/App.hs index 429eac99c0..e92f3f41a9 100644 --- a/services/spar/src/Spar/App.hs +++ b/services/spar/src/Spar/App.hs @@ -64,6 +64,7 @@ import Servant import qualified Servant.Multipart as Multipart import Spar.Error hiding (sparToServerErrorWithLogging) import qualified Spar.Intra.BrigApp as Intra +import Spar.Options import Spar.Orphans () import Spar.Sem.AReqIDStore (AReqIDStore) import Spar.Sem.BrigAccess (BrigAccess) diff --git a/services/spar/src/Spar/CanonicalInterpreter.hs b/services/spar/src/Spar/CanonicalInterpreter.hs index 02475109a2..75170b8e2d 100644 --- a/services/spar/src/Spar/CanonicalInterpreter.hs +++ b/services/spar/src/Spar/CanonicalInterpreter.hs @@ -33,6 +33,7 @@ import Polysemy.Input (Input, runInputConst) import Servant import Spar.App hiding (sparToServerErrorWithLogging) import Spar.Error +import Spar.Options import Spar.Orphans () import Spar.Sem.AReqIDStore (AReqIDStore) import Spar.Sem.AReqIDStore.Cassandra (aReqIDStoreToCassandra) diff --git a/services/spar/src/Spar/Data.hs b/services/spar/src/Spar/Data.hs index 3c53a9d9aa..375533c2f6 100644 --- a/services/spar/src/Spar/Data.hs +++ b/services/spar/src/Spar/Data.hs @@ -44,6 +44,7 @@ import Imports import SAML2.Util (renderURI) import qualified SAML2.WebSSO as SAML import qualified SAML2.WebSSO.Types.Email as SAMLEmail +import Spar.Options import Wire.API.User.Saml -- | A lower bound: @schemaVersion <= whatWeFoundOnCassandra@, not @==@. diff --git a/services/spar/src/Spar/Options.hs b/services/spar/src/Spar/Options.hs index de3eed04fb..ca2da7a3b9 100644 --- a/services/spar/src/Spar/Options.hs +++ b/services/spar/src/Spar/Options.hs @@ -19,26 +19,70 @@ -- with this program. If not, see . -- | Reading the Spar config. 
--- --- The config type itself, 'Opts', is defined in "Spar.Types". module Spar.Options - ( getOpts, + ( Opts' (..), + Opts, + DerivedOpts (..), + getOpts, deriveOpts, readOptsFile, + maxttlAuthreqDiffTime, ) where import Control.Exception import Control.Lens +import Control.Monad.Except +import Data.Aeson hiding (fieldLabelModifier) import qualified Data.ByteString as SBS +import Data.String.Conversions +import Data.Time import qualified Data.Yaml as Yaml import Imports import Options.Applicative +import SAML2.WebSSO import qualified SAML2.WebSSO as SAML +import System.Logger.Extended (LogFormat) import Text.Ascii (ascii) -import URI.ByteString as URI +import URI.ByteString +import Util.Options +import Wire.API.Routes.Version +import Wire.API.User.Orphans () import Wire.API.User.Saml +type Opts = Opts' DerivedOpts + +data Opts' a = Opts + { saml :: !SAML.Config, + brig :: !Endpoint, + galley :: !Endpoint, + cassandra :: !CassandraOpts, + maxttlAuthreq :: !(TTL "authreq"), + maxttlAuthresp :: !(TTL "authresp"), + -- | The maximum number of SCIM tokens that we will allow teams to have. + maxScimTokens :: !Int, + -- | The maximum size of rich info. Should be in sync with 'Brig.Types.richInfoLimit'. + richInfoLimit :: !Int, + -- | Wire/AWS specific; optional; used to discover Cassandra instance + -- IPs using describe-instances. + discoUrl :: !(Maybe Text), + logNetStrings :: !(Maybe (Last Bool)), + logFormat :: !(Maybe (Last LogFormat)), + disabledAPIVersions :: !(Maybe (Set Version)), + derivedOpts :: !a + } + deriving (Functor, Show, Generic) + +instance FromJSON (Opts' (Maybe ())) + +data DerivedOpts = DerivedOpts + { derivedOptsScimBaseURI :: !URI + } + deriving (Show, Generic) + +maxttlAuthreqDiffTime :: Opts -> NominalDiffTime +maxttlAuthreqDiffTime = ttlToNominalDiffTime . maxttlAuthreq + type OptsRaw = Opts' (Maybe ()) -- | Throws an exception if no config file is found. 
diff --git a/services/spar/src/Spar/Run.hs b/services/spar/src/Spar/Run.hs index 80e7013291..89b2ff6aec 100644 --- a/services/spar/src/Spar/Run.hs +++ b/services/spar/src/Spar/Run.hs @@ -49,12 +49,12 @@ import Spar.API (API, app) import Spar.App import qualified Spar.Data as Data import Spar.Data.Instances () +import Spar.Options import Spar.Orphans () import System.Logger.Class (Logger) import qualified System.Logger.Extended as Log import Util.Options (casEndpoint, casFilterNodesByDatacentre, casKeyspace, epHost, epPort) import Wire.API.Routes.Version.Wai -import Wire.API.User.Saml as Types import Wire.Sem.Logger.TinyLog ---------------------------------------------------------------------- @@ -62,12 +62,12 @@ import Wire.Sem.Logger.TinyLog initCassandra :: Opts -> Logger -> IO ClientState initCassandra opts lgr = do - let cassOpts = Types.cassandra opts + let cassOpts = cassandra opts connectString <- maybe (Cas.initialContactsPlain (cassOpts ^. casEndpoint . epHost)) (Cas.initialContactsDisco "cassandra_spar" . cs) - (Types.discoUrl opts) + (discoUrl opts) cas <- Cas.init $ Cas.defSettings @@ -115,7 +115,7 @@ mkApp sparCtxOpts = do . Bilge.port (sparCtxOpts ^. to galley . epPort) $ Bilge.empty let wrappedApp = - versionMiddleware + versionMiddleware (fold (disabledAPIVersions sparCtxOpts)) . WU.heavyDebugLogging heavyLogOnly logLevel sparCtxLogger . servantPrometheusMiddleware (Proxy @API) . 
WU.catchErrors sparCtxLogger [] diff --git a/services/spar/src/Spar/Scim.hs b/services/spar/src/Spar/Scim.hs index 313987866d..72c3afa041 100644 --- a/services/spar/src/Spar/Scim.hs +++ b/services/spar/src/Spar/Scim.hs @@ -77,6 +77,7 @@ import Spar.Error ( SparCustomError (SparScimError), SparError, ) +import Spar.Options import Spar.Scim.Auth import Spar.Scim.User import Spar.Sem.BrigAccess (BrigAccess) @@ -96,7 +97,6 @@ import qualified Web.Scim.Schema.Error as Scim import qualified Web.Scim.Schema.Schema as Scim.Schema import qualified Web.Scim.Server as Scim import Wire.API.Routes.Public.Spar -import Wire.API.User.Saml (Opts) import Wire.API.User.Scim import Wire.Sem.Logger (Logger) import Wire.Sem.Now (Now) diff --git a/services/spar/src/Spar/Scim/Auth.hs b/services/spar/src/Spar/Scim/Auth.hs index 0a75c49fb2..12081181f9 100644 --- a/services/spar/src/Spar/Scim/Auth.hs +++ b/services/spar/src/Spar/Scim/Auth.hs @@ -51,6 +51,7 @@ import Servant (NoContent (NoContent), ServerT, (:<|>) ((:<|>))) import Spar.App (throwSparSem) import qualified Spar.Error as E import qualified Spar.Intra.BrigApp as Intra.Brig +import Spar.Options import Spar.Sem.BrigAccess (BrigAccess) import qualified Spar.Sem.BrigAccess as BrigAccess import Spar.Sem.GalleyAccess (GalleyAccess) @@ -63,7 +64,6 @@ import qualified Web.Scim.Handler as Scim import qualified Web.Scim.Schema.Error as Scim import Wire.API.Routes.Public.Spar (APIScimToken) import Wire.API.User as User -import Wire.API.User.Saml (Opts, maxScimTokens) import Wire.API.User.Scim as Api import Wire.Sem.Now (Now) import qualified Wire.Sem.Now as Now diff --git a/services/spar/src/Spar/Scim/User.hs b/services/spar/src/Spar/Scim/User.hs index c84279743d..2215547ca4 100644 --- a/services/spar/src/Spar/Scim/User.hs +++ b/services/spar/src/Spar/Scim/User.hs @@ -69,6 +69,7 @@ import qualified SAML2.WebSSO as SAML import Spar.App (getUserByUrefUnsafe, getUserIdByScimExternalId) import qualified Spar.App import qualified 
Spar.Intra.BrigApp as Brig +import Spar.Options import Spar.Scim.Auth () import Spar.Scim.Types (normalizeLikeStored) import qualified Spar.Scim.Types as ST @@ -102,7 +103,6 @@ import Wire.API.Team.Role import Wire.API.User import Wire.API.User.IdentityProvider (IdP) import qualified Wire.API.User.RichInfo as RI -import Wire.API.User.Saml (Opts, derivedOpts, derivedOptsScimBaseURI, richInfoLimit) import Wire.API.User.Scim (ScimTokenInfo (..)) import qualified Wire.API.User.Scim as ST import Wire.Sem.Logger (Logger) diff --git a/services/spar/src/Spar/Sem/AReqIDStore/Cassandra.hs b/services/spar/src/Spar/Sem/AReqIDStore/Cassandra.hs index a5716e0ca3..8c04ed8692 100644 --- a/services/spar/src/Spar/Sem/AReqIDStore/Cassandra.hs +++ b/services/spar/src/Spar/Sem/AReqIDStore/Cassandra.hs @@ -30,6 +30,7 @@ import Polysemy.Input (Input, input) import qualified SAML2.WebSSO as SAML import qualified Spar.Data as Data import Spar.Data.Instances () +import Spar.Options import Spar.Sem.AReqIDStore import Wire.API.User.Saml import Wire.Sem.Now (Now) diff --git a/services/spar/src/Spar/Sem/AssIDStore/Cassandra.hs b/services/spar/src/Spar/Sem/AssIDStore/Cassandra.hs index ab4a188c55..1465bf8aaf 100644 --- a/services/spar/src/Spar/Sem/AssIDStore/Cassandra.hs +++ b/services/spar/src/Spar/Sem/AssIDStore/Cassandra.hs @@ -30,6 +30,7 @@ import Polysemy.Input import qualified SAML2.WebSSO as SAML import qualified Spar.Data as Data import Spar.Data.Instances () +import Spar.Options import Spar.Sem.AssIDStore import Wire.API.User.Saml import Wire.Sem.Now (Now) diff --git a/services/spar/src/Spar/Sem/SAML2/Library.hs b/services/spar/src/Spar/Sem/SAML2/Library.hs index bb434989e3..993b0fa407 100644 --- a/services/spar/src/Spar/Sem/SAML2/Library.hs +++ b/services/spar/src/Spar/Sem/SAML2/Library.hs @@ -34,6 +34,7 @@ import Polysemy.Internal.Tactics import SAML2.WebSSO hiding (Error) import qualified SAML2.WebSSO as SAML hiding (Error) import Spar.Error (SparCustomError (..), SparError) +import 
Spar.Options import Spar.Sem.AReqIDStore (AReqIDStore) import qualified Spar.Sem.AReqIDStore as AReqIDStore import Spar.Sem.AssIDStore (AssIDStore) @@ -42,7 +43,6 @@ import Spar.Sem.IdPConfigStore (IdPConfigStore) import qualified Spar.Sem.IdPConfigStore as IdPConfigStore import Spar.Sem.SAML2 import Wire.API.User.IdentityProvider (WireIdP) -import Wire.API.User.Saml import Wire.Sem.Logger (Logger) import qualified Wire.Sem.Logger as Logger diff --git a/services/spar/test-integration/Test/Spar/APISpec.hs b/services/spar/test-integration/Test/Spar/APISpec.hs index c2e5175354..de1ccc922a 100644 --- a/services/spar/test-integration/Test/Spar/APISpec.hs +++ b/services/spar/test-integration/Test/Spar/APISpec.hs @@ -75,6 +75,7 @@ import SAML2.WebSSO.Test.Lenses import SAML2.WebSSO.Test.MockResponse import SAML2.WebSSO.Test.Util import qualified Spar.Intra.BrigApp as Intra +import Spar.Options import qualified Spar.Sem.AReqIDStore as AReqIDStore import qualified Spar.Sem.BrigAccess as BrigAccess import qualified Spar.Sem.IdPConfigStore as IdPEffect @@ -94,7 +95,6 @@ import Wire.API.User import Wire.API.User.Client import Wire.API.User.Client.Prekey import Wire.API.User.IdentityProvider -import qualified Wire.API.User.Saml as WireAPI (saml) import Wire.API.User.Scim spec :: SpecWith TestEnv @@ -151,7 +151,7 @@ specMetadata = do mkit mdpath finalizepath = do it ("metadata (" <> mdpath <> ")") $ do env <- ask - let sparHost = env ^. teOpts . to WireAPI.saml . SAML.cfgSPSsoURI . to (cs . SAML.renderURI) + let sparHost = env ^. teOpts . to saml . SAML.cfgSPSsoURI . to (cs . 
SAML.renderURI) fragments = [ "md:SPSSODescriptor", "validUntil", diff --git a/services/spar/test-integration/Test/Spar/DataSpec.hs b/services/spar/test-integration/Test/Spar/DataSpec.hs index 494035c42f..46f7fe88e6 100644 --- a/services/spar/test-integration/Test/Spar/DataSpec.hs +++ b/services/spar/test-integration/Test/Spar/DataSpec.hs @@ -33,6 +33,7 @@ import SAML2.WebSSO as SAML import Spar.App as App import Spar.Error (IdpDbError (IdpNotFound), SparCustomError (IdpDbError)) import Spar.Intra.BrigApp (veidFromUserSSOId) +import Spar.Options import qualified Spar.Sem.AReqIDStore as AReqIDStore import qualified Spar.Sem.AssIDStore as AssIDStore import qualified Spar.Sem.IdPConfigStore as IdPEffect diff --git a/services/spar/test-integration/Test/Spar/Scim/UserSpec.hs b/services/spar/test-integration/Test/Spar/Scim/UserSpec.hs index 3429af354b..342fbbb5cc 100644 --- a/services/spar/test-integration/Test/Spar/Scim/UserSpec.hs +++ b/services/spar/test-integration/Test/Spar/Scim/UserSpec.hs @@ -62,6 +62,7 @@ import qualified SAML2.WebSSO.Test.MockResponse as SAML import SAML2.WebSSO.Test.Util.TestSP (makeSampleIdPMetadata) import qualified SAML2.WebSSO.Test.Util.Types as SAML import qualified Spar.Intra.BrigApp as Intra +import Spar.Options import Spar.Scim import Spar.Scim.Types (normalizeLikeStored) import qualified Spar.Scim.User as SU @@ -89,7 +90,6 @@ import Wire.API.User hiding (scimExternalId) import Wire.API.User.IdentityProvider (IdP) import qualified Wire.API.User.IdentityProvider as User import Wire.API.User.RichInfo -import qualified Wire.API.User.Saml as Spar.Types import qualified Wire.API.User.Scim as Spar.Types import qualified Wire.API.User.Search as Search @@ -1585,7 +1585,7 @@ testScimSideIsUpdated = do liftIO $ updatedUser `shouldBe` storedUser' -- Check that the updated user also matches the data that we sent with -- 'updateUser' - richInfoLimit <- view (teOpts . to Spar.Types.richInfoLimit) + richInfoLimit <- view (teOpts . 
to richInfoLimit) liftIO $ do Right (Scim.value (Scim.thing storedUser')) `shouldBe` (whatSparReturnsFor idp richInfoLimit (setPreferredLanguage defLang user') <&> setDefaultRoleIfEmpty) Scim.id (Scim.thing storedUser') `shouldBe` Scim.id (Scim.thing storedUser) @@ -1641,7 +1641,7 @@ testUpdateSameHandle = do storedUser' <- getUser tok userid liftIO $ updatedUser `shouldBe` storedUser' -- Check that the updated user also matches the data that we sent with 'updateUser' - richInfoLimit <- view (teOpts . to Spar.Types.richInfoLimit) + richInfoLimit <- view (teOpts . to richInfoLimit) liftIO $ do Right (Scim.value (Scim.thing storedUser')) `shouldBe` (whatSparReturnsFor idp richInfoLimit (setPreferredLanguage defLang user') <&> setDefaultRoleIfEmpty) Scim.id (Scim.thing storedUser') `shouldBe` Scim.id (Scim.thing storedUser) diff --git a/services/spar/test-integration/Util/Core.hs b/services/spar/test-integration/Util/Core.hs index 50505ae997..b28898d87a 100644 --- a/services/spar/test-integration/Util/Core.hs +++ b/services/spar/test-integration/Util/Core.hs @@ -183,7 +183,7 @@ import qualified Spar.App as Spar import Spar.CanonicalInterpreter import Spar.Error (SparError) import qualified Spar.Intra.BrigApp as Intra -import qualified Spar.Options +import Spar.Options import Spar.Run import qualified Spar.Sem.IdPConfigStore as IdPConfigStore import qualified Spar.Sem.SAMLUserStore as SAMLUserStore @@ -215,7 +215,6 @@ import qualified Wire.API.User as User import Wire.API.User.Activation import Wire.API.User.Auth hiding (Cookie) import Wire.API.User.IdentityProvider -import Wire.API.User.Saml import Wire.API.User.Scim (runValidExternalIdEither) import Wire.Sem.Logger.TinyLog diff --git a/services/spar/test-integration/Util/Types.hs b/services/spar/test-integration/Util/Types.hs index 04bbfa2978..ba3abc8615 100644 --- a/services/spar/test-integration/Util/Types.hs +++ b/services/spar/test-integration/Util/Types.hs @@ -54,10 +54,10 @@ import Imports import 
SAML2.WebSSO.Types.TH (deriveJSONOptions)
 import Spar.API ()
 import qualified Spar.App as Spar
+import Spar.Options
 import Test.Hspec (pendingWith)
 import Util.Options
 import Wire.API.User.IdentityProvider (WireIdPAPIVersion)
-import Wire.API.User.Saml

 type BrigReq = Request -> Request
diff --git a/tools/fedcalls/.ormolu b/tools/fedcalls/.ormolu
new file mode 120000
index 0000000000..157b212d7c
--- /dev/null
+++ b/tools/fedcalls/.ormolu
@@ -0,0 +1 @@
+../../.ormolu
\ No newline at end of file
diff --git a/tools/fedcalls/README.md b/tools/fedcalls/README.md
new file mode 100644
index 0000000000..e43f95e14a
--- /dev/null
+++ b/tools/fedcalls/README.md
@@ -0,0 +1,31 @@
+our swagger docs contain information about which end-points call
+which federation end-points internally. this command line tool
+extracts that information from the swagger json and converts it into
+two files: dot (for feeding into graphviz), and csv.
+
+### try it out
+
+```
+cabal run fedcalls
+ls wire-fedcalls.*  # these names are hard-coded (sorry!)
+dot -Tpng wire-fedcalls.dot > wire-fedcalls.png
+```
+
+`dot` layouts only work for small data sets (at least without tweaking). for a better layout, paste the dot file into [sketchviz](https://sketchviz.com/new).
+
+### links
+
+- `./example.png`
+- https://sketchviz.com/new
+- https://graphviz.org/doc/info/lang.html
+- `/libs/wire-api/src/Wire/API/MakesFederatedCall.hs`
+
+### swagger-ui
+
+you can get the same data for the public API in the swagger-ui output. just load the page, open your javascript console, and type:
+
+```
+window.ui.getConfigs().showExtensions = true
+```
+
+then drill down on things as usual, and you'll see the federated calls.
diff --git a/tools/fedcalls/default.nix b/tools/fedcalls/default.nix
new file mode 100644
index 0000000000..1fa52660c6
--- /dev/null
+++ b/tools/fedcalls/default.nix
@@ -0,0 +1,38 @@
+# WARNING: GENERATED FILE, DO NOT EDIT.
+# This file is generated by running hack/bin/generate-local-nix-packages.sh and +# must be regenerated whenever local packages are added or removed, or +# dependencies are added or removed. +{ mkDerivation +, aeson +, base +, containers +, gitignoreSource +, imports +, insert-ordered-containers +, language-dot +, lib +, swagger2 +, text +, wire-api +}: +mkDerivation { + pname = "fedcalls"; + version = "1.0.0"; + src = gitignoreSource ./.; + isLibrary = false; + isExecutable = true; + executableHaskellDepends = [ + aeson + base + containers + imports + insert-ordered-containers + language-dot + swagger2 + text + wire-api + ]; + description = "Generate a dot file from swagger docs representing calls to federated instances"; + license = lib.licenses.agpl3Only; + mainProgram = "fedcalls"; +} diff --git a/tools/fedcalls/example.png b/tools/fedcalls/example.png new file mode 100644 index 0000000000..26bc63134f Binary files /dev/null and b/tools/fedcalls/example.png differ diff --git a/tools/fedcalls/fedcalls.cabal b/tools/fedcalls/fedcalls.cabal new file mode 100644 index 0000000000..2e42d6f9bb --- /dev/null +++ b/tools/fedcalls/fedcalls.cabal @@ -0,0 +1,74 @@ +cabal-version: 1.12 +name: fedcalls +version: 1.0.0 +synopsis: + Generate a dot file from swagger docs representing calls to federated instances. 
+ +category: Network +author: Wire Swiss GmbH +maintainer: Wire Swiss GmbH +copyright: (c) 2020 Wire Swiss GmbH +license: AGPL-3 +build-type: Simple + +executable fedcalls + main-is: Main.hs + hs-source-dirs: src + default-extensions: + NoImplicitPrelude + AllowAmbiguousTypes + BangPatterns + ConstraintKinds + DataKinds + DefaultSignatures + DeriveFunctor + DeriveGeneric + DeriveLift + DeriveTraversable + DerivingStrategies + DerivingVia + EmptyCase + FlexibleContexts + FlexibleInstances + FunctionalDependencies + GADTs + InstanceSigs + KindSignatures + LambdaCase + MultiParamTypeClasses + MultiWayIf + NamedFieldPuns + OverloadedStrings + PackageImports + PatternSynonyms + PolyKinds + QuasiQuotes + RankNTypes + ScopedTypeVariables + StandaloneDeriving + TupleSections + TypeApplications + TypeFamilies + TypeFamilyDependencies + TypeOperators + UndecidableInstances + ViewPatterns + + ghc-options: + -O2 -Wall -Wincomplete-uni-patterns -Wincomplete-record-updates + -Wpartial-fields -fwarn-tabs -optP-Wno-nonportable-include-path + -funbox-strict-fields -threaded -with-rtsopts=-N -with-rtsopts=-T + -rtsopts + + build-depends: + aeson + , base + , containers + , imports + , insert-ordered-containers + , language-dot + , swagger2 + , text + , wire-api + + default-language: Haskell2010 diff --git a/tools/fedcalls/src/Main.hs b/tools/fedcalls/src/Main.hs new file mode 100644 index 0000000000..7a717e75ef --- /dev/null +++ b/tools/fedcalls/src/Main.hs @@ -0,0 +1,220 @@ +{-# LANGUAGE OverloadedStrings #-} + +-- This file is part of the Wire Server implementation. +-- +-- Copyright (C) 2022 Wire Swiss GmbH +-- +-- This program is free software: you can redistribute it and/or modify it under +-- the terms of the GNU Affero General Public License as published by the Free +-- Software Foundation, either version 3 of the License, or (at your option) any +-- later version. 
+-- +-- This program is distributed in the hope that it will be useful, but WITHOUT +-- ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS +-- FOR A PARTICULAR PURPOSE. See the GNU Affero General Public License for more +-- details. +-- +-- You should have received a copy of the GNU Affero General Public License along +-- with this program. If not, see . + +module Main + ( main, + ) +where + +import Control.Exception (assert) +import Data.Aeson as A +import qualified Data.Aeson.Types as A +import qualified Data.HashMap.Strict.InsOrd as HM +import qualified Data.Map as M +import Data.Swagger + ( PathItem, + Swagger, + _operationExtensions, + _pathItemDelete, + _pathItemGet, + _pathItemHead, + _pathItemOptions, + _pathItemPatch, + _pathItemPost, + _pathItemPut, + _swaggerPaths, + ) +import Imports +import Language.Dot as D +import qualified Wire.API.Routes.Internal.Brig as BrigIRoutes +import qualified Wire.API.Routes.Public.Brig as BrigRoutes +import qualified Wire.API.Routes.Public.Cannon as CannonRoutes +import qualified Wire.API.Routes.Public.Cargohold as CargoholdRoutes +import qualified Wire.API.Routes.Public.Galley as GalleyRoutes +import qualified Wire.API.Routes.Public.Gundeck as GundeckRoutes +import qualified Wire.API.Routes.Public.Proxy as ProxyRoutes +-- import qualified Wire.API.Routes.Internal.Cannon as CannonIRoutes +-- import qualified Wire.API.Routes.Internal.Cargohold as CargoholdIRoutes +-- import qualified Wire.API.Routes.Internal.LegalHold as LegalHoldIRoutes +import qualified Wire.API.Routes.Public.Spar as SparRoutes + +------------------------------ + +main :: IO () +main = do + writeFile "wire-fedcalls.dot" . D.renderDot . mkDotGraph $ calls + writeFile "wire-fedcalls.csv" . 
toCsv $ calls + +calls :: [MakesCallTo] +calls = assert (calls' == nub calls') calls' + where + calls' = mconcat $ parse <$> swaggers + +swaggers :: [Swagger] +swaggers = + [ -- TODO: introduce allSwaggerDocs in wire-api that collects these for all + -- services, use that in /services/brig/src/Brig/API/Public.hs instead of + -- doing it by hand. + + BrigRoutes.brigSwagger, -- TODO: s/brigSwagger/swaggerDoc/ like everybody else! + CannonRoutes.swaggerDoc, + CargoholdRoutes.swaggerDoc, + GalleyRoutes.swaggerDoc, + GundeckRoutes.swaggerDoc, + ProxyRoutes.swaggerDoc, + SparRoutes.swaggerDoc, + -- TODO: collect all internal apis somewhere else (brig?), and expose them + -- via an internal swagger api end-point. + + BrigIRoutes.swaggerDoc + -- CannonIRoutes.swaggerDoc, + -- CargoholdIRoutes.swaggerDoc, + -- LegalHoldIRoutes.swaggerDoc + ] + +------------------------------ + +data MakesCallTo = MakesCallTo + { -- who is calling? + sourcePath :: String, + sourceMethod :: String, + -- where does the call go? + targetComp :: String, + targetName :: String + } + deriving (Eq, Show) + +------------------------------ + +parse :: Swagger -> [MakesCallTo] +parse = + mconcat + . fmap parseOperationExtensions + . mconcat + . fmap flattenPathItems + . HM.toList + . _swaggerPaths + +-- | extract path, method, and operation extensions +flattenPathItems :: (FilePath, PathItem) -> [((FilePath, String), HM.InsOrdHashMap Text Value)] +flattenPathItems (path, item) = + filter ((/= mempty) . snd) $ + catMaybes + [ ((path, "get"),) . _operationExtensions <$> _pathItemGet item, + ((path, "put"),) . _operationExtensions <$> _pathItemPut item, + ((path, "post"),) . _operationExtensions <$> _pathItemPost item, + ((path, "delete"),) . _operationExtensions <$> _pathItemDelete item, + ((path, "options"),) . _operationExtensions <$> _pathItemOptions item, + ((path, "head"),) . _operationExtensions <$> _pathItemHead item, + ((path, "patch"),) . 
_operationExtensions <$> _pathItemPatch item + ] + +parseOperationExtensions :: ((FilePath, String), HM.InsOrdHashMap Text Value) -> [MakesCallTo] +parseOperationExtensions ((path, method), hm) = uncurry (MakesCallTo path method) <$> findCallsFedInfo hm + +findCallsFedInfo :: HM.InsOrdHashMap Text Value -> [(String, String)] +findCallsFedInfo hm = case A.parse parseJSON <$> HM.lookup "wire-makes-federated-call-to" hm of + Just (A.Success (fedcalls :: [(String, String)])) -> fedcalls + Just bad -> error $ "invalid extension `wire-makes-federated-call-to`: expected `[(comp, name), ...]`, got " <> show bad + Nothing -> [] + +------------------------------ + +-- | (this function can be simplified by tossing the serial numbers for nodes, but they might +-- be useful for fine-tuning the output or rendering later.) +-- +-- the layout isn't very useful on realistic data sets. maybe we can tweak it with +-- [layers](https://www.graphviz.org/docs/attr-types/layerRange/)? +mkDotGraph :: [MakesCallTo] -> D.Graph +mkDotGraph inbound = Graph StrictGraph DirectedGraph Nothing (mods <> nodes <> edges) + where + mods = + [ AttributeStatement GraphAttributeStatement [AttributeSetValue (NameId "rankdir") (NameId "LR")], + AttributeStatement NodeAttributeStatement [AttributeSetValue (NameId "shape") (NameId "rectangle")], + AttributeStatement EdgeAttributeStatement [AttributeSetValue (NameId "style") (NameId "dashed")] + ] + nodes = + [ SubgraphStatement (NewSubgraph Nothing (mkCallingNode <$> M.toList callingNodes)), + SubgraphStatement (NewSubgraph Nothing (mkCalledNode <$> M.toList calledNodes)) + ] + edges = mkEdge <$> inbound + + itemSourceNode :: MakesCallTo -> String + itemSourceNode (MakesCallTo path method _ _) = method <> " " <> path + + itemTargetNode :: MakesCallTo -> String + itemTargetNode (MakesCallTo _ _ comp name) = "[" <> comp <> "]:" <> name + + callingNodes :: Map String Integer + callingNodes = + foldl + (\mp (i, caller) -> M.insert caller i mp) + mempty + ((zip 
[0 ..] . nub $ itemSourceNode <$> inbound) :: [(Integer, String)]) + + calledNodes :: Map String Integer + calledNodes = + foldl + (\mp (i, called) -> M.insert called i mp) + mempty + ((zip [(fromIntegral $ M.size callingNodes) ..] . nub $ itemTargetNode <$> inbound) :: [(Integer, String)]) + + mkCallingNode :: (String, Integer) -> Statement + mkCallingNode n = + NodeStatement (mkCallingNodeId n) [] + + mkCallingNodeId :: (String, Integer) -> NodeId + mkCallingNodeId (caller, i) = + NodeId (NameId . show $ show i <> ": " <> caller) (Just (PortC CompassW)) + + mkCalledNode :: (String, Integer) -> Statement + mkCalledNode n = + NodeStatement (mkCalledNodeId n) [] + + mkCalledNodeId :: (String, Integer) -> NodeId + mkCalledNodeId (callee, i) = + NodeId (NameId . show $ show i <> ": " <> callee) (Just (PortC CompassE)) + + mkEdge :: MakesCallTo -> Statement + mkEdge item = + EdgeStatement + [ ENodeId NoEdge (mkCallingNodeId (caller, callerId)), + ENodeId DirectedEdge (mkCalledNodeId (callee, calleeId)) + ] + [] + where + caller = itemSourceNode item + callee = itemTargetNode item + callerId = fromMaybe (error "impossible") $ M.lookup caller callingNodes + calleeId = fromMaybe (error "impossible") $ M.lookup callee calledNodes + +------------------------------ + +toCsv :: [MakesCallTo] -> String +toCsv = + intercalate "\n" + . fmap (intercalate ",") + . addhdr + . fmap dolines + where + addhdr :: [[String]] -> [[String]] + addhdr = (["source method", "source path", "target component", "target name"] :) + + dolines :: MakesCallTo -> [String] + dolines (MakesCallTo spath smeth tcomp tname) = [smeth, spath, tcomp, tname]
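As a sanity check on the CSV output of `fedcalls`, the shape produced by `toCsv` above can be sketched standalone. `MakesCallTo` and the CSV logic below mirror `tools/fedcalls/src/Main.hs`; the sample path and endpoint names are made up for illustration only:

```haskell
-- Standalone sketch of the CSV rendering in tools/fedcalls/src/Main.hs.
-- The sample MakesCallTo row is hypothetical.
import Data.List (intercalate)

data MakesCallTo = MakesCallTo
  { sourcePath :: String,
    sourceMethod :: String,
    targetComp :: String,
    targetName :: String
  }
  deriving (Eq, Show)

toCsv :: [MakesCallTo] -> String
toCsv =
  intercalate "\n"
    . fmap (intercalate ",")
    -- header row first, then one row per federated call
    . (["source method", "source path", "target component", "target name"] :)
    . fmap (\(MakesCallTo spath smeth tcomp tname) -> [smeth, spath, tcomp, tname])

main :: IO ()
main = putStrLn (toCsv [MakesCallTo "/conversations/list" "post" "galley" "list-conversations"])
-- prints:
--   source method,source path,target component,target name
--   post,/conversations/list,galley,list-conversations
```

Note that the header puts the method before the path, matching the field order emitted per row rather than the field order of the record.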