diff --git a/docs/src/main/sphinx/admin/properties-client-protocol.md b/docs/src/main/sphinx/admin/properties-client-protocol.md
index 1f27833807b2..abc78a704844 100644
--- a/docs/src/main/sphinx/admin/properties-client-protocol.md
+++ b/docs/src/main/sphinx/admin/properties-client-protocol.md
@@ -165,11 +165,18 @@ The object storage location to use for spooling segments. Must be accessible
 by the coordinator and all workers. With the `protocol.spooling.retrieval-mode`
 retrieval modes `STORAGE` and `COORDINATOR_STORAGE_REDIRECT` the location must
 also be accessible by all clients. Valid location values vary by object storage
-type, and typically follow a pattern of `scheme://bucketName/path/`.
+type, and follow storage-specific patterns.

 Examples:

-* `s3://my-spooling-bucket/my-segments/`
+* **S3:** `s3://my-spooling-bucket/my-segments/`
+* **Azure Storage:** `abfss://my-spooling-container@account.dfs.core.windows.net/my-segments/`
+* **Google Cloud Storage:** `gs://my-spooling-bucket/my-segments/`
+
+:::{note}
+For Azure Storage, use the ABFS format with hierarchical namespace enabled.
+The legacy WASB format (`wasbs://` or `wasb://`) is also supported but deprecated.
+:::

 :::{caution}
 The specified object storage location must not be used for spooling for another
@@ -177,9 +184,9 @@ Trino cluster or any object storage catalog. When using the same object storage
 for multiple services, you must use separate locations for each one. For
 example:

-* `s3://my-spooling-bucket/my-segments/cluster1-spooling`
-* `s3://my-spooling-bucket/my-segments/cluster2-spooling`
-* `s3://my-spooling-bucket/my-segments/iceberg-catalog`
+* `s3://my-spooling-bucket/my-segments/cluster1-spooling/`
+* `s3://my-spooling-bucket/my-segments/cluster2-spooling/`
+* `s3://my-spooling-bucket/my-segments/iceberg-catalog/`
 :::

 ### `fs.segment.ttl`
@@ -230,7 +237,7 @@ Interval to prune expired segments.

 ### `fs.segment.pruning.batch-size`

-- **Type:** integer
+- **Type:** [](prop-type-integer)
 - **Default value:** `250`

 Number of expired segments to prune as a single batch operation.
@@ -259,4 +266,3 @@ size limits.
 Prepared statement compression is not applied if the size gain is less than
 the configured value. Smaller statements do not benefit from compression, and
 are left uncompressed.
-
diff --git a/docs/src/main/sphinx/client/client-protocol.md b/docs/src/main/sphinx/client/client-protocol.md
index 491174325bd5..cc853fef1c8c 100644
--- a/docs/src/main/sphinx/client/client-protocol.md
+++ b/docs/src/main/sphinx/client/client-protocol.md
@@ -81,8 +81,8 @@
 Azure Storage, and Google Cloud Storage. The object storage system must provide
 good connectivity for all cluster nodes as well as any clients. Activate the
 desired system with
-`fs.s3.enabled`, `fs.azure.enabled`, or `fs.s3.enabled=true` in
-`etc/spooling-manager.properties`and configure further details using relevant
+`fs.s3.enabled`, `fs.azure.enabled`, or `fs.gcs.enabled` in
+`etc/spooling-manager.properties` and configure further details using relevant
 properties from [](prop-spooling-file-system), [](/object-storage/file-system-s3),
 [](/object-storage/file-system-azure), and [](/object-storage/file-system-gcs).
@@ -120,7 +120,7 @@ The following client drivers and client applications support the spooling protoc
 * [Trino Python client](https://github.com/trinodb/trino-python-client),
   version 0.332.0 and newer

-Refer to the documentation for other your specific client drivers and client
+Refer to the documentation for your specific client drivers and client
 applications for up to date information.

 (protocol-direct)=
@@ -142,8 +142,8 @@ characteristics, compared to the spooling protocol:

 ### Configuration

-Use of the direct protocol requires not configuration. Find optional
-configuration properties in [](prop-protocol-shared).
+Use of the direct protocol requires no configuration.
+Find optional configuration properties in [](prop-protocol-shared).

 ## Development and reference information
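
To see the configuration described by the changed text in one place, a minimal `etc/spooling-manager.properties` sketch could look roughly like the following. The bucket name, path, and region value are illustrative placeholders rather than values from this change, and the `spooling-manager.name` and `s3.region` properties are assumed from the broader spooling and S3 file system documentation ([](prop-spooling-file-system), [](/object-storage/file-system-s3)), not from the lines modified here.

```properties
# Hypothetical example only: activate S3-backed spooling and point it at a
# dedicated, cluster-specific location.
spooling-manager.name=filesystem
fs.s3.enabled=true
fs.location=s3://my-spooling-bucket/my-segments/cluster1-spooling/
# Assumed S3 file system property; see the S3 object storage documentation.
s3.region=us-east-2
```

The `fs.location` value follows the S3 pattern from the examples list and adds a per-cluster suffix, so the same bucket can serve multiple clusters as the caution note requires.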