diff --git a/docs/src/main/sphinx/connector/jmx.rst b/docs/src/main/sphinx/connector/jmx.rst
index fe10de75f4c9..780d7810ec29 100644
--- a/docs/src/main/sphinx/connector/jmx.rst
+++ b/docs/src/main/sphinx/connector/jmx.rst
@@ -16,7 +16,7 @@ Configuration
 -------------
 
 To configure the JMX connector, create a catalog properties file
-``etc/catalog/jmx.properties`` with the following contents:
+``etc/catalog/example.properties`` with the following contents:
 
 .. code-block:: text
 
@@ -62,14 +62,14 @@ The JMX connector provides two schemas.
 The first one is ``current`` that contains every MBean from every node in the
 Trino cluster. You can see all of the available MBeans by running ``SHOW TABLES``::
 
-    SHOW TABLES FROM jmx.current;
+    SHOW TABLES FROM example.current;
 
 MBean names map to non-standard table names, and must be quoted with
 double quotes when referencing them in a query. For example, the
 following query shows the JVM version of every node::
 
     SELECT node, vmname, vmversion
-    FROM jmx.current."java.lang:type=runtime";
+    FROM example.current."java.lang:type=runtime";
 
 .. code-block:: text
 
@@ -82,7 +82,7 @@ The following query shows the open and maximum file descriptor counts for
 each node::
 
     SELECT openfiledescriptorcount, maxfiledescriptorcount
-    FROM jmx.current."java.lang:type=operatingsystem";
+    FROM example.current."java.lang:type=operatingsystem";
 
 .. code-block:: text
 
@@ -96,7 +96,7 @@ This allows matching several MBean objects within a single query.
 The following returns information from the different Trino memory pools on
 each node::
 
     SELECT freebytes, node, object_name
-    FROM jmx.current."trino.memory:*type=memorypool*";
+    FROM example.current."trino.memory:*type=memorypool*";
 
 .. code-block:: text
 
@@ -111,7 +111,7 @@ The ``history`` schema contains the list of tables configured in the connector p
 The tables have the same columns as those in the current schema, but with an
 additional timestamp column that stores the time at which the snapshot was
 taken::
 
-    SELECT "timestamp", "uptime" FROM jmx.history."java.lang:type=runtime";
+    SELECT "timestamp", "uptime" FROM example.history."java.lang:type=runtime";
 
 .. code-block:: text
 
diff --git a/docs/src/main/sphinx/connector/kafka.rst b/docs/src/main/sphinx/connector/kafka.rst
index c368d0c3c771..c4f7e848714e 100644
--- a/docs/src/main/sphinx/connector/kafka.rst
+++ b/docs/src/main/sphinx/connector/kafka.rst
@@ -39,8 +39,8 @@ Configuration
 -------------
 
 To configure the Kafka connector, create a catalog properties file
-``etc/catalog/kafka.properties`` with the following content,
-replacing the properties as appropriate.
+``etc/catalog/example.properties`` with the following content, replacing the
+properties as appropriate.
 
 In some cases, such as when using specialized authentication methods, it is necessary
 to specify additional Kafka client properties in order to access your Kafka cluster. To do so,
@@ -627,9 +627,9 @@ for a Kafka message:
 .. code-block:: json
 
     {
-        "tableName": "your-table-name",
-        "schemaName": "your-schema-name",
-        "topicName": "your-topic-name",
+        "tableName": "example_table_name",
+        "schemaName": "example_schema_name",
+        "topicName": "example_topic_name",
         "key": { "..." },
         "message": {
             "dataFormat": "raw",
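The hunks in this file elide the ``fields`` array of each table definition. For
orientation, a complete raw-format definition might look like the following
sketch; the field names, SQL types, and byte offsets are illustrative and not
taken from the patch:

.. code-block:: json

    {
        "tableName": "example_table_name",
        "schemaName": "example_schema_name",
        "topicName": "example_topic_name",
        "key": { "..." },
        "message": {
            "dataFormat": "raw",
            "fields": [
                {
                    "name": "user_id",
                    "type": "BIGINT",
                    "dataFormat": "LONG",
                    "mapping": "0"
                },
                {
                    "name": "score",
                    "type": "DOUBLE",
                    "dataFormat": "DOUBLE",
                    "mapping": "8"
                }
            ]
        }
    }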
@@ -717,9 +717,9 @@ The following is an example CSV field definition in a `table definition file
 .. code-block:: json
 
     {
-        "tableName": "your-table-name",
-        "schemaName": "your-schema-name",
-        "topicName": "your-topic-name",
+        "tableName": "example_table_name",
+        "schemaName": "example_schema_name",
+        "topicName": "example_topic_name",
         "key": { "..." },
         "message": {
             "dataFormat": "csv",
@@ -836,9 +836,9 @@ The following is an example JSON field definition in a `table definition file
 .. code-block:: json
 
     {
-        "tableName": "your-table-name",
-        "schemaName": "your-schema-name",
-        "topicName": "your-topic-name",
+        "tableName": "example_table_name",
+        "schemaName": "example_schema_name",
+        "topicName": "example_topic_name",
         "key": { "..." },
         "message": {
             "dataFormat": "json",
@@ -922,9 +922,9 @@ The following example shows an Avro field definition in a `table definition file
 .. code-block:: json
 
     {
-        "tableName": "your-table-name",
-        "schemaName": "your-schema-name",
-        "topicName": "your-topic-name",
+        "tableName": "example_table_name",
+        "schemaName": "example_schema_name",
+        "topicName": "example_topic_name",
         "key": { "..." },
         "message":
         {
@@ -1046,9 +1046,9 @@ file <#table-definition-files>`__ for a Kafka message:
 .. code-block:: json
 
     {
-        "tableName": "your-table-name",
-        "schemaName": "your-schema-name",
-        "topicName": "your-topic-name",
+        "tableName": "example_table_name",
+        "schemaName": "example_schema_name",
+        "topicName": "example_topic_name",
         "key": { "..." },
         "message":
         {
diff --git a/docs/src/main/sphinx/connector/kinesis.rst b/docs/src/main/sphinx/connector/kinesis.rst
index 78959072d99f..ce11ef92779c 100644
--- a/docs/src/main/sphinx/connector/kinesis.rst
+++ b/docs/src/main/sphinx/connector/kinesis.rst
@@ -23,8 +23,9 @@ stored on Amazon S3 (preferred), or stored in a local directory on each Trino no
 This connector is a **read-only** connector. It can only fetch data from
 Kinesis streams, but cannot create streams or push data into existing streams.
 
-To configure the Kinesis connector, create a catalog properties file ``etc/catalog/kinesis.properties``
-with the following contents, replacing the properties as appropriate:
+To configure the Kinesis connector, create a catalog properties file
+``etc/catalog/example.properties`` with the following contents, replacing the
+properties as appropriate:
 
 .. code-block:: text
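The catalog file contents are elided from the hunk above. A minimal Kinesis
catalog might look like the following sketch; the credential values and the S3
location are placeholders:

.. code-block:: text

    connector.name=kinesis
    kinesis.access-key=ExampleAccessKey
    kinesis.secret-key=ExampleSecretKey
    kinesis.aws-region=us-east-1
    kinesis.table-description-location=s3://example-bucket/tabledefs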
diff --git a/docs/src/main/sphinx/connector/kudu.rst b/docs/src/main/sphinx/connector/kudu.rst
index d7bf385f6660..436188f9fbcf 100644
--- a/docs/src/main/sphinx/connector/kudu.rst
+++ b/docs/src/main/sphinx/connector/kudu.rst
@@ -98,11 +98,11 @@ The emulation of schemas is disabled by default.
 In this case all Kudu tables are part of the ``default`` schema.
 
 For example, a Kudu table named ``orders`` can be queried in Trino
-with ``SELECT * FROM kudu.default.orders`` or simple with ``SELECT * FROM orders``
-if catalog and schema are set to ``kudu`` and ``default`` respectively.
+with ``SELECT * FROM example.default.orders`` or simply with ``SELECT * FROM orders``
+if catalog and schema are set to ``example`` and ``default`` respectively.
 
 Table names can contain any characters in Kudu. In this case, use double quotes.
-E.g. To query a Kudu table named ``special.table!`` use ``SELECT * FROM kudu.default."special.table!"``.
+For example, to query a Kudu table named ``special.table!``, use ``SELECT * FROM example.default."special.table!"``.
 
 Example
 -------
@@ -110,7 +110,7 @@ Example
 
 * Create a users table in the default schema::
 
-    CREATE TABLE kudu.default.users (
+    CREATE TABLE example.default.users (
       user_id int WITH (primary_key = true),
      first_name varchar,
      last_name varchar
@@ -125,7 +125,7 @@ Example
 
 * Describe the table::
 
-    DESCRIBE kudu.default.users;
+    DESCRIBE example.default.users;
 
 .. code-block:: text
 
@@ -138,19 +138,20 @@ Example
 
 * Insert some data::
 
-    INSERT INTO kudu.default.users VALUES (1, 'Donald', 'Duck'), (2, 'Mickey', 'Mouse');
+    INSERT INTO example.default.users VALUES (1, 'Donald', 'Duck'), (2, 'Mickey', 'Mouse');
 
 * Select the inserted data::
 
-    SELECT * FROM kudu.default.users;
+    SELECT * FROM example.default.users;
 
 .. _behavior-with-schema-emulation:
 
 Behavior with schema emulation
 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
 
-If schema emulation has been enabled in the connector properties, i.e. ``etc/catalog/kudu.properties``,
-tables are mapped to schemas depending on some conventions.
+If schema emulation has been enabled in the connector properties, i.e.
+``etc/catalog/example.properties``, tables are mapped to schemas based on the
+following conventions.
 
 * With ``kudu.schema-emulation.enabled=true`` and ``kudu.schema-emulation.prefix=``,
   the mapping works like:
@@ -424,7 +425,7 @@ Example:
 
 .. code-block:: sql
 
-  CREATE TABLE mytable (
+  CREATE TABLE example_table (
     name varchar WITH (primary_key = true, encoding = 'dictionary', compression = 'snappy'),
     index bigint WITH (nullable = true, encoding = 'runlength', compression = 'lz4'),
     comment varchar WITH (nullable = true, encoding = 'plain', compression = 'default'),
@@ -441,7 +442,7 @@ You can specify the same column properties as on creating a table.
 
 Example::
 
-  ALTER TABLE mytable ADD COLUMN extraInfo varchar WITH (nullable = true, encoding = 'plain')
+  ALTER TABLE example_table ADD COLUMN extraInfo varchar WITH (nullable = true, encoding = 'plain')
 
 See also `Column Properties`_.
 
@@ -452,9 +453,9 @@ See also `Column Properties`_.
 Procedures
 ----------
 
-* ``CALL kudu.system.add_range_partition`` see :ref:`managing-range-partitions`
+* ``CALL example.system.add_range_partition`` see :ref:`managing-range-partitions`
 
-* ``CALL kudu.system.drop_range_partition`` see :ref:`managing-range-partitions`
+* ``CALL example.system.drop_range_partition`` see :ref:`managing-range-partitions`
 
 Partitioning design
 ^^^^^^^^^^^^^^^^^^^
@@ -481,7 +482,7 @@ primary key.
 
 Example::
 
-  CREATE TABLE mytable (
+  CREATE TABLE example_table (
     col1 varchar WITH (primary_key=true),
     col2 varchar WITH (primary_key=true),
     ...
@@ -499,7 +500,7 @@ of table properties named ``partition_by_second_hash_columns`` and
 
 Example::
 
-  CREATE TABLE mytable (
+  CREATE TABLE example_table (
     col1 varchar WITH (primary_key=true),
     col2 varchar WITH (primary_key=true),
     ...
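The two ``CREATE TABLE`` statements above are elided in the patch context. A
complete statement combining both hash-partition levels might look like the
following sketch; the column names and bucket counts are illustrative:

.. code-block:: sql

    CREATE TABLE example_table (
      col1 varchar WITH (primary_key=true),
      col2 varchar WITH (primary_key=true),
      col3 varchar
    ) WITH (
      partition_by_hash_columns = ARRAY['col1'],
      partition_by_hash_buckets = 2,
      partition_by_second_hash_columns = ARRAY['col2'],
      partition_by_second_hash_buckets = 3
    )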
@@ -595,13 +596,13 @@ partition.
 
 .. code-block:: sql
 
-  CALL kudu.system.add_range_partition(<schema>, <table>, <range_partition_as_json_string>)
+  CALL example.system.add_range_partition(<schema>, <table>, <range_partition_as_json_string>)
 
 - dropping a range partition
 
 .. code-block:: sql
 
-  CALL kudu.system.drop_range_partition(<schema>, <table>, <range_partition_as_json_string>)
+  CALL example.system.drop_range_partition(<schema>, <table>, <range_partition_as_json_string>)
 
 - ``<schema>``: schema of the table
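As a usage sketch for the renamed procedure, dropping the partition added in
the example below would look like this; the schema, table, and bounds mirror
that example:

.. code-block:: sql

    CALL example.system.drop_range_partition('example_schema', 'events', '{"lower": "2018-01-01", "upper": "2018-06-01"}')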
@@ -638,10 +639,10 @@ partition.
 
 Example::
 
-  CALL kudu.system.add_range_partition('myschema', 'events', '{"lower": "2018-01-01", "upper": "2018-06-01"}')
+  CALL example.system.add_range_partition('example_schema', 'events', '{"lower": "2018-01-01", "upper": "2018-06-01"}')
 
 This adds a range partition for a table ``events`` in the schema
-``myschema`` with the lower bound ``2018-01-01``, more exactly
-``2018-01-01T00:00:00.000``, and the upper bound ``2018-07-01``.
+``example_schema`` with the lower bound ``2018-01-01``, more precisely
+``2018-01-01T00:00:00.000``, and the upper bound ``2018-06-01``.
 
 Use the SQL statement ``SHOW CREATE TABLE`` to query the existing
diff --git a/docs/src/main/sphinx/connector/localfile.rst b/docs/src/main/sphinx/connector/localfile.rst
index 37a7be154f2a..736146b11723 100644
--- a/docs/src/main/sphinx/connector/localfile.rst
+++ b/docs/src/main/sphinx/connector/localfile.rst
@@ -8,8 +8,9 @@ the local file system of each worker.
 Configuration
 -------------
 
-To configure the local file connector, create a catalog properties file
-under ``etc/catalog`` named, for example, ``localfile.properties`` with the following contents:
+To configure the local file connector, create a catalog properties file under
+``etc/catalog`` named, for example, ``example.properties`` with the following
+contents:
 
 .. code-block:: text
 
@@ -32,7 +33,7 @@ Local file connector schemas and tables
 The local file connector provides a single schema named ``logs``.
 You can see all the available tables by running ``SHOW TABLES``::
 
-    SHOW TABLES FROM localfile.logs;
+    SHOW TABLES FROM example.logs;
 
 ``http_request_log``
 ^^^^^^^^^^^^^^^^^^^^
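As a final usage sketch for the renamed catalog, the single log table can be
inspected and then queried; only the ``example`` catalog name and the
``http_request_log`` table shown above are assumed:

.. code-block:: sql

    DESCRIBE example.logs.http_request_log;
    SELECT * FROM example.logs.http_request_log LIMIT 10;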