12 changes: 6 additions & 6 deletions docs/src/main/sphinx/connector/jmx.rst
@@ -16,7 +16,7 @@ Configuration
-------------

To configure the JMX connector, create a catalog properties file
``etc/catalog/jmx.properties`` with the following contents:
``etc/catalog/example.properties`` with the following contents:

.. code-block:: text

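    # Minimal catalog contents; a sketch assuming the standard JMX connector setup:
    connector.name=jmx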
@@ -62,14 +62,14 @@ The JMX connector provides two schemas.
The first one is ``current``, which contains every MBean from every node in the Trino
cluster. You can see all of the available MBeans by running ``SHOW TABLES``::

SHOW TABLES FROM jmx.current;
SHOW TABLES FROM example.current;

MBean names map to non-standard table names, and must be quoted with
double quotes when referencing them in a query. For example, the
following query shows the JVM version of every node::

SELECT node, vmname, vmversion
FROM jmx.current."java.lang:type=runtime";
FROM example.current."java.lang:type=runtime";

.. code-block:: text

@@ -82,7 +82,7 @@ The following query shows the open and maximum file descriptor counts
for each node::

SELECT openfiledescriptorcount, maxfiledescriptorcount
FROM jmx.current."java.lang:type=operatingsystem";
FROM example.current."java.lang:type=operatingsystem";

.. code-block:: text

@@ -96,7 +96,7 @@ This allows matching several MBean objects within a single query. The following
returns information from the different Trino memory pools on each node::

SELECT freebytes, node, object_name
FROM jmx.current."trino.memory:*type=memorypool*";
FROM example.current."trino.memory:*type=memorypool*";

.. code-block:: text

@@ -111,7 +111,7 @@ The ``history`` schema contains the list of tables configured in the connector p
The tables have the same columns as those in the ``current`` schema, but with an additional
timestamp column that stores the time at which the snapshot was taken::

SELECT "timestamp", "uptime" FROM jmx.history."java.lang:type=runtime";
SELECT "timestamp", "uptime" FROM example.history."java.lang:type=runtime";

.. code-block:: text

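Tables appear in the ``history`` schema only after they are listed in the
catalog configuration. A sketch of such a file, assuming the standard dump
properties (the dumped table and retention values are placeholders):

.. code-block:: text

    connector.name=jmx
    jmx.dump-tables=java.lang:type=Runtime
    jmx.dump-period=10s
    jmx.max-entries=86400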
34 changes: 17 additions & 17 deletions docs/src/main/sphinx/connector/kafka.rst
@@ -39,8 +39,8 @@ Configuration
-------------

To configure the Kafka connector, create a catalog properties file
``etc/catalog/kafka.properties`` with the following content,
replacing the properties as appropriate.
``etc/catalog/example.properties`` with the following content, replacing the
properties as appropriate.

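A minimal sketch of such a file, assuming the standard ``kafka.nodes`` and
``kafka.table-names`` properties (the host names and table names are
placeholders):

.. code-block:: text

    connector.name=kafka
    kafka.table-names=table1,table2
    kafka.nodes=host1:9092,host2:9092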
In some cases, such as when using specialized authentication methods, it is
necessary to specify additional Kafka client properties to access your Kafka
cluster. To do so,
@@ -627,9 +627,9 @@ for a Kafka message:
.. code-block:: json

{
"tableName": "your-table-name",
"schemaName": "your-schema-name",
"topicName": "your-topic-name",
"tableName": "example_table_name",
"schemaName": "example_schema_name",
"topicName": "example_topic_name",
"key": { "..." },
"message": {
"dataFormat": "raw",
@@ -717,9 +717,9 @@ The following is an example CSV field definition in a `table definition file
.. code-block:: json

{
"tableName": "your-table-name",
"schemaName": "your-schema-name",
"topicName": "your-topic-name",
"tableName": "example_table_name",
"schemaName": "example_schema_name",
"topicName": "example_topic_name",
"key": { "..." },
"message": {
"dataFormat": "csv",
@@ -836,9 +836,9 @@ The following is an example JSON field definition in a `table definition file
.. code-block:: json

{
"tableName": "your-table-name",
"schemaName": "your-schema-name",
"topicName": "your-topic-name",
"tableName": "example_table_name",
"schemaName": "example_schema_name",
"topicName": "example_topic_name",
"key": { "..." },
"message": {
"dataFormat": "json",
@@ -922,9 +922,9 @@ The following example shows an Avro field definition in a `table definition file
.. code-block:: json

{
"tableName": "your-table-name",
"schemaName": "your-schema-name",
"topicName": "your-topic-name",
"tableName": "example_table_name",
"schemaName": "example_schema_name",
"topicName": "example_topic_name",
"key": { "..." },
"message":
{
@@ -1046,9 +1046,9 @@ file <#table-definition-files>`__ for a Kafka message:
.. code-block:: json

{
"tableName": "your-table-name",
"schemaName": "your-schema-name",
"topicName": "your-topic-name",
"tableName": "example_table_name",
"schemaName": "example_schema_name",
"topicName": "example_topic_name",
"key": { "..." },
"message":
{
5 changes: 3 additions & 2 deletions docs/src/main/sphinx/connector/kinesis.rst
@@ -23,8 +23,8 @@ stored on Amazon S3 (preferred), or stored in a local directory on each Trino no
This connector is **read-only**: it can fetch data from Kinesis streams,
but cannot create streams or push data into existing streams.

To configure the Kinesis connector, create a catalog properties file ``etc/catalog/kinesis.properties``
with the following contents, replacing the properties as appropriate:
To configure the Kinesis connector, create a catalog properties file
``etc/catalog/example.properties`` with the following contents, replacing the
properties as appropriate:

.. code-block:: text

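    # A sketch of typical contents; the credential values are placeholders,
    # and additional properties may be required for your setup.
    connector.name=kinesis
    kinesis.access-key=<access-key>
    kinesis.secret-key=<secret-key>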
37 changes: 19 additions & 18 deletions docs/src/main/sphinx/connector/kudu.rst
@@ -98,19 +98,19 @@ The emulation of schemas is disabled by default.
In this case, all Kudu tables are part of the ``default`` schema.

For example, a Kudu table named ``orders`` can be queried in Trino
with ``SELECT * FROM kudu.default.orders`` or simple with ``SELECT * FROM orders``
with ``SELECT * FROM example.default.orders`` or simply with ``SELECT * FROM orders``
if the catalog and schema are set to ``example`` and ``default``, respectively.

Table names in Kudu can contain any characters; quote such names with double quotes.
E.g. To query a Kudu table named ``special.table!`` use ``SELECT * FROM kudu.default."special.table!"``.
For example, to query a Kudu table named ``special.table!``, use ``SELECT * FROM example.default."special.table!"``.


Example
~~~~~~~

* Create a users table in the default schema::

CREATE TABLE kudu.default.users (
CREATE TABLE example.default.users (
user_id int WITH (primary_key = true),
first_name varchar,
last_name varchar
@@ -125,7 +125,7 @@ Example

* Describe the table::

DESCRIBE kudu.default.users;
DESCRIBE example.default.users;

.. code-block:: text

@@ -138,19 +138,20 @@ Example

* Insert some data::

INSERT INTO kudu.default.users VALUES (1, 'Donald', 'Duck'), (2, 'Mickey', 'Mouse');
INSERT INTO example.default.users VALUES (1, 'Donald', 'Duck'), (2, 'Mickey', 'Mouse');

* Select the inserted data::

SELECT * FROM kudu.default.users;
SELECT * FROM example.default.users;

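Given the two rows inserted above, the query should return:

.. code-block:: text

     user_id | first_name | last_name
    ---------+------------+-----------
           1 | Donald     | Duck
           2 | Mickey     | Mouse
    (2 rows)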
.. _behavior-with-schema-emulation:

Behavior with schema emulation
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

If schema emulation has been enabled in the connector properties, i.e. ``etc/catalog/kudu.properties``,
tables are mapped to schemas depending on some conventions.
If schema emulation has been enabled in the connector properties, that is, in
``etc/catalog/example.properties``, tables are mapped to schemas based on the
following conventions.

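For example, a catalog file that enables schema emulation with an empty
prefix, matching the first convention below (the master address is a
placeholder):

.. code-block:: text

    connector.name=kudu
    kudu.client.master-addresses=kudu-master.example.com:7051
    kudu.schema-emulation.enabled=true
    kudu.schema-emulation.prefix=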
* With ``kudu.schema-emulation.enabled=true`` and ``kudu.schema-emulation.prefix=``,
the mapping works like:
@@ -424,7 +425,7 @@ Example:

.. code-block:: sql

CREATE TABLE mytable (
CREATE TABLE example_table (
name varchar WITH (primary_key = true, encoding = 'dictionary', compression = 'snappy'),
index bigint WITH (nullable = true, encoding = 'runlength', compression = 'lz4'),
comment varchar WITH (nullable = true, encoding = 'plain', compression = 'default'),
@@ -441,7 +442,7 @@ You can specify the same column properties as on creating a table.

Example::

ALTER TABLE mytable ADD COLUMN extraInfo varchar WITH (nullable = true, encoding = 'plain')
ALTER TABLE example_table ADD COLUMN extraInfo varchar WITH (nullable = true, encoding = 'plain')

See also `Column Properties`_.

@@ -452,9 +453,9 @@ See also `Column Properties`_.
Procedures
----------

* ``CALL kudu.system.add_range_partition`` see :ref:`managing-range-partitions`
* ``CALL example.system.add_range_partition`` see :ref:`managing-range-partitions`

* ``CALL kudu.system.drop_range_partition`` see :ref:`managing-range-partitions`
* ``CALL example.system.drop_range_partition`` see :ref:`managing-range-partitions`

Partitioning design
^^^^^^^^^^^^^^^^^^^
@@ -481,7 +482,7 @@ primary key.

Example::

CREATE TABLE mytable (
CREATE TABLE example_table (
col1 varchar WITH (primary_key=true),
col2 varchar WITH (primary_key=true),
...
@@ -499,7 +500,7 @@ of table properties named ``partition_by_second_hash_columns`` and

Example::

CREATE TABLE mytable (
CREATE TABLE example_table (
col1 varchar WITH (primary_key=true),
col2 varchar WITH (primary_key=true),
...
@@ -595,13 +596,13 @@ partition.

.. code-block:: sql

CALL kudu.system.add_range_partition(<schema>, <table>, <range_partition_as_json_string>)
CALL example.system.add_range_partition(<schema>, <table>, <range_partition_as_json_string>)

- dropping a range partition

.. code-block:: sql

CALL kudu.system.drop_range_partition(<schema>, <table>, <range_partition_as_json_string>)
CALL example.system.drop_range_partition(<schema>, <table>, <range_partition_as_json_string>)

- ``<schema>``: schema of the table

@@ -638,10 +639,10 @@

Example::

CALL kudu.system.add_range_partition('myschema', 'events', '{"lower": "2018-01-01", "upper": "2018-06-01"}')
CALL example.system.add_range_partition('example_schema', 'events', '{"lower": "2018-01-01", "upper": "2018-06-01"}')

This adds a range partition for the table ``events`` in the schema
``myschema`` with the lower bound ``2018-01-01``, more exactly
``example_schema`` with the lower bound ``2018-01-01``, more precisely
``2018-01-01T00:00:00.000``, and the upper bound ``2018-06-01``.

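The partition added above can later be removed with the matching drop
procedure::

    CALL example.system.drop_range_partition('example_schema', 'events', '{"lower": "2018-01-01", "upper": "2018-06-01"}')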
Use the SQL statement ``SHOW CREATE TABLE`` to query the existing
7 changes: 4 additions & 3 deletions docs/src/main/sphinx/connector/localfile.rst
@@ -8,8 +8,9 @@ the local file system of each worker.
Configuration
-------------

To configure the local file connector, create a catalog properties file
under ``etc/catalog`` named, for example, ``localfile.properties`` with the following contents:
To configure the local file connector, create a catalog properties file under
``etc/catalog`` named, for example, ``example.properties`` with the following
contents:

.. code-block:: text

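    # A minimal sketch of the catalog contents; log location properties
    # can be added as needed for your environment.
    connector.name=localfile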
@@ -32,7 +33,7 @@ Local file connector schemas and tables
The local file connector provides a single schema named ``logs``.
You can see all the available tables by running ``SHOW TABLES``::

SHOW TABLES FROM localfile.logs;
SHOW TABLES FROM example.logs;

``http_request_log``
^^^^^^^^^^^^^^^^^^^^