statements, the connector supports the following features:
* :doc:`/sql/create-schema`, see also :ref:`delta-lake-create-schema`
* :doc:`/sql/create-table`, see also :ref:`delta-lake-create-table`
* :doc:`/sql/create-table-as`
* :doc:`/sql/drop-table`
* :doc:`/sql/alter-table`
* :doc:`/sql/drop-schema`
* :doc:`/sql/show-create-schema`
* :doc:`/sql/show-create-table`

.. _delta-lake-alter-table-execute:

ALTER TABLE EXECUTE
^^^^^^^^^^^^^^^^^^^

The connector supports the following commands for use with
:ref:`ALTER TABLE EXECUTE <alter-table-execute>`.

optimize
""""""""

The ``optimize`` command rewrites the content of the specified
table so that it is merged into fewer but larger files.
If the table is partitioned, the data compaction
acts separately on each partition selected for optimization.
This operation improves read performance.

All files with a size below the optional ``file_size_threshold``
parameter (default value ``100MB``) are merged:

.. code-block:: sql

ALTER TABLE test_table EXECUTE optimize

The following statement merges files in a table that are
under 10 megabytes in size:

.. code-block:: sql

ALTER TABLE test_table EXECUTE optimize(file_size_threshold => '10MB')

You can use a ``WHERE`` clause with the columns used to partition the table
to filter which partitions are optimized:

.. code-block:: sql

ALTER TABLE test_partitioned_table EXECUTE optimize
WHERE partition_key = 1
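
The partition filter and the size threshold can also be combined in a
single statement. A sketch, reusing the ``partition_key`` column from the
example above and assuming the same hypothetical partitioned table:

.. code-block:: sql

    ALTER TABLE test_partitioned_table EXECUTE optimize(file_size_threshold => '10MB')
    WHERE partition_key = 1

This limits the compaction to files under 10 megabytes within the
selected partition, leaving other partitions untouched.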

.. _delta-lake-special-columns:

Special columns