Update databricks-labs-lsql requirement from ~=0.4.0 to >=0.4,<0.6 (#2076)

Updates the requirements on
[databricks-labs-lsql](https://github.com/databrickslabs/lsql) to permit
the latest version.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/databrickslabs/lsql/releases">databricks-labs-lsql's
releases</a>.</em></p>
<blockquote>
<h2>v0.5.0</h2>
<ul>
<li>Added Command Execution backend which uses Command Execution API on
a cluster (<a
href="https://github.com/databrickslabs/lsql/issues/95">#95</a>).
In this release, the <code>databricks-labs-lsql</code> library has been updated with
a new Command Execution backend that utilizes the Command Execution API.
A new <code>CommandExecutionBackend</code> class has been implemented,
which initializes a <code>CommandExecutor</code> instance taking a
cluster ID, workspace client, and language as parameters. The
<code>execute</code> method runs SQL commands on the specified cluster,
and the <code>fetch</code> method returns the query result as an
iterator of Row objects. The existing
<code>StatementExecutionBackend</code> class has been updated to inherit
from a new abstract base class called <code>ExecutionBackend</code>,
which includes a <code>save_table</code> method for saving data to
tables and is meant to be a common base class for both Statement and
Command Execution backends. The <code>StatementExecutionBackend</code>
class has also been updated to use the new <code>ExecutionBackend</code>
abstract class and its constructor now accepts a
<code>max_records_per_batch</code> parameter. The <code>execute</code>
and <code>fetch</code> methods have been updated to use the new
<code>_only_n_bytes</code> method for logging truncated SQL statements.
Additionally, the <code>CommandExecutionBackend</code> class provides
<code>execute</code>, <code>fetch</code>, and <code>save_table</code>
methods to execute commands on a cluster and save the results to tables
in the Databricks workspace.</li>
<li>Added basic integration with Lakeview Dashboards (<a
href="https://github.com/databrickslabs/lsql/issues/66">#66</a>).
In this release, we've added basic integration with Lakeview Dashboards
to the project, enhancing its capabilities. This includes updating the
<code>databricks-labs-blueprint</code> dependency to version 0.4.2 with
the <code>[yaml]</code> extra, allowing for additional functionality
related to handling YAML files. A new file, <code>dashboards.py</code>,
has been introduced, providing a class for interacting with Databricks
dashboards, along with methods for retrieving and saving dashboard
configurations. Additionally, a new <code>__init__.py</code> file under
the <code>src/databricks/labs/lsql/lakeview</code> directory imports all
classes and functions from the <code>model.py</code> module, providing a
foundation for further development and customization. The release also
introduces a new file, <code>model.py</code>, containing code generated
from OpenAPI specs by the Databricks SDK Generator, and a template file,
<code>model.py.tmpl</code>, used for handling JSON data during
integration with Lakeview Dashboards. A new file,
<code>polymorphism.py</code>, provides utilities for checking if a value
can be assigned to a specific type, supporting correct data typing and
formatting with Lakeview Dashboards. Furthermore, a
<code>.gitignore</code> file has been added to the
<code>tests/integration</code> directory as part of the initial steps in
adding integration testing to ensure compatibility with the Lakeview
Dashboards platform. Lastly, the <code>test_dashboards.py</code> file in
the <code>tests/integration</code> directory contains a function,
<code>test_load_dashboard(ws)</code>, which uses the
<code>Dashboards</code> class to save a dashboard from a source to a
destination path, facilitating testing during the integration
process.</li>
<li>Added dashboard-as-code functionality (<a
href="https://github.com/databrickslabs/lsql/issues/201">#201</a>).
This commit introduces dashboard-as-code functionality for the UCX
project, enabling the creation and management of dashboards using code.
The feature resolves multiple issues and includes a new
<code>create-dashboard</code> command for creating unpublished
dashboards. The functionality is available in the <code>lsql</code> lab
and allows for specifying the order and width of widgets, overriding
default widget identifiers, and supporting various SQL and markdown
header arguments. The <code>dashboard.yml</code> file is used to define
top-level metadata for the dashboard. This commit also includes
extensive documentation and examples for using the dashboard as a
library and configuring different options.</li>
<li>Automate opening integration test dashboard in debug mode (<a
href="https://github.com/databrickslabs/lsql/issues/167">#167</a>).
A new feature has been added to automatically open the integration test
dashboard in debug mode, making it easier for software engineers to
debug and troubleshoot. This has been achieved by importing the
standard-library <code>webbrowser</code> module and the
<code>is_in_debug</code> function from
<code>databricks.labs.blueprint.entrypoint</code>, and adding a check in
the <code>create</code> function to determine if the code is running in
debug mode. If it is, a dashboard URL is constructed from the workspace
configuration and dashboard ID, and then opened in a web browser using
<code>webbrowser.open</code>. This allows for a more streamlined
debugging process for the integration test dashboard. No other parts of
the code have been affected by this change.</li>
<li>Automatically tile widgets (<a
href="https://github.com/databrickslabs/lsql/issues/109">#109</a>).
In this release, we've introduced an automatic widget tiling feature for
the dashboard creation process in our open-source library. The
<code>Dashboards</code> class now includes a new class variable,
<code>_maximum_dashboard_width</code>, set to 6, representing the
maximum width allowed for each row of widgets in the dashboard. The
<code>create_dashboard</code> method has been updated to accept a new
<code>self</code> parameter, turning it into an instance method. A new
<code>_get_position</code> method has been introduced to calculate and
return the next available position for placing a widget, and a
<code>_get_width_and_height</code> method has been added to return the
width and height for a widget specification, initially handling
<code>CounterSpec</code> instances. Additionally, we've added new unit
tests to improve testing coverage, ensuring that widgets are created,
positioned, and sized correctly. These tests also cover the correct
positioning of widgets based on their order and available space, as well
as the expected width and height for each widget.</li>
<li>Bump actions/checkout from 4.1.3 to 4.1.6 (<a
href="https://github.com/databrickslabs/lsql/issues/102">#102</a>).
In the latest release, the <code>actions/checkout</code> GitHub Action has been
updated from version 4.1.3 to 4.1.6, which includes checking the
platform to set the archive extension appropriately. This release also
bumps the version of github/codeql-action from 2 to 3,
actions/setup-node from 1 to 4, and actions/upload-artifact from 2 to 4.
Additionally, the minor-actions-dependencies group was updated with two
new versions. Disabling extensions.worktreeConfig when disabling
sparse-checkout was introduced in version 4.1.4. The release notes and
changelog for this update can be found in the provided link. This commit
was made by dependabot[bot] with contributions from cory-miller and
jww3.</li>
<li>Bump actions/checkout from 4.1.6 to 4.1.7 (<a
href="https://github.com/databrickslabs/lsql/issues/151">#151</a>).
In the latest release, the <code>actions/checkout</code> GitHub Action has been
updated from version 4.1.6 to 4.1.7 in the project's push workflow,
which checks out the repository at the start of the workflow. This
change brings potential bug fixes, performance improvements, or new
features compared to the previous version. The update only affects the
version number in the YAML configuration for the
<code>actions/checkout</code> step in the <code>release.yml</code> file,
with no new methods or alterations to existing functionality. This
update aims to ensure a smooth and enhanced user experience for those
utilizing the project's push workflows by taking advantage of the
possible improvements or bug fixes in the new version of
<code>actions/checkout</code>.</li>
<li>Create a dashboard with a counter from a single query (<a
href="https://github.com/databrickslabs/lsql/issues/107">#107</a>).
In this release, we have introduced several enhancements to our
dashboard-as-code approach, including the creation of a
<code>Dashboards</code> class that provides methods for getting, saving,
and deploying dashboards. A new method, <code>create_dashboard</code>,
has been added to create a dashboard with a single page containing a
counter widget. The counter widget is associated with a query that
counts the number of rows in a specified dataset. The
<code>deploy_dashboard</code> method has also been added to deploy the
dashboard to the workspace. Additionally, we have implemented a new
feature for creating dashboards with a counter from a single query,
including modifications to the <code>test_dashboards.py</code> file and
the addition of four new tests. These changes improve the robustness of
the dashboard creation process and provide a more automated way to view
important metrics.</li>
<li>Create text widget from markdown file (<a
href="https://github.com/databrickslabs/lsql/issues/142">#142</a>).
A new feature has been implemented in the library that allows for the
creation of a text widget from a markdown file, enhancing customization
and readability for users. This development resolves issue <a
href="https://github.com/databrickslabs/lsql/issues/1">#1</a></li>
<li>Design document for dashboards-as-code (<a
href="https://github.com/databrickslabs/lsql/issues/105">#105</a>).
&quot;The latest release introduces 'Dashboards as Code,' a method for
defining and managing dashboards through configuration files, enabling
version control and controlled changes. The building blocks include
<code>.sql</code>, <code>.md</code>, and <code>dashboard.yml</code>
files, with <code>.sql</code> defining queries and determining tile
order, and <code>dashboard.yml</code> specifying top-level metadata and
tile overrides. Metadata can be inferred or explicitly defined in the
query or files. The tile order can be determined by SQL file order,
<code>tiles</code> order in <code>dashboard.yml</code>, or SQL file
metadata. This project can also be used as a library for embedding
dashboard generation in your code. Configuration precedence follows
command-line flags, SQL file headers, <code>dashboard.yml</code>, and
SQL query content. The command-line interface is utilized for dashboard
generation from configuration files.&quot;</li>
<li>Ensure propagation of <code>lsql</code> version into
<code>User-Agent</code> header when it is used as library (<a
href="https://github.com/databrickslabs/lsql/issues/206">#206</a>).
In this release, the <code>pyproject.toml</code> file has been updated
to ensure that the correct version of the <code>lsql</code> library is
propagated into the <code>User-Agent</code> header when used as a
library, improving attribution. The <code>databricks-sdk</code> version
has been updated from <code>0.22.0</code> to <code>0.29.0</code>, and
the <code>__init__.py</code> file of the <code>lsql</code> library has
been modified to add the <code>with_user_agent_extra</code> function
from the <code>databricks.sdk.core</code> package for correct
attribution. The <code>backends.py</code> file has also been updated
with improved type handling in the <code>_row_to_sql</code> and
<code>save_table</code> functions for accurate SQL insertion and
handling of user-defined classes. Additionally, a test has been added to
ensure that the <code>lsql</code> version is correctly propagated in the
<code>User-Agent</code> header when used as a library. These changes
offer improved functionality and accurate type handling, making it
easier for developers to identify the library version when used in other
projects.</li>
<li>Fixed counter encodings (<a
href="https://github.com/databrickslabs/lsql/issues/143">#143</a>).
In this release, we have improved the encoding of counters in the lsql
dashboard by modifying the <code>create_dashboard</code> function in the
<code>dashboards.py</code> file. Previously, the counter field encoding
was hardcoded as <code>count</code>, but it now dynamically uses the
first field name of the given fields, since counters are expected to
have only one field. Additionally, a new
integration test has been added to the
<code>tests/integration/test_dashboards.py</code> file to ensure that
the dashboard deployment functionality correctly handles SQL queries
that do not perform a count. A new test for the <code>Dashboards</code>
class has also been added to check that counter field encoding names are
created as expected. The <code>WorkspaceClient</code> is mocked and not
called in this test. These changes enhance the accuracy of counter
encoding and improve the overall functionality and reliability of the
lsql dashboard.</li>
<li>Fixed non-existing reference and typo in the documentation (<a
href="https://github.com/databrickslabs/lsql/issues/104">#104</a>).
In this release, we've made improvements to the documentation of our
open-source library, specifically addressing issue <a
href="https://github.com/databrickslabs/lsql/issues/104">#104</a>.
The changes include fixing a non-existent reference and a typo in the
<code>Library size comparison</code> section of the
<code>comparison.md</code> document. This section provides guidance for
selecting a library based on factors like library size, unified
authentication, and compatibility with various Databricks warehouses and
SQL Python APIs. The updates clarify the required dependency size for
simple applications and scripts, and offer more detailed information
about each library option. We've also added a new subsection titled
<code>Detailed comparison</code> to provide a more comprehensive
overview of each library's features. These changes are intended to help
software engineers better understand which library is best suited for
their specific needs, particularly for applications that transfer large
amounts of data serialized in Apache Arrow format and require low
result-fetching latency, for which the Databricks SQL Connector for
Python is recommended.</li>
<li>Fixed parsing message (<a
href="https://github.com/databrickslabs/lsql/issues/146">#146</a>).
In this release, the warning message logged during the creation of a
dashboard when a ParseError occurs has been updated to provide clearer
and more detailed information about the parsing error. The new error
message now includes the specific query being parsed and the exact
parsing error, enabling developers to quickly identify the cause of
parsing issues. This change ensures that engineers can efficiently
diagnose and address parsing errors, improving the overall development
and debugging experience with a more informative log format:
&quot;Parsing {query}: {error}&quot;.</li>
<li>Improve dashboard as code (<a
href="https://github.com/databrickslabs/lsql/issues/108">#108</a>).
The <code>Dashboards</code> class in the <code>dashboards.py</code> file has been
updated to improve functionality and usability, with changes such as the
addition of a type variable <code>T</code> for type checking and more
descriptive names for methods. The <code>save_to_folder</code> method
now accepts a <code>Dashboard</code> object and returns a
<code>Dashboard</code> object, and a new static method
<code>create_dashboard</code> has been added. Additionally, two new
methods <code>_with_better_names</code> and <code>_replace_names</code>
have been added for improved readability. The <code>get_dashboard</code>
method now returns a <code>Dashboard</code> object instead of a
dictionary. The <code>save_to_folder</code> method now also formats SQL
code before saving it to file. These changes aim to enhance the
functionality and readability of the codebase and provide more
user-friendly methods for interacting with the <code>Dashboards</code>
class. In addition to the changes in the <code>Dashboards</code> class,
there have been updates in the organization of the project structure.
The <code>queries/counter.sql</code> file has been moved to
<code>dashboards/one_counter/counter.sql</code> in the
<code>tests/integration</code> directory.
Furthermore, several tests for the <code>Dashboards</code> class have
been introduced in the <code>databricks.labs.lsql.dashboards</code> module,
demonstrating various functionalities of the class and ensuring that it
functions as intended. The tests cover saving SQL and YML files to a
specified folder, creating a dataset and a counter widget for each
query, deploying dashboards with a given display name or dashboard ID,
and testing the behavior of the <code>save_to_folder</code> and
<code>deploy_dashboard</code> methods. Lastly, the commit removes the
<code>test_load_dashboard</code> function and updates the
<code>test_dashboard_creates_one_dataset_per_query</code> and
<code>test_dashboard_creates_one_counter_widget_per_query</code>
functions to use the updated <code>Dashboard</code> class. A new
<code>replace_recursively</code> function is introduced to replace
specific fields in a dataclass recursively. A new test function
<code>test_dashboards_deploys_exported_dashboard_definition</code> has
been added, which reads a dashboard definition from a JSON file, deploys
it, and checks if it's successfully deployed using the
<code>Dashboards</code> class. A new test function
<code>test_dashboard_deploys_dashboard_the_same_as_created_dashboard</code>
has also been added, which compares the original and deployed dashboards
to ensure they are identical. Overall, these changes make the
<code>Dashboards</code> class more functional and readable, improve the
organization of the project structure, and add tests to ensure the
class behaves as intended.</li>
<li>Infer fields from a query (<a
href="https://github.com/databrickslabs/lsql/issues/111">#111</a>).
The <code>Dashboards</code> class in the <code>dashboards.py</code> file
has been updated with the addition of a new method,
<code>_get_fields</code>, which accepts a SQL query as input and returns
a list of <code>Field</code> objects using the <code>sqlglot</code>
library to parse the query and extract the necessary information. The
<code>create_dashboard</code> method has been modified to call this new
function when creating <code>Query</code> objects for each dataset. If a
<code>ParseError</code> occurs, a warning is logged and iteration
continues. This allows for the automatic population of fields when
creating a new dashboard, eliminating the need for manual specification.
Additionally, new tests have been added for invalid queries and for
checking if the fields in a query have the expected names. These tests
include <code>test_dashboards_skips_invalid_query</code> and
<code>test_dashboards_gets_fields_with_expected_names</code>, which
utilize the caplog fixture and create temporary query files to verify
functionality. Existing functionality related to creating dashboards
remains unchanged.</li>
<li>Make constant all caps (<a
href="https://github.com/databrickslabs/lsql/issues/140">#140</a>).
In this release, the project's <code>dashboards.py</code> file has been
updated to improve code readability and maintainability. A constant
variable <code>_maximum_dashboard_width</code> has been changed to all
caps, becoming <code>_MAXIMUM_DASHBOARD_WIDTH</code>. This modification
affects the <code>Dashboards</code> class and its methods, particularly
<code>_get_fields</code> and <code>_get_position</code>. The
<code>_get_position</code> method has been revised to use the new
all-caps constant. This change ensures better visibility of constants
within the code, addressing issue <a
href="https://github.com/databrickslabs/lsql/issues/140">#140</a>.
Note that this modification only impacts the
<code>dashboards.py</code> file and does not affect any other functionality.</li>
<li>Read display name from <code>dashboard.yml</code> (<a
href="https://github.com/databrickslabs/lsql/issues/144">#144</a>).
In this release, we have introduced a new <code>DashboardMetadata</code>
dataclass that reads the display name of a dashboard from a
<code>dashboard.yml</code> file located in the dashboard's directory. If
the <code>dashboard.yml</code> file is absent, the folder name will be
used as the display name. This change improves the readability and
maintainability of the dashboard configuration by explicitly defining
the display name and reducing the need to specify widget information in
multiple places. We have also added a new fixture called
<code>make_dashboard</code> for creating and cleaning up lakeview
dashboards in the test suite. The fixture handles creation and deletion
of the dashboard and provides an option to set a custom display name.
Additionally, we have added and modified several unit tests to ensure
the proper handling of the <code>DashboardMetadata</code> class and the
dashboard creation process, including tests for missing, present, or
incorrect <code>display_name</code> keys in the YAML file. The
<code>dashboards.deploy_dashboard()</code> function has been updated to
handle cases where only <code>dashboard_id</code> is provided.</li>
<li>Set widget id in query header (<a
href="https://github.com/databrickslabs/lsql/issues/154">#154</a>).
In this release, we've made significant improvements to widget metadata
handling in our open-source library. We've introduced a new
<code>WidgetMetadata</code> class that replaces the previous
<code>WidgetMetadata</code> dataclass, now featuring a <code>path</code>
attribute, <code>spec_type</code> property, and optional parameters for
<code>order</code>, <code>width</code>, <code>height</code>, and
<code>_id</code>. The <code>_get_widgets</code> method has been updated
to accept an Iterable of <code>WidgetMetadata</code> objects, and both
<code>_get_layouts</code> and <code>_get_widgets</code> methods now sort
widgets using the order field. A new class method,
<code>WidgetMetadata.from_path</code>, handles parsing widget metadata
from a file path, replacing the removed
<code>_get_width_and_height</code> method. Additionally, the
<code>WidgetMetadata</code> class is now used in the
<code>deploy_dashboard</code> method, and the test suite for the
<code>dashboards</code> module has been enhanced with updated
<code>test_widget_metadata_replaces_width_and_height</code> and
<code>test_widget_metadata_replaces_attribute</code> functions, as well
as new tests for specific scenarios. Issue <a
href="https://github.com/databrickslabs/lsql/issues/154">#154</a>
has been addressed by setting the widget id in the query header, and the
aforementioned changes improve flexibility and ease of use for dashboard
development.</li>
<li>Use order key in query header if defined (<a
href="https://github.com/databrickslabs/lsql/issues/149">#149</a>).
In this release, we've introduced a new feature to use an order key in
the query header if defined, enhancing the flexibility and control over
the dashboard creation process. The <code>WidgetMetadata</code>
dataclass now includes an optional <code>order</code> parameter of type
<code>int</code>, and the <code>_get_arguments_parser()</code> method
accepts the <code>--order</code> flag with type <code>int</code>. The
<code>replace_from_arguments()</code> method has been updated to support
the new <code>order</code> parameter, with a default value of
<code>self.order</code>. The <code>create_dashboard()</code> method now
implements a new <code>_get_datasets()</code> method to retrieve
datasets from the dashboard folder and introduces a
<code>_get_widgets()</code> method, which accepts a list of files,
iterates over them, and yields tuples containing widgets and their
corresponding metadata, including the order. These improvements enable
the use of an order key in query headers, ensuring the correct order of
widgets in the dashboard creation process. Additionally, a new test case
has been added to verify the correct behavior of the dashboard
deployment with a specified order key in the query header. This feature
resolves issue <a
href="https://github.com/databrickslabs/lsql/issues/148">#148</a>.</li>
<li>Use widget width and height defined in query header (<a
href="https://github.com/databrickslabs/lsql/issues/147">#147</a>).
In this release, the handling of metadata in SQL files has been updated
to utilize the header of the file, instead of the first line, for
improved readability and flexibility. This change includes a new
WidgetMetadata class for defining the width and height of a widget in a
dashboard, as well as new methods for parsing the widget metadata from a
provided path. The release also includes updates to the documentation to
cover the supported widget arguments <code>-w</code>/<code>--width</code>
and <code>-h</code>/<code>--height</code>, and resolves issue <a
href="https://github.com/databrickslabs/lsql/issues/114">#114</a>
by adding a test for deploying a dashboard with a big widget using a new
function <code>test_dashboard_deploys_dashboard_with_big_widget</code>.
Additionally, new test cases have been added for creating dashboards
with custom-sized widgets based on query header width and height values,
improving functionality and error handling.</li>
</ul>
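The backend hierarchy described in the first bullet (an abstract <code>ExecutionBackend</code> base providing <code>save_table</code>, shared by the Statement and Command backends) can be sketched as a minimal, self-contained Python skeleton. The real classes wrap the Databricks SDK; the method bodies below are illustrative stand-ins, not the library's implementation:

```python
from abc import ABC, abstractmethod
from typing import Any, Iterator


class ExecutionBackend(ABC):
    """Common base class for the Statement and Command execution backends."""

    @abstractmethod
    def execute(self, sql: str) -> None: ...

    @abstractmethod
    def fetch(self, sql: str) -> Iterator[Any]: ...

    def save_table(self, full_name: str, rows: list) -> None:
        # Shared persistence helper: render one INSERT per row (a stand-in
        # for the real batching logic driven by max_records_per_batch).
        for row in rows:
            self.execute(f"INSERT INTO {full_name} VALUES {row!r}")


class StatementExecutionBackend(ExecutionBackend):
    """Stand-in for the warehouse-backed backend described in the notes."""

    def __init__(self, warehouse_id: str, max_records_per_batch: int = 1000):
        self._warehouse_id = warehouse_id
        self._max_records_per_batch = max_records_per_batch
        self.executed: list[str] = []  # captures statements for illustration

    def execute(self, sql: str) -> None:
        # The real class submits the statement via the Statement Execution API.
        self.executed.append(sql)

    def fetch(self, sql: str) -> Iterator[Any]:
        self.execute(sql)
        return iter(())
```

The key design point from the notes is that <code>save_table</code> lives on the shared base class, so both backends persist rows the same way while differing only in how <code>execute</code> and <code>fetch</code> reach the cluster or warehouse.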
<p>Dependency updates:</p>
<ul>
<li>Bump actions/checkout from 4.1.3 to 4.1.6 (<a
href="https://github.com/databrickslabs/lsql/pull/102">#102</a>).</li>
<li>Bump actions/checkout from 4.1.6 to 4.1.7 (<a
href="https://github.com/databrickslabs/lsql/pull/151">#151</a>).</li>
</ul>
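The automatic widget tiling described for <a href="https://github.com/databrickslabs/lsql/issues/109">#109</a> can be illustrated with a small stand-alone function. The constant value (6 columns) comes from the release notes; the placement logic itself is an assumed simplification of the library's <code>_get_position</code>:

```python
# Maximum number of grid columns per dashboard row (value from the notes).
_MAXIMUM_DASHBOARD_WIDTH = 6


def get_position(placed: list[tuple[int, int, int, int]], width: int) -> tuple[int, int]:
    """Return (x, y) for the next widget, given placed (x, y, w, h) tuples.

    Widgets flow left to right and wrap to a new row once the next widget
    would exceed the maximum dashboard width.
    """
    if not placed:
        return 0, 0
    x, y, w, h = placed[-1]
    next_x = x + w
    if next_x + width > _MAXIMUM_DASHBOARD_WIDTH:
        return 0, y + h  # wrap below the previous row
    return next_x, y
```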
<p>Contributors: <a
href="https://github.com/JCZuurmond"><code>@​JCZuurmond</code></a>, <a
href="https://github.com/nfx"><code>@​nfx</code></a>, <a
href="https://github.com/dependabot"><code>@​dependabot</code></a>[bot],
<a href="https://github.com/nkvuong"><code>@​nkvuong</code></a></p>
</blockquote>
</details>
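Several bullets mention per-widget header arguments (<code>--order</code>, <code>-w</code>/<code>--width</code>, <code>-h</code>/<code>--height</code>) parsed from a SQL file's comment header. The flag names come from the release notes; the parser below is a hypothetical stdlib sketch of how such a header could be turned into overrides:

```python
import argparse


def parse_widget_header(header: str) -> dict:
    """Parse a header comment such as '-- --order 2 -w 3 -h 4' into overrides."""
    # add_help=False is required because -h is reused for --height here.
    parser = argparse.ArgumentParser(add_help=False)
    parser.add_argument("--order", type=int, default=None)
    parser.add_argument("-w", "--width", type=int, default=0)
    parser.add_argument("-h", "--height", type=int, default=0)
    # Strip the leading SQL comment marker, then parse only known flags so
    # unrelated header content is ignored rather than raising an error.
    tokens = header.removeprefix("--").split()
    args, _unknown = parser.parse_known_args(tokens)
    return vars(args)
```

With this shape, a missing flag simply falls back to its default, matching the precedence idea in the notes where header values override <code>dashboard.yml</code> and file-order defaults.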
<details>
<summary>Changelog</summary>
<p><em>Sourced from <a
href="https://github.com/databrickslabs/lsql/blob/main/CHANGELOG.md">databricks-labs-lsql's
changelog</a>.</em></p>
<blockquote>
<h2>0.5.0</h2>
<ul>
<li>Added Command Execution backend which uses Command Execution API on
a cluster (<a
href="https://github.com/databrickslabs/lsql/issues/95">#95</a>).
In this release, the <code>databricks-labs-lsql</code> library has been updated with
a new Command Execution backend that utilizes the Command Execution API.
A new <code>CommandExecutionBackend</code> class has been implemented,
which initializes a <code>CommandExecutor</code> instance taking a
cluster ID, workspace client, and language as parameters. The
<code>execute</code> method runs SQL commands on the specified cluster,
and the <code>fetch</code> method returns the query result as an
iterator of Row objects. The existing
<code>StatementExecutionBackend</code> class has been updated to inherit
from a new abstract base class called <code>ExecutionBackend</code>,
which includes a <code>save_table</code> method for saving data to
tables and is meant to be a common base class for both Statement and
Command Execution backends. The <code>StatementExecutionBackend</code>
class has also been updated to use the new <code>ExecutionBackend</code>
abstract class and its constructor now accepts a
<code>max_records_per_batch</code> parameter. The <code>execute</code>
and <code>fetch</code> methods have been updated to use the new
<code>_only_n_bytes</code> method for logging truncated SQL statements.
Additionally, the <code>CommandExecutionBackend</code> class provides
<code>execute</code>, <code>fetch</code>, and <code>save_table</code>
methods to execute commands on a cluster and save the results to tables
in the Databricks workspace.</li>
<li>Added basic integration with Lakeview Dashboards (<a
href="https://github.com/databrickslabs/lsql/issues/66">#66</a>).
In this release, we've added basic integration with Lakeview Dashboards
to the project, enhancing its capabilities. This includes updating the
<code>databricks-labs-blueprint</code> dependency to version 0.4.2 with
the <code>[yaml]</code> extra, allowing for additional functionality
related to handling YAML files. A new file, <code>dashboards.py</code>,
has been introduced, providing a class for interacting with Databricks
dashboards, along with methods for retrieving and saving dashboard
configurations. Additionally, a new <code>__init__.py</code> file under
the <code>src/databricks/labs/lsql/lakeview</code> directory imports all
classes and functions from the <code>model.py</code> module, providing a
foundation for further development and customization. The release also
introduces a new file, <code>model.py</code>, containing code generated
from OpenAPI specs by the Databricks SDK Generator, and a template file,
<code>model.py.tmpl</code>, used for handling JSON data during
integration with Lakeview Dashboards. A new file,
<code>polymorphism.py</code>, provides utilities for checking if a value
can be assigned to a specific type, supporting correct data typing and
formatting with Lakeview Dashboards. Furthermore, a
<code>.gitignore</code> file has been added to the
<code>tests/integration</code> directory as part of the initial steps in
adding integration testing to ensure compatibility with the Lakeview
Dashboards platform. Lastly, the <code>test_dashboards.py</code> file in
the <code>tests/integration</code> directory contains a function,
<code>test_load_dashboard(ws)</code>, which uses the
<code>Dashboards</code> class to save a dashboard from a source to a
destination path, facilitating testing during the integration
process.</li>
<li>Added dashboard-as-code functionality (<a
href="https://github.com/databrickslabs/lsql/issues/201">#201</a>).
This commit introduces dashboard-as-code functionality for the UCX
project, enabling the creation and management of dashboards using code.
The feature resolves multiple issues and includes a new
<code>create-dashboard</code> command for creating unpublished
dashboards. The functionality is available in the <code>lsql</code> lab
and allows for specifying the order and width of widgets, overriding
default widget identifiers, and supporting various SQL and markdown
header arguments. The <code>dashboard.yml</code> file is used to define
top-level metadata for the dashboard. This commit also includes
extensive documentation and examples for using the dashboard as a
library and configuring different options.</li>
<li>Automate opening integration test dashboard in debug mode (<a
href="https://github.com/databrickslabs/lsql/issues/167">#167</a>).
A new feature has been added to automatically open the integration test
dashboard in debug mode, making it easier for software engineers to
debug and troubleshoot. This has been achieved by importing the
<code>webbrowser</code> and <code>is_in_debug</code> modules from
&quot;databricks.labs.blueprint.entrypoint&quot;, and adding a check in
the <code>create</code> function to determine if the code is running in
debug mode. If it is, a dashboard URL is constructed from the workspace
configuration and dashboard ID, and then opened in a web browser using
&quot;webbrowser.open&quot;. This allows for a more streamlined
debugging process for the integration test dashboard. No other parts of
the code have been affected by this change.</li>
<li>Automatically tile widgets (<a
href="https://github.com/databrickslabs/lsql/issues/109">#109</a>).
In this release, we've introduced an automatic widget tiling feature for
the dashboard creation process in our open-source library. The
<code>Dashboards</code> class now includes a new class variable,
<code>_maximum_dashboard_width</code>, set to 6, representing the
maximum width allowed for each row of widgets in the dashboard. The
<code>create_dashboard</code> method has been updated to accept a new
<code>self</code> parameter, turning it into an instance method. A new
<code>_get_position</code> method has been introduced to calculate and
return the next available position for placing a widget, and a
<code>_get_width_and_height</code> method has been added to return the
width and height for a widget specification, initially handling
<code>CounterSpec</code> instances. Additionally, we've added new unit
tests to improve testing coverage, ensuring that widgets are created,
positioned, and sized correctly. These tests also cover the correct
positioning of widgets based on their order and available space, as well
as the expected width and height for each widget.</li>
<li>Bump actions/checkout from 4.1.3 to 4.1.6 (<a
href="https://github.com/databrickslabs/lsql/issues/102">#102</a>).
In the latest release, the 'actions/checkout' GitHub Action has been
updated from version 4.1.3 to 4.1.6, which includes checking the
platform to set the archive extension appropriately. This release also
bumps the version of github/codeql-action from 2 to 3,
actions/setup-node from 1 to 4, and actions/upload-artifact from 2 to 4.
Additionally, the minor-actions-dependencies group was updated with two
new versions. Disabling extensions.worktreeConfig when disabling
sparse-checkout was introduced in version 4.1.4. The release notes and
changelog for this update can be found in the provided link. This commit
was made by dependabot[bot] with contributions from cory-miller and
jww3.</li>
<li>Bump actions/checkout from 4.1.6 to 4.1.7 (<a
href="https://github.com/databrickslabs/lsql/issues/151">#151</a>).
In the latest release, the 'actions/checkout' GitHub action has been
updated from version 4.1.6 to 4.1.7 in the project's push workflow,
which checks out the repository at the start of the workflow. This
change brings potential bug fixes, performance improvements, or new
features compared to the previous version. The update only affects the
version number in the YAML configuration for the 'actions/checkout' step
in the release.yml file, with no new methods or alterations to existing
functionality. This update aims to ensure a smooth and enhanced user
experience for those utilizing the project's push workflows by taking
advantage of the possible improvements or bug fixes in the new version
of 'actions/checkout'.</li>
<li>Create a dashboard with a counter from a single query (<a
href="https://github.com/databrickslabs/lsql/issues/107">#107</a>).
In this release, we have introduced several enhancements to our
dashboard-as-code approach, including the creation of a
<code>Dashboards</code> class that provides methods for getting, saving,
and deploying dashboards. A new method, <code>create_dashboard</code>,
has been added to create a dashboard with a single page containing a
counter widget. The counter widget is associated with a query that
counts the number of rows in a specified dataset. The
<code>deploy_dashboard</code> method has also been added to deploy the
dashboard to the workspace. Additionally, we have implemented a new
feature for creating dashboards with a counter from a single query,
including modifications to the <code>test_dashboards.py</code> file and
the addition of four new tests. These changes improve the robustness of
the dashboard creation process and provide a more automated way to view
important metrics.</li>
<li>Create text widget from markdown file (<a
href="https://github.com/databrickslabs/lsql/issues/142">#142</a>).
A new feature has been implemented in the library that allows for the
creation of a text widget from a markdown file, enhancing customization
and readability for users. This development resolves issue <a
href="https://github.com/databrickslabs/lsql/issues/1">#1</a></li>
<li>Design document for dashboards-as-code (<a
href="https://github.com/databrickslabs/lsql/issues/105">#105</a>).
&quot;The latest release introduces 'Dashboards as Code,' a method for
defining and managing dashboards through configuration files, enabling
version control and controlled changes. The building blocks include
<code>.sql</code>, <code>.md</code>, and <code>dashboard.yml</code>
files, with <code>.sql</code> defining queries and determining tile
order, and <code>dashboard.yml</code> specifying top-level metadata and
tile overrides. Metadata can be inferred or explicitly defined in the
query or files. The tile order can be determined by SQL file order,
<code>tiles</code> order in <code>dashboard.yml</code>, or SQL file
metadata. This project can also be used as a library for embedding
dashboard generation in your code. Configuration precedence follows
command-line flags, SQL file headers, <code>dashboard.yml</code>, and
SQL query content. The command-line interface is utilized for dashboard
generation from configuration files.&quot;</li>
<li>Ensure propagation of <code>lsql</code> version into
<code>User-Agent</code> header when it is used as library (<a
href="https://github.com/databrickslabs/lsql/issues/206">#206</a>).
In this release, the <code>pyproject.toml</code> file has been updated
to ensure that the correct version of the <code>lsql</code> library is
propagated into the <code>User-Agent</code> header when used as a
library, improving attribution. The <code>databricks-sdk</code> version
has been updated from <code>0.22.0</code> to <code>0.29.0</code>, and
the <code>__init__.py</code> file of the <code>lsql</code> library has
been modified to add the <code>with_user_agent_extra</code> function
from the <code>databricks.sdk.core</code> package for correct
attribution. The <code>backends.py</code> file has also been updated
with improved type handling in the <code>_row_to_sql</code> and
<code>save_table</code> functions for accurate SQL insertion and
handling of user-defined classes. Additionally, a test has been added to
ensure that the <code>lsql</code> version is correctly propagated in the
<code>User-Agent</code> header when used as a library. These changes
offer improved functionality and accurate type handling, making it
easier for developers to identify the library version when used in other
projects.</li>
<li>Fixed counter encodings (<a
href="https://github.com/databrickslabs/lsql/issues/143">#143</a>).
In this release, we have improved the encoding of counters in the lsql
dashboard by modifying the <code>create_dashboard</code> function in the
<code>dashboards.py</code> file. Previously, the counter field encoding
was hardcoded as <code>count</code>, but it is now changed to dynamically
determine the first field name of the given fields, since counters
are expected to have only one field. Additionally, a new
integration test has been added to the
<code>tests/integration/test_dashboards.py</code> file to ensure that
the dashboard deployment functionality correctly handles SQL queries
that do not perform a count. A new test for the <code>Dashboards</code>
class has also been added to check that counter field encoding names are
created as expected. The <code>WorkspaceClient</code> is mocked and not
called in this test. These changes enhance the accuracy of counter
encoding and improve the overall functionality and reliability of the
lsql dashboard.</li>
<li>Fixed non-existing reference and typo in the documentation (<a
href="https://github.com/databrickslabs/lsql/issues/104">#104</a>).
In this release, we've made improvements to the documentation of our
open-source library, specifically addressing issue <a
href="https://github.com/databrickslabs/lsql/issues/104">#104</a>.
The changes include fixing a non-existent reference and a typo in the
<code>Library size comparison</code> section of the
&quot;comparison.md&quot; document. This section provides guidance for
selecting a library based on factors like library size, unified
authentication, and compatibility with various Databricks warehouses and
SQL Python APIs. The updates clarify the required dependency size for
simple applications and scripts, and offer more detailed information
about each library option. We've also added a new subsection titled
<code>Detailed comparison</code> to provide a more comprehensive
overview of each library's features. These changes are intended to help
software engineers better understand which library is best suited for
their specific needs, particularly for applications that transfer
large amounts of data serialized in Apache Arrow format and need low
result-fetching latency, where we recommend the Databricks SQL
Connector for Python for efficient data transfer and low latency.</li>
<li>Fixed parsing message (<a
href="https://github.com/databrickslabs/lsql/issues/146">#146</a>).
In this release, the warning message logged during the creation of a
dashboard when a ParseError occurs has been updated to provide clearer
and more detailed information about the parsing error. The new error
message now includes the specific query being parsed and the exact
parsing error, enabling developers to quickly identify the cause of
parsing issues. This change ensures that engineers can efficiently
diagnose and address parsing errors, improving the overall development
and debugging experience with a more informative log format:
&quot;Parsing {query}: {error}&quot;.</li>
<li>Improve dashboard as code (<a
href="https://github.com/databrickslabs/lsql/issues/108">#108</a>).
The <code>Dashboards</code> class in the 'dashboards.py' file has been
updated to improve functionality and usability, with changes such as the
addition of a type variable <code>T</code> for type checking and more
descriptive names for methods. The <code>save_to_folder</code> method
now accepts a <code>Dashboard</code> object and returns a
<code>Dashboard</code> object, and a new static method
<code>create_dashboard</code> has been added. Additionally, two new
methods <code>_with_better_names</code> and <code>_replace_names</code>
have been added for improved readability. The <code>get_dashboard</code>
method now returns a <code>Dashboard</code> object instead of a
dictionary. The <code>save_to_folder</code> method now also formats SQL
code before saving it to file. These changes aim to enhance the
functionality and readability of the codebase and provide more
user-friendly methods for interacting with the <code>Dashboards</code>
class. In addition to the changes in the <code>Dashboards</code> class,
there have been updates in the organization of the project structure.
The 'queries/counter.sql' file has been moved to
'dashboards/one_counter/counter.sql' in the 'tests/integration'
directory. This modification enhances the organization of the project.
Furthermore, several tests for the <code>Dashboards</code> class have
been introduced in the 'databricks.labs.lsql.dashboards' module,
demonstrating various functionalities of the class and ensuring that it
functions as intended. The tests cover saving SQL and YML files to a
specified folder, creating a dataset and a counter widget for each
query, deploying dashboards with a given display name or dashboard ID,
and testing the behavior of the <code>save_to_folder</code> and
<code>deploy_dashboard</code> methods. Lastly, the commit removes the
<code>test_load_dashboard</code> function and updates the
<code>test_dashboard_creates_one_dataset_per_query</code> and
<code>test_dashboard_creates_one_counter_widget_per_query</code>
functions to use the updated <code>Dashboard</code> class. A new
<code>replace_recursively</code> function is introduced to replace
specific fields in a dataclass recursively. A new test function
<code>test_dashboards_deploys_exported_dashboard_definition</code> has
been added, which reads a dashboard definition from a JSON file, deploys
it, and checks if it's successfully deployed using the
<code>Dashboards</code> class. A new test function
<code>test_dashboard_deploys_dashboard_the_same_as_created_dashboard</code>
has also been added, which compares the original and deployed dashboards
to ensure they are identical. Overall, these changes improve the
functionality and readability of the codebase, provide more
user-friendly methods for interacting with the <code>Dashboards</code>
class, tidy the project structure, and add tests to ensure the class
functions as intended.</li>
<li>Infer fields from a query (<a
href="https://github.com/databrickslabs/lsql/issues/111">#111</a>).
The <code>Dashboards</code> class in the <code>dashboards.py</code> file
has been updated with the addition of a new method,
<code>_get_fields</code>, which accepts a SQL query as input and returns
a list of <code>Field</code> objects using the <code>sqlglot</code>
library to parse the query and extract the necessary information. The
<code>create_dashboard</code> method has been modified to call this new
function when creating <code>Query</code> objects for each dataset. If a
<code>ParseError</code> occurs, a warning is logged and iteration
continues. This allows for the automatic population of fields when
creating a new dashboard, eliminating the need for manual specification.
Additionally, new tests have been added for invalid queries and for
checking if the fields in a query have the expected names. These tests
include <code>test_dashboards_skips_invalid_query</code> and
<code>test_dashboards_gets_fields_with_expected_names</code>, which
utilize the caplog fixture and create temporary query files to verify
functionality. Existing functionality related to creating dashboards
remains unchanged.</li>
<li>Make constant all caps (<a
href="https://github.com/databrickslabs/lsql/issues/140">#140</a>).
In this release, the project's <code>dashboards.py</code> file has been updated to
improve code readability and maintainability. A constant variable
<code>_maximum_dashboard_width</code> has been changed to all caps,
becoming <code>_MAXIMUM_DASHBOARD_WIDTH</code>. This modification affects the
<code>Dashboards</code> class and its methods, particularly
<code>_get_fields</code> and <code>_get_position</code>. The
<code>_get_position</code> method has been revised to use the new all
caps constant variable. This change ensures better visibility of
constants within the code, addressing issue <a
href="https://github.com/databrickslabs/lsql/issues/140">#140</a>.
Note that this modification only impacts <code>dashboards.py</code> and
does not affect any other functionality.</li>
<li>Read display name from <code>dashboard.yml</code> (<a
href="https://github.com/databrickslabs/lsql/issues/144">#144</a>).
In this release, we have introduced a new <code>DashboardMetadata</code>
dataclass that reads the display name of a dashboard from a
<code>dashboard.yml</code> file located in the dashboard's directory. If
the <code>dashboard.yml</code> file is absent, the folder name will be
used as the display name. This change improves the readability and
maintainability of the dashboard configuration by explicitly defining
the display name and reducing the need to specify widget information in
multiple places. We have also added a new fixture called
<code>make_dashboard</code> for creating and cleaning up lakeview
dashboards in the test suite. The fixture handles creation and deletion
of the dashboard and provides an option to set a custom display name.
Additionally, we have added and modified several unit tests to ensure
the proper handling of the <code>DashboardMetadata</code> class and the
dashboard creation process, including tests for missing, present, or
incorrect <code>display_name</code> keys in the YAML file. The
<code>dashboards.deploy_dashboard()</code> function has been updated to
handle cases where only <code>dashboard_id</code> is provided.</li>
<li>Set widget id in query header (<a
href="https://github.com/databrickslabs/lsql/issues/154">#154</a>).
In this release, we've made significant improvements to widget metadata
handling in our open-source library. We've introduced a new
<code>WidgetMetadata</code> class that replaces the previous
<code>WidgetMetadata</code> dataclass, now featuring a <code>path</code>
attribute, <code>spec_type</code> property, and optional parameters for
<code>order</code>, <code>width</code>, <code>height</code>, and
<code>_id</code>. The <code>_get_widgets</code> method has been updated
to accept an Iterable of <code>WidgetMetadata</code> objects, and both
<code>_get_layouts</code> and <code>_get_widgets</code> methods now sort
widgets using the order field. A new class method,
<code>WidgetMetadata.from_path</code>, handles parsing widget metadata
from a file path, replacing the removed
<code>_get_width_and_height</code> method. Additionally, the
<code>WidgetMetadata</code> class is now used in the
<code>deploy_dashboard</code> method, and the test suite for the
<code>dashboards</code> module has been enhanced with updated
<code>test_widget_metadata_replaces_width_and_height</code> and
<code>test_widget_metadata_replaces_attribute</code> functions, as well
as new tests for specific scenarios. Issue <a
href="https://github.com/databrickslabs/lsql/issues/154">#154</a>
has been addressed by setting the widget id in the query header, and the
aforementioned changes improve flexibility and ease of use for dashboard
development.</li>
<li>Use order key in query header if defined (<a
href="https://github.com/databrickslabs/lsql/issues/149">#149</a>).
In this release, we've introduced a new feature to use an order key in
the query header if defined, enhancing the flexibility and control over
the dashboard creation process. The <code>WidgetMetadata</code>
dataclass now includes an optional <code>order</code> parameter of type
<code>int</code>, and the <code>_get_arguments_parser()</code> method
accepts the <code>--order</code> flag with type <code>int</code>. The
<code>replace_from_arguments()</code> method has been updated to support
the new <code>order</code> parameter, with a default value of
<code>self.order</code>. The <code>create_dashboard()</code> method now
implements a new <code>_get_datasets()</code> method to retrieve
datasets from the dashboard folder and introduces a
<code>_get_widgets()</code> method, which accepts a list of files,
iterates over them, and yields tuples containing widgets and their
corresponding metadata, including the order. These improvements enable
the use of an order key in query headers, ensuring the correct order of
widgets in the dashboard creation process. Additionally, a new test case
has been added to verify the correct behavior of the dashboard
deployment with a specified order key in the query header. This feature
resolves issue <a
href="https://github.com/databrickslabs/lsql/issues/148">#148</a>.</li>
<li>Use widget width and height defined in query header (<a
href="https://github.com/databrickslabs/lsql/issues/147">#147</a>).
In this release, the handling of metadata in SQL files has been updated
to utilize the header of the file, instead of the first line, for
improved readability and flexibility. This change includes a new
WidgetMetadata class for defining the width and height of a widget in a
dashboard, as well as new methods for parsing the widget metadata from a
provided path. The release also includes updates to the documentation to
cover the supported widget arguments <code>-w</code>/<code>--width</code> and
<code>-h</code>/<code>--height</code>, and resolves issue <a
href="https://github.com/databrickslabs/lsql/issues/114">#114</a>
by adding a test for deploying a dashboard with a big widget using a new
function <code>test_dashboard_deploys_dashboard_with_big_widget</code>.
Additionally, new test cases have been added for creating dashboards
with custom-sized widgets based on query header width and height values,
improving functionality and error handling.</li>
</ul>
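The automatic widget tiling described above (#109) amounts to a row-filling layout capped at a maximum dashboard width of 6. The sketch below illustrates that placement logic with hypothetical names; the actual `_get_position` implementation in lsql may differ:

```python
_MAXIMUM_DASHBOARD_WIDTH = 6  # maximum total width of one row of widgets


def layout_widgets(sizes):
    """Assign (x, y) grid positions to widgets of given (width, height),
    filling each row left to right and wrapping to a new row when a
    widget no longer fits within the maximum dashboard width."""
    positions = []
    x, y, row_height = 0, 0, 0
    for width, height in sizes:
        if x + width > _MAXIMUM_DASHBOARD_WIDTH:  # start a new row
            x = 0
            y += row_height
            row_height = 0
        positions.append((x, y))
        x += width
        row_height = max(row_height, height)
    return positions
```

For example, three widgets of width 3 place two on the first row and wrap the third, while a full-width widget forces the next widget onto a new row.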
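The field inference added in #111 parses each query with `sqlglot` and derives field names from the projections. A hedged sketch of that idea follows; the helper name `get_field_names` is an assumption, and lsql's `_get_fields` returns `Field` objects rather than plain strings:

```python
import sqlglot


def get_field_names(query: str) -> list[str]:
    """Derive result column names from a SQL query using sqlglot.

    On a ParseError, log and skip as described in the release notes
    (simplified here to returning an empty list)."""
    try:
        parsed = sqlglot.parse_one(query)
    except sqlglot.ParseError as error:
        print(f"Parsing {query}: {error}")  # matches the new log format
        return []
    # For a SELECT, .expressions is the projection list; alias_or_name
    # yields the alias if present, otherwise the column name.
    return [projection.alias_or_name for projection in parsed.expressions]
```
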
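The `DashboardMetadata` behavior from #144 — display name read from `dashboard.yml`, falling back to the folder name — can be sketched as below, assuming PyYAML. This is illustrative only, not the actual lsql dataclass:

```python
from pathlib import Path

import yaml  # PyYAML


def read_display_name(folder: Path) -> str:
    """Read the dashboard display name from dashboard.yml in the given
    folder, falling back to the folder name when the file or the
    display_name key is absent."""
    dashboard_yml = folder / "dashboard.yml"
    if dashboard_yml.is_file():
        metadata = yaml.safe_load(dashboard_yml.read_text()) or {}
        if isinstance(metadata, dict) and "display_name" in metadata:
            return str(metadata["display_name"])
    return folder.name
```
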
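The query-header flags described in #147 and #149 (`--order`, `-w`/`--width`, `-h`/`--height`) suggest an argparse-based header parser roughly like the following. The function name and defaults are assumptions, not lsql's actual `_get_arguments_parser`:

```python
import argparse
import shlex


def parse_widget_header(sql: str) -> dict:
    """Parse widget arguments (order, width, height) from the first
    comment line of a SQL query, e.g. '-- --order 2 -w 3 -h 6'."""
    # add_help=False frees up -h so it can mean --height instead of help.
    parser = argparse.ArgumentParser(add_help=False)
    parser.add_argument("--order", type=int, default=None)
    parser.add_argument("-w", "--width", type=int, default=0)
    parser.add_argument("-h", "--height", type=int, default=0)
    first_line = sql.splitlines()[0] if sql else ""
    if not first_line.startswith("--"):
        return vars(parser.parse_args([]))  # no header: all defaults
    tokens = shlex.split(first_line.removeprefix("--"))
    args, _unknown = parser.parse_known_args(tokens)
    return vars(args)
```
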
<p>Dependency updates:</p>
<ul>
<li>Bump actions/checkout from 4.1.3 to 4.1.6 (<a
href="https://github.com/databrickslabs/lsql/pull/102">#102</a>).</li>
<li>Bump actions/checkout from 4.1.6 to 4.1.7 (<a
href="https://github.com/databrickslabs/lsql/pull/151">#151</a>).</li>
</ul>
<h2>0.4.3</h2>
<ul>
<li>Bump actions/checkout from 4.1.2 to 4.1.3 (<a
href="https://github.com/databrickslabs/lsql/issues/97">#97</a>).
The <code>actions/checkout</code> dependency has been updated from
version 4.1.2 to 4.1.3 in the <code>update-main-version.yml</code> file.
This new version includes a check to verify the git version before
attempting to disable <code>sparse-checkout</code>, and adds an SSH user
parameter to improve functionality and compatibility. The release notes
and CHANGELOG.md file provide detailed information on the specific
changes and improvements. The pull request also includes a detailed
commit history and links to corresponding issues and pull requests on
GitHub for transparency. You can review and merge the pull request to
update the <code>actions/checkout</code> dependency in your
project.</li>
<li>Maintain PySpark compatibility for databricks.labs.lsql.core.Row (<a
href="https://github.com/databrickslabs/lsql/issues/99">#99</a>).
In this release, we have added a new method <code>asDict</code> to the
<code>Row</code> class in the <code>databricks.labs.lsql.core</code>
module to maintain compatibility with PySpark. This method returns a
dictionary representation of the <code>Row</code> object, with keys
corresponding to column names and values corresponding to the values in
each column. Additionally, we have modified the <code>fetch</code>
function in the <code>backends.py</code> file to return
<code>pyspark.sql.Row</code> objects when using
<code>self._spark.sql(sql).collect()</code>. This change is temporary
and marked with a <code>TODO</code> comment, indicating that it will be
addressed in the future. We have also added error handling code in the
<code>fetch</code> function to ensure the function operates as expected.
The <code>asDict</code> method simply delegates to the existing
<code>as_dict</code> method, so the two behave identically: both return
a dictionary keyed by column name. The optional <code>recursive</code>
argument of <code>asDict</code> is intended to enable recursive
conversion of nested <code>Row</code> objects to nested dictionaries,
but this behavior is not currently implemented and the argument
defaults to <code>False</code>.</li>
</ul>
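The PySpark-compatible `asDict` shim described above can be illustrated with a minimal `Row` sketch; this is a toy stand-in, not the actual `databricks.labs.lsql.core.Row`:

```python
class Row(tuple):
    """Minimal sketch of a PySpark-compatible Row built from column
    names and values (illustrative, not the lsql implementation)."""

    def __new__(cls, columns, values):
        row = super().__new__(cls, values)
        row.__columns__ = list(columns)
        return row

    def as_dict(self) -> dict:
        """Map column names to their values."""
        return dict(zip(self.__columns__, self))

    def asDict(self, recursive: bool = False) -> dict:
        """PySpark-compatible alias; `recursive` is accepted but, as
        noted in the release notes, recursion is not implemented."""
        return self.as_dict()
```
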
<p>Dependency updates:</p>
<ul>
<li>Bump actions/checkout from 4.1.2 to 4.1.3 (<a
href="https://github.com/databrickslabs/lsql/pull/97">#97</a>).</li>
</ul>
<h2>0.4.2</h2>
<ul>
<li>Added more <code>NotFound</code> error type (<a
href="https://github.com/databrickslabs/lsql/issues/94">#94</a>).
In the latest update, the <code>core.py</code> file in the
<code>databricks/labs/lsql</code> package has undergone enhancements to
the error handling functionality. The <code>_raise_if_needed</code>
function has been modified to raise a <code>NotFound</code> error when
the error message includes the phrase &quot;does not exist&quot;. This
update enables the system to categorize specific SQL query errors as
<code>NotFound</code> error messages, thereby improving the overall
error handling and reporting capabilities. This change was a
collaborative effort, as indicated by the co-authored-by statement in
the commit.</li>
</ul>
<h2>0.4.1</h2>
<ul>
<li>Fixing overwrite integration tests (<a
href="https://github.com/databrickslabs/lsql/issues/92">#92</a>).
The integration tests for the <code>overwrite</code> feature have been
fixed to properly exercise write operations. Two new variables,
<code>catalog</code> and <code>schema</code>, are obtained via the
<code>env_or_skip</code> function and used in the
<code>save_table</code> method, which is now invoked twice on the same
table, once with the <code>append</code> and once with the
<code>overwrite</code> option. The data in the table is retrieved and
checked for accuracy after each call, using the updated
<code>Row</code> class with revised field names <code>first</code> and
<code>second</code>, formerly <code>name</code> and <code>id</code>.
This ensures the proper operation of the <code>overwrite</code> feature
during integration tests.</li>
</ul>
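The append-then-overwrite sequence exercised by the fixed integration test can be demonstrated with an in-memory stand-in for `save_table`. The class below is hypothetical and only mirrors the mode semantics, not the real lsql backend:

```python
class FakeBackend:
    """In-memory stand-in illustrating append vs overwrite semantics of
    a save_table-style method (hypothetical, not the lsql backend)."""

    def __init__(self):
        self._tables: dict[str, list] = {}

    def save_table(self, full_name: str, rows: list, mode: str = "append") -> None:
        if mode == "overwrite":
            self._tables[full_name] = list(rows)  # replace existing rows
        elif mode == "append":
            self._tables.setdefault(full_name, []).extend(rows)
        else:
            raise ValueError(f"unknown mode: {mode}")

    def fetch(self, full_name: str) -> list:
        return list(self._tables.get(full_name, []))
```

Calling `save_table` twice with `append` accumulates rows, while a subsequent `overwrite` call replaces them, which is exactly what the fixed test verifies against a real warehouse.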
<h2>0.4.0</h2>
<!-- raw HTML omitted -->
</blockquote>
<p>... (truncated)</p>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="https://github.com/databrickslabs/lsql/commit/619ff0a6b90742051e56d3d5e6eaf7255cc03834"><code>619ff0a</code></a>
Release v0.5.0 (<a
href="https://github.com/databrickslabs/lsql/issues/207">#207</a>)</li>
<li><a
href="https://github.com/databrickslabs/lsql/commit/4990ce190f89276d00e2af2926d41a671a916616"><code>4990ce1</code></a>
Ensure propagation of <code>lsql</code> version into
<code>User-Agent</code> header when it is used...</li>
<li><a
href="https://github.com/databrickslabs/lsql/commit/56e7f70b14479d7e33e1389ed24d22c92facd6f0"><code>56e7f70</code></a>
Add dashboard-as-code functionality (<a
href="https://github.com/databrickslabs/lsql/issues/201">#201</a>)</li>
<li><a
href="https://github.com/databrickslabs/lsql/commit/f1bbf54ed96afe5b1a40e33fca855d5e78652d4a"><code>f1bbf54</code></a>
Automate opening integration test dashboard in debug mode (<a
href="https://github.com/databrickslabs/lsql/issues/167">#167</a>)</li>
<li><a
href="https://github.com/databrickslabs/lsql/commit/a79d40fa3bc4b5a82e8e51e83a11ab166eb7749f"><code>a79d40f</code></a>
Set widget id in query header (<a
href="https://github.com/databrickslabs/lsql/issues/154">#154</a>)</li>
<li><a
href="https://github.com/databrickslabs/lsql/commit/8824273dfc0ac3b5b93826fd59db55dc4936f496"><code>8824273</code></a>
Bump actions/checkout from 4.1.6 to 4.1.7 (<a
href="https://github.com/databrickslabs/lsql/issues/151">#151</a>)</li>
<li><a
href="https://github.com/databrickslabs/lsql/commit/165594d1405e9c77d0a91f2fe42e22e77175bf60"><code>165594d</code></a>
Use order key in query header if defined (<a
href="https://github.com/databrickslabs/lsql/issues/149">#149</a>)</li>
<li><a
href="https://github.com/databrickslabs/lsql/commit/40e46e1fef311b1930be7dd558da2948d57a744d"><code>40e46e1</code></a>
Use widget width and height defined in query header (<a
href="https://github.com/databrickslabs/lsql/issues/147">#147</a>)</li>
<li><a
href="https://github.com/databrickslabs/lsql/commit/2a9467304e6ab347aadf02c352a67628d025bd49"><code>2a94673</code></a>
Fix parsing message (<a
href="https://github.com/databrickslabs/lsql/issues/146">#146</a>)</li>
<li><a
href="https://github.com/databrickslabs/lsql/commit/1575f9fb1dd409454aafb241c1533b4f15e306a6"><code>1575f9f</code></a>
Read display name from <code>dashboard.yml</code> (<a
href="https://github.com/databrickslabs/lsql/issues/144">#144</a>)</li>
<li>Additional commits viewable in <a
href="https://github.com/databrickslabs/lsql/compare/v0.4.0...v0.5.0">compare
view</a></li>
</ul>
</details>
<br />


Dependabot will resolve any conflicts with this PR as long as you don't
alter it yourself. You can also trigger a rebase manually by commenting
`@dependabot rebase`.

[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)

---

<details>
<summary>Dependabot commands and options</summary>
<br />

You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits
that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after
your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge
and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating
it. You can achieve the same result by closing it manually
- `@dependabot show <dependency name> ignore conditions` will show all
of the ignore conditions of the specified dependency
- `@dependabot ignore this major version` will close this PR and stop
Dependabot creating any more for this major version (unless you reopen
the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop
Dependabot creating any more for this minor version (unless you reopen
the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop
Dependabot creating any more for this dependency (unless you reopen the
PR or upgrade to it yourself)


</details>

Signed-off-by: dependabot[bot] <[email protected]>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
dependabot[bot] authored Jul 4, 2024
1 parent f729f0c commit 6753bbd
Showing 1 changed file with 1 addition and 1 deletion.
2 changes: 1 addition & 1 deletion pyproject.toml
@@ -45,7 +45,7 @@ classifiers = [
]

dependencies = ["databricks-sdk>=0.27,<0.30",
"databricks-labs-lsql~=0.4.0",
"databricks-labs-lsql>=0.4,<0.6",
"databricks-labs-blueprint>=0.6.0",
"PyYAML>=6.0.0,<7.0.0",
"sqlglot>=25.4.1,<25.5",
