
Update databricks-labs-lsql requirement from <0.7,>=0.5 to >=0.5,<0.8 #2189

Merged
merged 1 commit into main from dependabot/pip/databricks-labs-lsql-gte-0.5-and-lt-0.8 on Jul 16, 2024

Conversation

dependabot[bot] (Contributor) commented on behalf of GitHub on Jul 16, 2024

Updates the requirements on databricks-labs-lsql to permit the latest version.
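In practical terms, the widened specifier now admits the 0.7.x line. A quick check with the `packaging` library (a minimal sketch, assuming `packaging` is installed):

```python
from packaging.specifiers import SpecifierSet

old = SpecifierSet(">=0.5,<0.7")
new = SpecifierSet(">=0.5,<0.8")

print("0.7.1" in old)  # False: the old upper bound excluded 0.7.x
print("0.7.1" in new)  # True: the widened range permits the latest release
```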

Release notes

Sourced from databricks-labs-lsql's releases.

v0.7.1

  • Bump sigstore/gh-action-sigstore-python from 2.1.1 to 3.0.0 (#224). In version 3.0.0 of sigstore/gh-action-sigstore-python, several changes, additions, and removals have been implemented. Notably, certain settings such as fulcio-url, rekor-url, ctfe, and rekor-root-pubkey have been removed. Additionally, the output settings signature, certificate, and bundle have also been removed. The inputs are now parsed according to POSIX shell lexing rules for better consistency. The release-signing-artifacts setting no longer causes a hard error when used under the incorrect event. Furthermore, various deprecations present in sigstore-python's 2.x series have been resolved. The default suffix has been changed from .sigstore to .sigstore.json, in line with Sigstore's client specification. The release-signing-artifacts setting now defaults to true. This version also includes several bug fixes and improvements to support CI runners that use PEP 668 to constrain global package prefixes.
  • Use default factory to create Tile._position (#226). In this change, the default value creation for the _position field in various classes including Tile, MarkdownTile, TableTile, and CounterTile has been updated. Previously, a new Position object was explicitly created for the default value. With this update, the default_factory argument of the dataclasses.field function is now used to create a new Position object. This change is made in anticipation of the Python 3.11 release, which modifies the field default mutability check behavior. By utilizing the default_factory approach, we ensure that a new Position object is generated during each instance creation, rather than reusing a single default instance. This guarantees the immutability of default values and aligns with best practices for forward-compatibility with future Python versions. It is important to note that this modification does not affect the functionality of the classes but enhances their initialization process.

Dependency updates:

  • Bump sigstore/gh-action-sigstore-python from 2.1.1 to 3.0.0 (#224).

Contributors: @JCZuurmond, @dependabot[bot], @nfx

Changelog

Sourced from databricks-labs-lsql's changelog.

0.7.1

  • Bump sigstore/gh-action-sigstore-python from 2.1.1 to 3.0.0 (#224). In version 3.0.0 of sigstore/gh-action-sigstore-python, several changes, additions, and removals have been implemented. Notably, certain settings such as fulcio-url, rekor-url, ctfe, and rekor-root-pubkey have been removed. Additionally, the output settings signature, certificate, and bundle have also been removed. The inputs are now parsed according to POSIX shell lexing rules for better consistency. The release-signing-artifacts setting no longer causes a hard error when used under the incorrect event. Furthermore, various deprecations present in sigstore-python's 2.x series have been resolved. The default suffix has been changed from .sigstore to .sigstore.json, in line with Sigstore's client specification. The release-signing-artifacts setting now defaults to true. This version also includes several bug fixes and improvements to support CI runners that use PEP 668 to constrain global package prefixes.
  • Use default factory to create Tile._position (#226). In this change, the default value creation for the _position field in various classes including Tile, MarkdownTile, TableTile, and CounterTile has been updated. Previously, a new Position object was explicitly created for the default value. With this update, the default_factory argument of the dataclasses.field function is now used to create a new Position object. This change is made in anticipation of the Python 3.11 release, which modifies the field default mutability check behavior. By utilizing the default_factory approach, we ensure that a new Position object is generated during each instance creation, rather than reusing a single default instance. This guarantees the immutability of default values and aligns with best practices for forward-compatibility with future Python versions. It is important to note that this modification does not affect the functionality of the classes but enhances their initialization process.
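The `default_factory` change above is the standard dataclasses idiom. A minimal illustration of the pattern (the `Position` fields here are assumptions, not the lsql definitions):

```python
from dataclasses import dataclass, field

@dataclass
class Position:
    x: int = 0
    y: int = 0

@dataclass
class Tile:
    # A fresh Position is built for every instance; a bare `= Position()`
    # default would be rejected as mutable and would otherwise be shared.
    _position: Position = field(default_factory=Position)

a, b = Tile(), Tile()
assert a._position is not b._position  # each tile gets its own Position
```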

Dependency updates:

  • Bump sigstore/gh-action-sigstore-python from 2.1.1 to 3.0.0 (#224).

0.7.0

  • Added databricks labs lsql fmt command (#221). The commit introduces a new command, databricks labs lsql fmt, to the open-source library, which formats SQL files in a given folder using the Databricks SDK. This command can be used without authentication and accepts a folder flag, which specifies the directory containing SQL files to format. The change also updates the labs.yml file and includes a new method, format, in the QueryTile class, which formats SQL queries using the sqlglot library. This commit enhances the functionality of the CLI for SQL file formatting and improves the readability and consistency of SQL files, making it easier for developers to understand and maintain the code. Additionally, the commit includes changes to various SQL files to demonstrate the improved formatting, such as converting SQL keywords to uppercase, adding appropriate spacing around keywords and operators, and aligning column names in the VALUES clause. The purpose of this change is to ensure that the formatting method works correctly and does not introduce any issues in the existing functionality.
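The command delegates formatting to sqlglot; roughly, pretty-printing a query looks like the sketch below (illustrative only, not the exact options `fmt` uses):

```python
import sqlglot

raw = "select a,b, count(*) as cnt from my_table where a>1 group by a,b"
print(sqlglot.transpile(raw, pretty=True)[0])
# SELECT
#   a,
#   b,
#   COUNT(*) AS cnt
# FROM my_table
# WHERE
#   a > 1
# GROUP BY
#   a,
#   b
```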

0.6.0

  • Added method to dashboards to get dashboard url (#211). In this release, we have added a new method get_url to the lakeview_dashboards object in the lakeview dashboards module of the lsql library. This method utilizes the Databricks SDK to retrieve the dashboard URL, simplifying the code and making it more maintainable. Previously, the dashboard URL was constructed by concatenating the host and dashboard ID, but this new method ensures that the URL is obtained correctly, even if the format changes in the future. Additionally, a new unit test has been added for a method that gets the dashboard URL using the workspace client. This new functionality allows users to easily retrieve the URL for a dashboard using its ID and the workspace client.
  • Extend replace database in query (#210). This commit extends the database replacement functionality in the DashboardMetadata class, allowing users to specify which database and catalog to replace. The enhancement includes support for catalog replacement and a new replace_database method in the DashboardMetadata class, which replaces the catalog and/or database in the query based on provided parameters. These changes enhance the flexibility and customization of the database replacement feature in queries, making it easier for users to control how their data is displayed in the dashboard. The create_dashboard function has also been updated to use the new method for replacing the database and catalog. Additionally, the TileMetadata update method has been replaced with a new merge method, and the QueryTile and Tile classes have new properties and methods for handling content, width, height, and position. The commit also includes several unit tests to ensure the new functionality works as expected. A hypothetical sqlglot-based sketch of this replacement appears after this list.
  • Improve object oriented dashboard-as-code implementation (#208). In this release, the object-oriented implementation of the dashboard-as-code feature has been significantly improved, addressing previous pull request comments (#201). The TileMetadata dataclass now includes methods for updating and comparing tile metadata, and the DashboardMetadata class has been removed and its functionality incorporated into the Dashboards class. The Dashboards class now generates tiles, datasets, and layouts for dashboards using the provided query_transformer. The code's readability and maintainability have been further enhanced by replacing the use of the copy module with dataclasses.replace for creating object copies. Additionally, updates have been made to the unit tests for dashboard functionality in the project, with new methods and attributes added to check for valid dashboard metadata and handle duplicate query or widget IDs, as well as to specify the order in which tiles and widgets should be displayed in the dashboard.
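The catalog/database replacement described in the Extend replace database in query bullet can be approximated with sqlglot. A hypothetical sketch, not the library's actual implementation:

```python
import sqlglot
from sqlglot import exp

def replace_database(query: str, catalog: str | None = None, database: str | None = None) -> str:
    tree = sqlglot.parse_one(query)
    for table in tree.find_all(exp.Table):  # rewrite every table reference
        if database:
            table.set("db", exp.to_identifier(database))
        if catalog:
            table.set("catalog", exp.to_identifier(catalog))
    return tree.sql()

print(replace_database("SELECT * FROM hive_metastore.old.events", catalog="main", database="new"))
# SELECT * FROM main.new.events
```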

0.5.0

  • Added Command Execution backend which uses Command Execution API on a cluster (#95). In this release, the databricks-labs-lsql library has been updated with a new Command Execution backend that utilizes the Command Execution API. A new CommandExecutionBackend class has been implemented, which initializes a CommandExecutor instance taking a cluster ID, workspace client, and language as parameters. The execute method runs SQL commands on the specified cluster, and the fetch method returns the query result as an iterator of Row objects. The existing StatementExecutionBackend class has been updated to inherit from a new abstract base class called ExecutionBackend, which includes a save_table method for saving data to tables and is meant to be a common base class for both Statement and Command Execution backends. The StatementExecutionBackend constructor now accepts a max_records_per_batch parameter, and its execute and fetch methods use the new _only_n_bytes method for logging truncated SQL statements. The CommandExecutionBackend class provides execute, fetch, and save_table methods to execute commands on a cluster and save the results to tables in the Databricks workspace.
  • Added basic integration with Lakeview Dashboards (#66). In this release, we've added basic integration with Lakeview Dashboards to the project, enhancing its capabilities. This includes updating the databricks-labs-blueprint dependency to version 0.4.2 with the [yaml] extra, allowing for additional functionality related to handling YAML files. A new file, dashboards.py, has been introduced, providing a class for interacting with Databricks dashboards, along with methods for retrieving and saving dashboard configurations. Additionally, a new __init__.py file under the src/databricks/labs/lsql/lakeview directory imports all classes and functions from the model.py module, providing a foundation for further development and customization. The release also introduces a new file, model.py, containing code generated from OpenAPI specs by the Databricks SDK Generator, and a template file, model.py.tmpl, used for handling JSON data during integration with Lakeview Dashboards. A new file, polymorphism.py, provides utilities for checking if a value can be assigned to a specific type, supporting correct data typing and formatting with Lakeview Dashboards. Furthermore, a .gitignore file has been added to the tests/integration directory as part of the initial steps in adding integration testing to ensure compatibility with the Lakeview Dashboards platform. Lastly, the test_dashboards.py file in the tests/integration directory contains a function, test_load_dashboard(ws), which uses the Dashboards class to save a dashboard from a source to a destination path, facilitating testing during the integration process.
  • Added dashboard-as-code functionality (#201). This commit introduces dashboard-as-code functionality for the UCX project, enabling the creation and management of dashboards using code. The feature resolves multiple issues and includes a new create-dashboard command for creating unpublished dashboards. The functionality is available in the lsql lab and allows for specifying the order and width of widgets, overriding default widget identifiers, and supporting various SQL and markdown header arguments. The dashboard.yml file is used to define top-level metadata for the dashboard. This commit also includes extensive documentation and examples for using the dashboard as a library and configuring different options.
  • Automate opening integration test dashboard in debug mode (#167). A new feature has been added to automatically open the integration test dashboard in debug mode, making it easier for software engineers to debug and troubleshoot. This has been achieved by importing the webbrowser and is_in_debug modules from "databricks.labs.blueprint.entrypoint", and adding a check in the create function to determine if the code is running in debug mode. If it is, a dashboard URL is constructed from the workspace configuration and dashboard ID, and then opened in a web browser using "webbrowser.open". This allows for a more streamlined debugging process for the integration test dashboard. No other parts of the code have been affected by this change.
  • Automatically tile widgets (#109). In this release, we've introduced an automatic widget tiling feature for the dashboard creation process in our open-source library. The Dashboards class now includes a new class variable, _maximum_dashboard_width, set to 6, representing the maximum width allowed for each row of widgets in the dashboard. The create_dashboard method has been updated to accept a new self parameter, turning it into an instance method. A new _get_position method has been introduced to calculate and return the next available position for placing a widget, and a _get_width_and_height method has been added to return the width and height for a widget specification, initially handling CounterSpec instances. Additionally, we've added new unit tests to improve testing coverage, ensuring that widgets are created, positioned, and sized correctly. These tests also cover the correct positioning of widgets based on their order and available space, as well as the expected width and height for each widget. A simplified sketch of the tiling arithmetic appears after this list.
  • Bump actions/checkout from 4.1.3 to 4.1.6 (#102). In the latest release, the 'actions/checkout' GitHub Action has been updated from version 4.1.3 to 4.1.6, which includes checking the platform to set the archive extension appropriately. This release also bumps the version of github/codeql-action from 2 to 3, actions/setup-node from 1 to 4, and actions/upload-artifact from 2 to 4. Additionally, the minor-actions-dependencies group was updated with two new versions. Disabling extensions.worktreeConfig when disabling sparse-checkout was introduced in version 4.1.4. The release notes and changelog for this update can be found in the provided link. This commit was made by dependabot[bot] with contributions from cory-miller and jww3.
  • Bump actions/checkout from 4.1.6 to 4.1.7 (#151). In the latest release, the 'actions/checkout' GitHub action has been updated from version 4.1.6 to 4.1.7 in the project's push workflow, which checks out the repository at the start of the workflow. This change brings potential bug fixes, performance improvements, or new features compared to the previous version. The update only affects the version number in the YAML configuration for the 'actions/checkout' step in the release.yml file, with no new methods or alterations to existing functionality. This update aims to ensure a smooth and enhanced user experience for those utilizing the project's push workflows by taking advantage of the possible improvements or bug fixes in the new version of 'actions/checkout'.
  • Create a dashboard with a counter from a single query (#107). In this release, we have introduced several enhancements to our dashboard-as-code approach, including the creation of a Dashboards class that provides methods for getting, saving, and deploying dashboards. A new method, create_dashboard, has been added to create a dashboard with a single page containing a counter widget. The counter widget is associated with a query that counts the number of rows in a specified dataset. The deploy_dashboard method has also been added to deploy the dashboard to the workspace. Additionally, we have implemented a new feature for creating dashboards with a counter from a single query, including modifications to the test_dashboards.py file and the addition of four new tests. These changes improve the robustness of the dashboard creation process and provide a more automated way to view important metrics.
  • Create text widget from markdown file (#142). A new feature has been implemented in the library that allows for the creation of a text widget from a markdown file, enhancing customization and readability for users. This development resolves issue #1
  • Design document for dashboards-as-code (#105). The latest release introduces 'Dashboards as Code,' a method for defining and managing dashboards through configuration files, enabling version control and controlled changes. The building blocks include .sql, .md, and dashboard.yml files, with .sql defining queries and determining tile order, and dashboard.yml specifying top-level metadata and tile overrides. Metadata can be inferred or explicitly defined in the query or files. The tile order can be determined by SQL file order, tiles order in dashboard.yml, or SQL file metadata. This project can also be used as a library for embedding dashboard generation in your code. Configuration precedence follows command-line flags, SQL file headers, dashboard.yml, and SQL query content. The command-line interface is utilized for dashboard generation from configuration files.
  • Ensure propagation of lsql version into User-Agent header when it is used as library (#206). In this release, the pyproject.toml file has been updated to ensure that the correct version of the lsql library is propagated into the User-Agent header when used as a library, improving attribution. The databricks-sdk version has been updated from 0.22.0 to 0.29.0, and the __init__.py file of the lsql library has been modified to add the with_user_agent_extra function from the databricks.sdk.core package for correct attribution. The backends.py file has also been updated with improved type handling in the _row_to_sql and save_table functions for accurate SQL insertion and handling of user-defined classes. Additionally, a test has been added to ensure that the lsql version is correctly propagated in the User-Agent header when used as a library. These changes offer improved functionality and accurate type handling, making it easier for developers to identify the library version when used in other projects. A minimal sketch of this mechanism appears after this list.
  • Fixed counter encodings (#143). In this release, we have improved the encoding of counters in the lsql dashboard by modifying the create_dashboard function in the dashboards.py file. Previously, the counter field encoding was hardcoded as "count," but has been changed to dynamically determine the first field name of the given fields, ensuring that counters are expected to have only one field. Additionally, a new integration test has been added to the tests/integration/test_dashboards.py file to ensure that the dashboard deployment functionality correctly handles SQL queries that do not perform a count. A new test for the Dashboards class has also been added to check that counter field encoding names are created as expected. The WorkspaceClient is mocked and not called in this test. These changes enhance the accuracy of counter encoding and improve the overall functionality and reliability of the lsql dashboard.
  • Fixed non-existing reference and typo in the documentation (#104). In this release, we've made improvements to the documentation of our open-source library, specifically addressing issue #104. The changes include fixing a non-existent reference and a typo in the Library size comparison section of the "comparison.md" document. This section provides guidance for selecting a library based on factors like library size, unified authentication, and compatibility with various Databricks warehouses and SQL Python APIs. The updates clarify the required dependency size for simple applications and scripts, and offer more detailed information about each library option. We've also added a new subsection titled Detailed comparison to provide a more comprehensive overview of each library's features. These changes are intended to help software engineers better understand which library is best suited for their specific needs, particularly for applications that require data transfer of large amounts of data serialized in Apache Arrow format and low result fetching latency, where we recommend using the Databricks SQL Connector for Python for efficient data transfer and low latency.
  • Fixed parsing message (#146). In this release, the warning message logged during the creation of a dashboard when a ParseError occurs has been updated to provide clearer and more detailed information about the parsing error. The new error message now includes the specific query being parsed and the exact parsing error, enabling developers to quickly identify the cause of parsing issues. This change ensures that engineers can efficiently diagnose and address parsing errors, improving the overall development and debugging experience with a more informative log format: "Parsing {query}: {error}".
  • Improve dashboard as code (#108). The Dashboards class in the 'dashboards.py' file has been updated to improve functionality and usability, with changes such as the addition of a type variable T for type checking and more descriptive names for methods. The save_to_folder method now accepts a Dashboard object and returns a Dashboard object, and a new static method create_dashboard has been added. Additionally, two new methods _with_better_names and _replace_names have been added for improved readability. The get_dashboard method now returns a Dashboard object instead of a dictionary. The save_to_folder method now also formats SQL code before saving it to file. These changes aim to enhance the functionality and readability of the codebase and provide more user-friendly methods for interacting with the Dashboards class. In addition to the changes in the Dashboards class, there have been updates in the organization of the project structure. The 'queries/counter.sql' file has been moved to 'dashboards/one_counter/counter.sql' in the 'tests/integration' directory. This modification enhances the organization of the project. Furthermore, several tests for the Dashboards class have been introduced in the 'databricks.labs.lsql.dashboards' module, demonstrating various functionalities of the class and ensuring that it functions as intended. The tests cover saving SQL and YML files to a specified folder, creating a dataset and a counter widget for each query, deploying dashboards with a given display name or dashboard ID, and testing the behavior of the save_to_folder and deploy_dashboard methods. Lastly, the commit removes the test_load_dashboard function and updates the test_dashboard_creates_one_dataset_per_query and test_dashboard_creates_one_counter_widget_per_query functions to use the updated Dashboard class. A new replace_recursively function is introduced to replace specific fields in a dataclass recursively. A new test function test_dashboards_deploys_exported_dashboard_definition has been added, which reads a dashboard definition from a JSON file, deploys it, and checks if it's successfully deployed using the Dashboards class. A new test function test_dashboard_deploys_dashboard_the_same_as_created_dashboard has also been added, which compares the original and deployed dashboards to ensure they are identical. Overall, these changes aim to improve the functionality and readability of the codebase and provide more user-friendly methods for interacting with the Dashboards class, as well as enhance the organization of the project structure and add new tests for the Dashboards class to ensure it functions as intended.
  • Infer fields from a query (#111). The Dashboards class in the dashboards.py file has been updated with the addition of a new method, _get_fields, which accepts a SQL query as input and returns a list of Field objects using the sqlglot library to parse the query and extract the necessary information. The create_dashboard method has been modified to call this new function when creating Query objects for each dataset. If a ParseError occurs, a warning is logged and iteration continues. This allows for the automatic population of fields when creating a new dashboard, eliminating the need for manual specification. Additionally, new tests have been added for invalid queries and for checking if the fields in a query have the expected names. These tests include test_dashboards_skips_invalid_query and test_dashboards_gets_fields_with_expected_names, which utilize the caplog fixture and create temporary query files to verify functionality. Existing functionality related to creating dashboards remains unchanged. A minimal sketch of this field inference appears after this list.
  • Make constant all caps (#140). In this release, the project's 'dashboards.py' file has been updated to improve code readability and maintainability. A constant variable _maximum_dashboard_width has been changed to all caps, becoming '_MAXIMUM_DASHBOARD_WIDTH'. This modification affects the Dashboards class and its methods, particularly _get_fields and '_get_position'. The _get_position method has been revised to use the new all caps constant variable. This change ensures better visibility of constants within the code, addressing issue #140. It's important to note that this modification only impacts the 'dashboards.py' file and does not affect any other functionalities.
  • Read display name from dashboard.yml (#144). In this release, we have introduced a new DashboardMetadata dataclass that reads the display name of a dashboard from a dashboard.yml file located in the dashboard's directory. If the dashboard.yml file is absent, the folder name will be used as the display name. This change improves the readability and maintainability of the dashboard configuration by explicitly defining the display name and reducing the need to specify widget information in multiple places. We have also added a new fixture called make_dashboard for creating and cleaning up lakeview dashboards in the test suite. The fixture handles creation and deletion of the dashboard and provides an option to set a custom display name. Additionally, we have added and modified several unit tests to ensure the proper handling of the DashboardMetadata class and the dashboard creation process, including tests for missing, present, or incorrect display_name keys in the YAML file. The dashboards.deploy_dashboard() function has been updated to handle cases where only dashboard_id is provided. A hedged sketch of the display-name fallback appears after this list.
  • Set widget id in query header (#154). In this release, we've made significant improvements to widget metadata handling in our open-source library. We've introduced a new WidgetMetadata class that replaces the previous WidgetMetadata dataclass, now featuring a path attribute, spec_type property, and optional parameters for order, width, height, and _id. The _get_widgets method has been updated to accept an Iterable of WidgetMetadata objects, and both _get_layouts and _get_widgets methods now sort widgets using the order field. A new class method, WidgetMetadata.from_path, handles parsing widget metadata from a file path, replacing the removed _get_width_and_height method. Additionally, the WidgetMetadata class is now used in the deploy_dashboard method, and the test suite for the dashboards module has been enhanced with updated test_widget_metadata_replaces_width_and_height and test_widget_metadata_replaces_attribute functions, as well as new tests for specific scenarios. Issue #154 has been addressed by setting the widget id in the query header, and the aforementioned changes improve flexibility and ease of use for dashboard development.
  • Use order key in query header if defined (#149). In this release, we've introduced a new feature to use an order key in the query header if defined, enhancing the flexibility and control over the dashboard creation process. The WidgetMetadata dataclass now includes an optional order parameter of type int, and the _get_arguments_parser() method accepts the --order flag with type int. The replace_from_arguments() method has been updated to support the new order parameter, with a default value of self.order. The create_dashboard() method now implements a new _get_datasets() method to retrieve datasets from the dashboard folder and introduces a _get_widgets() method, which accepts a list of files, iterates over them, and yields tuples containing widgets and their corresponding metadata, including the order. These improvements enable the use of an order key in query headers, ensuring the correct order of widgets in the dashboard creation process. Additionally, a new test case has been added to verify the correct behavior of the dashboard deployment with a specified order key in the query header. This feature resolves issue #148.
  • Use widget width and height defined in query header (#147). In this release, the handling of metadata in SQL files has been updated to utilize the header of the file, instead of the first line, for improved readability and flexibility. This change includes a new WidgetMetadata class for defining the width and height of a widget in a dashboard, as well as new methods for parsing the widget metadata from a provided path. The release also includes updates to the documentation to cover the supported widget arguments -w or --width and -h or --height, and resolves issue #114 by adding a test for deploying a dashboard with a big widget using a new function test_dashboard_deploys_dashboard_with_big_widget. Additionally, new test cases have been added for creating dashboards with custom-sized widgets based on query header width and height values, improving functionality and error handling.
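The row-wrapping arithmetic behind Automatically tile widgets can be sketched as follows (a simplified, hypothetical rendering; only the width limit of 6 comes from the notes above):

```python
MAXIMUM_DASHBOARD_WIDTH = 6

def get_positions(widths_and_heights):
    """Yield (x, y, width, height) per widget, wrapping rows at the maximum width."""
    x, y, row_height = 0, 0, 0
    for width, height in widths_and_heights:
        if x + width > MAXIMUM_DASHBOARD_WIDTH:  # no room left in this row: wrap
            x, y = 0, y + row_height
            row_height = 0
        yield x, y, width, height
        x += width
        row_height = max(row_height, height)

# Three half-width counters followed by a full-width table:
print(list(get_positions([(3, 2), (3, 2), (3, 2), (6, 4)])))
# [(0, 0, 3, 2), (3, 0, 3, 2), (0, 2, 3, 2), (0, 4, 6, 4)]
```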
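For Ensure propagation of lsql version into User-Agent header, the mechanism named above is with_user_agent_extra from the Databricks SDK; a minimal sketch:

```python
from databricks.sdk.core import with_user_agent_extra

# Register an extra product/version pair once (e.g. at import time) so
# subsequent API calls carry it in the User-Agent header for attribution.
with_user_agent_extra("lsql", "0.7.1")
```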
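For Infer fields from a query, sqlglot can recover the output column names directly from the parsed expression. A minimal sketch, with error handling mirroring the ParseError behavior described above:

```python
import sqlglot
from sqlglot.errors import ParseError

def get_field_names(query: str) -> list[str]:
    try:
        expression = sqlglot.parse_one(query)
    except ParseError as e:
        print(f"Parsing {query}: {e}")  # matches the warning format noted above
        return []
    return expression.named_selects  # output column names and aliases

print(get_field_names("SELECT id, count(*) AS cnt FROM events GROUP BY id"))
# ['id', 'cnt']
```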
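And the display-name fallback from Read display name from dashboard.yml can be sketched as follows (a hypothetical reading of the behavior described above, using PyYAML):

```python
from pathlib import Path
import yaml

def get_display_name(dashboard_folder: Path) -> str:
    """Read display_name from dashboard.yml, falling back to the folder name."""
    config = dashboard_folder / "dashboard.yml"
    if config.exists():
        data = yaml.safe_load(config.read_text()) or {}
        if isinstance(data, dict) and data.get("display_name"):
            return data["display_name"]
    return dashboard_folder.name  # no (valid) dashboard.yml: use the folder name
```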

Dependency updates:

  • Bump actions/checkout from 4.1.3 to 4.1.6 (#102).
  • Bump actions/checkout from 4.1.6 to 4.1.7 (#151).

... (truncated)

Commits

Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting @dependabot rebase.


Dependabot commands and options

You can trigger Dependabot actions by commenting on this PR:

  • @dependabot rebase will rebase this PR
  • @dependabot recreate will recreate this PR, overwriting any edits that have been made to it
  • @dependabot merge will merge this PR after your CI passes on it
  • @dependabot squash and merge will squash and merge this PR after your CI passes on it
  • @dependabot cancel merge will cancel a previously requested merge and block automerging
  • @dependabot reopen will reopen this PR if it is closed
  • @dependabot close will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
  • @dependabot show <dependency name> ignore conditions will show all of the ignore conditions of the specified dependency
  • @dependabot ignore this major version will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
  • @dependabot ignore this minor version will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
  • @dependabot ignore this dependency will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)

Updates the requirements on [databricks-labs-lsql](https://github.com/databrickslabs/lsql) to permit the latest version.
- [Release notes](https://github.com/databrickslabs/lsql/releases)
- [Changelog](https://github.com/databrickslabs/lsql/blob/main/CHANGELOG.md)
- [Commits](databrickslabs/lsql@v0.5.0...v0.7.1)

---
updated-dependencies:
- dependency-name: databricks-labs-lsql
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <[email protected]>
@dependabot dependabot bot requested review from a team and larsgeorge-db July 16, 2024 14:02
@dependabot dependabot bot added the dependencies and python (Pull requests that update Python code) labels Jul 16, 2024
@JCZuurmond JCZuurmond enabled auto-merge July 16, 2024 14:02
@nfx nfx disabled auto-merge July 16, 2024 14:10
@nfx nfx merged commit 9135a87 into main Jul 16, 2024
3 of 4 checks passed
@nfx nfx deleted the dependabot/pip/databricks-labs-lsql-gte-0.5-and-lt-0.8 branch July 16, 2024 14:10
nfx added a commit that referenced this pull request Jul 19, 2024
* Added `lsql` lakeview dashboard-as-code implementation ([#1920](#1920)). The open-source library has been updated with new features in its dashboard creation functionality. The `assessment_report` and `estimates_report` jobs, along with their corresponding tasks, have been removed. The `crawl_groups` task has been modified to accept a new parameter, `group_manager`. These changes are part of a larger implementation of the `lsql` Lakeview dashboard-as-code system for creating dashboards. The new implementation has been tested through manual testing, existing unit tests, integration tests, and verification on a staging environment, and is expected to improve the functionality and maintainability of the dashboards. The removal of the `assessment_report` and `estimates_report` jobs and tasks may indicate that their functionality has been incorporated into the new `lsql` implementation or is no longer necessary. The new `crawl_groups` task parameter may be used in conjunction with the new `lsql` implementation to enhance the assessment and estimation of groups.
* Added new widget to get table count ([#2202](#2202)). A new widget has been introduced that presents a table count summary, categorized by type (external or managed), location (DBFS root, mount, cloud), and format (delta, parquet, etc.). This enhancement is complemented by an additional SQL file, responsible for generating necessary count statistics. The script discerns the table type and location through location string analysis and subsequent categorization. The output is structured and ordered by table type. It's important to note that no existing functionality has been altered, and the new feature is self-contained within the added SQL file. To ensure the correct functioning of this addition, relevant documentation and manual tests have been incorporated.
* Added support for DBFS when building the dependency graph for tasks ([#2199](#2199)). In this update, we have added support for the Databricks File System (DBFS) when building the dependency graph for tasks during workflow assessment. This enhancement allows for the use of wheels, eggs, requirements.txt files, and PySpark jobs located in DBFS when assessing workflows. The `DependencyGraph` object's `register_library` method has been updated to handle paths in both Workspace and DBFS formats. Additionally, we have introduced the `_as_path` method and the `_temporary_copy` context manager to manage file copying and path determination. This development resolves issue [#1558](#1558) and includes modifications to the existing `assessment` workflow and new unit tests.
* Applied `databricks labs lsql fmt` for SQL files ([#2184](#2184)). The engineering team has developed and applied formatting to several SQL files using the `databricks labs lsql fmt` tool from various pull requests, including <databrickslabs/lsql#221>. These changes improve code readability and consistency without affecting functionality. The formatting includes adding comment delimiters, converting subqueries to nested SELECT statements, renaming columns for clarity, updating comments, modifying conditional statements, and improving indentation. The impacted SQL files include queries related to data migration complexity, assessing data modeling complexity, generating table estimates, and calculating data migration effort. Manual testing has been performed to ensure that the update does not introduce any issues in the installed dashboards.
* Bump sigstore/gh-action-sigstore-python from 2.1.1 to 3.0.0 ([#2182](#2182)). In this release, the version of `sigstore/gh-action-sigstore-python` is bumped to 3.0.0 from 2.1.1 in the project's GitHub Actions workflow. This new version brings several changes, additions, and removals, such as the removal of certain settings like `fulcio-url`, `rekor-url`, `ctfe`, and `rekor-root-pubkey`, and output settings like `signature`, `certificate`, and `bundle`. The `inputs` field is now parsed according to POSIX shell lexing rules and is optional if `release-signing-artifacts` is true and the action's event is a `release` event. The default suffix has changed from `.sigstore` to `.sigstore.json`. Additionally, various deprecations present in `sigstore-python`'s 2.x series have been resolved. This PR also includes several commits, including preparing for version 3.0.0, cleaning up workflows, and removing old output settings. There are no conflicts with this PR, and Dependabot will resolve them automatically. Users can trigger Dependabot actions by commenting on this PR with specific commands.
* Consistently cleanup linter codes ([#2194](#2194)). This commit introduces changes to the linting functionality of PySpark, focusing on enhancing code consistency and accuracy. New checks have been added for detecting code incompatibilities with UC Shared Clusters, targeting Python UDF unsupported eval types, spark.catalog.X APIs on DBR versions earlier than 14.3, and the use of commandContext. A new file, python-udfs_14_3.py, containing tests for these incompatibilities has been added. The commit also resolves false linting advice for homonymous method names and updates the code for static analysis message codes, improving self-documentation and maintainability. These changes are limited to the linting functionality of PySpark and do not affect any other functionalities. Co-authored by Eric Vergnaud and Serge Smertin.
* Disable the builtin pip version check when running pip commands ([#2214](#2214)). In this release, we have introduced a modification to disable the built-in pip version check when using pip to install dependencies. This change involves altering the existing workflow of the `_install_pip` method to include the `--disable-pip-version-check` flag in the pip install command, reducing noise in pip-related errors and messages, and enhancing user experience. We have conducted manual and unit testing to ensure that the changes do not introduce any regressions and that existing functionalities remain unaffected. The error message has been updated to reflect the new pip behavior, including the `--disable-pip-version-check` flag in the message. Overall, these changes improve the user experience by reducing unnecessary error messages and providing clearer error information. A minimal sketch of the resulting pip invocation appears after this list.
* Document `principal-prefix-access` for azure will only list abfss storage accounts ([#2212](#2212)). In this release, we have updated the documentation for the `principal-prefix-access` CLI command in the context of Azure. This command now exclusively lists Azure Storage Blob Gen2 accounts and disregards unsupported storage formats such as wasb:// or adl://. This change is significant as these unsupported storage formats are not compatible with Unity Catalog (UC) and will be disregarded during the migration process. This update clarifies the behavior of the command, ensuring that only relevant storage accounts are displayed. This modification is crucial for users who are migrating credentials to UC, as it prevents the incorporation of unsupported storage accounts, resulting in a more streamlined and efficient migration process.
* Group migration: change error logging format ([#2215](#2215)). In this release, we have updated the error logging format for failed permissions migrations during the experimental group migration workflow to enhance readability and debugging capabilities. Previously, the logs only stated that a migration failure occurred without further details. Now, the new format includes both the source and destination account names, as well as a description of the simulated failure during the migration process. This improves the transparency and usefulness of the error logs for debugging and troubleshooting purposes. Additionally, we have added unit tests to ensure the proper logging of failed migrations, ensuring the reliability of the group migration process for our users. This update demonstrates our commitment to providing clear and informative error messages to make the software engineering experience better.
* Improve error handling as already exists error occurs ([#2077](#2077)). The recent change enhances error handling for the `create-catalogs-schemas` CLI command, addressing an issue where the command would fail if the catalog or schema already existed. The modification involves the introduction of the `_get_missing_catalogs_schemas` method to avoid recreating existing ones. The `create_all_catalogs_schemas` method has been updated to include try-except blocks for `_create_catalog_validate` and `_create_schema` methods, skipping creation if a `BadRequest` error occurs with the message "already exists." This ensures that no overwriting of existing catalogs and schemas takes place. A new test case, "test_create_catalogs_schemas_handles_existing," has been added to verify the command's handling of existing catalogs and schemas. This change resolves issue [#1939](#1939) and is manually tested; no new methods were added, and existing functionality was changed only within the test file. A hedged sketch of this handling appears after this list.
* Support run assessment as a collection ([#1925](#1925)). This commit introduces the capability to run eligible CLI commands as a collection, with an initial implementation for the assessment run command. A new parameter `collection_workspace_id` has been added to determine whether the current installation workflow is run or if an account context is created to iterate through all workspaces of the specified collection and run the assessment workflow. The `join_collection` method has been updated to accept a list of workspace IDs and a boolean value. Unit tests have been added and existing tests have been updated to ensure proper functionality. The `databricks labs ucx` command has also been modified to support this feature, with the `join_collection` method syncing workspaces in the collection when the `sync` flag is set to True.
* Test UCX over Python v3.10, v3.11, and v3.12 ([#2195](#2195)). In this release, we introduce significant enhancements to our GitHub Actions CI workflow, enabling more comprehensive testing of UCX over Python versions 3.10, 3.11, and 3.12. We've implemented a new matrix strategy in the `push.yml` workflow file, dynamically setting the `python-version` using the `${{ matrix.pyVersion }}` variable. This allows developers to test UCX with specific Python versions by setting the `HATCH_PYTHON` variable. Additionally, we've updated the `pyproject.toml` file, removing the Python 3.10 requirement and improving virtual environment integration with popular IDEs. The `test_migrator_supported_language_with_fixer` function in `test_files.py` has been refactored for a more efficient 'migrator.apply' method test using temporary directories and files. This release aims to ensure compatibility, identify version-specific issues, and improve the user experience for developers.
* Updated databricks-labs-blueprint requirement from ~=0.7.0 to >=0.7,<0.9 ([#2191](#2191)). In this pull request, the `databricks-labs-blueprint` package requirement has been updated from version `~=0.7.0` to `>=0.7,<0.9`. This update ensures compatibility with the project's requirements while allowing the use of the latest version of the package. The pull request also includes release notes and changelog information from the `databrickslabs/blueprint` repository, detailing various improvements and bug fixes, such as support for Python 3.12, type annotations for path-related unit tests, and fixes for the `WorkspacePath` class. A list of commits and their corresponding hashes is provided for engineers to review the changes made in the update and ensure compatibility with their projects.
* Updated databricks-labs-lsql requirement from <0.7,>=0.5 to >=0.5,<0.8 ([#2189](#2189)). In this update, the version requirement of the `databricks-labs-lsql` dependency has been updated from `<0.7,>=0.5` to `>=0.5,<0.8`. This change allows for the use of the latest version of the `databricks-labs-lsql` package while ensuring compatibility with the current system. Additionally, this commit includes the release notes, changelog, and commit details from the `databricks-labs-lsql` repository for version 0.7.1. These documents provide information on various bug fixes, improvements, and changes, such as updating the `sigstore/gh-action-sigstore-python` package from 2.1.1 to 3.0.0, using a default factory to create `Tile._position`, and other enhancements. The changelog includes detailed information about releases and features, while the commit details highlight the changes and contributors for each individual commit.
* Updated sqlglot requirement from <25.6,>=25.5.0 to >=25.5.0,<25.7 ([#2211](#2211)). In this update, we have revised the requirement range for the `sqlglot` library to '>=25.5.0,<25.7' from '<25.6,>=25.5.0'. This modification allows us to utilize the latest version of sqlglot, which is v25.6.0, while ensuring that the version does not surpass 25.7. This change is part of issue [#2211](#2211), and the new version includes several enhancements such as support for ORDER BY ALL, FROM ROWS FROM (...) in PostgreSQL, and exp.TimestampAdd in Presto and Trino. Furthermore, the update encompasses modifications to the bigquery, clickhouse, and duckdb dialects, as well as several bug fixes. These improvements are aimed at increasing functionality, stability, and addressing issues in the library.
* Yield `DependencyProblem` if job on runtime DBR14+ and using .egg dependency ([#2020](#2020)). In this release, we have introduced a new method, `_register_egg`, to handle the registration of libraries in .egg format in the `build_dependency_graph` method. This method checks the runtime version of Databricks. If the version is DBR14 or higher, it yields `DependencyProblem` with code 'not-supported', indicating that installing eggs is no longer supported in Databricks 14.0 or higher. For lower runtime versions, the method downloads the .egg file from the workspace, writes it to a temporary directory, and then registers the library with the `DependencyGraph`. The existing functionality, such as registering libraries in .whl format and registering notebooks, remains unchanged. This release also includes a new test case, `test_job_dependency_problem_egg_dbr14plus`, which creates a job with an .egg dependency and verifies that the expected `DependencyProblem` is raised when using .egg dependencies in a job on Databricks Runtime (DBR) version 14 or higher. This change addresses issue [#1793](#1793) and improves dependency management, making it easier for software engineers to adopt and work seamlessly with the project.
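The pip invocation from the Disable the builtin pip version check bullet amounts to adding one flag. A minimal sketch (the helper name is hypothetical):

```python
import subprocess
import sys

def install_pip_packages(*packages: str) -> None:
    # --disable-pip-version-check suppresses pip's "new version available"
    # self-check, which otherwise adds noise to errors and logs.
    subprocess.run(
        [sys.executable, "-m", "pip", "install", "--disable-pip-version-check", *packages],
        check=True,
    )
```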
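And the already-exists handling from Improve error handling as already exists error occurs follows a common pattern against the Databricks SDK. A hedged sketch; the exact UCX method names may differ:

```python
from databricks.sdk import WorkspaceClient
from databricks.sdk.errors import BadRequest

def create_catalog_if_missing(ws: WorkspaceClient, name: str) -> None:
    try:
        ws.catalogs.create(name)
    except BadRequest as e:
        if "already exists" not in str(e):
            raise  # only the already-exists case is expected and skipped
```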

Dependency updates:

 * Bump sigstore/gh-action-sigstore-python from 2.1.1 to 3.0.0 ([#2182](#2182)).
 * Updated databricks-labs-lsql requirement from <0.7,>=0.5 to >=0.5,<0.8 ([#2189](#2189)).
 * Updated databricks-labs-blueprint requirement from ~=0.7.0 to >=0.7,<0.9 ([#2191](#2191)).
 * Updated sqlglot requirement from <25.6,>=25.5.0 to >=25.5.0,<25.7 ([#2211](#2211)).
@nfx nfx mentioned this pull request Jul 19, 2024