v0.1.1
- Added batched iteration for `INSERT INTO` queries in `StatementExecutionBackend` with default `max_records_per_batch=1000` (#237); see the sketch after this list.
- Added crawler for mount points (#209).
- Added crawlers for compatibility of jobs and clusters, along with basic recommendations for external locations (#244).
- Added safe return on grants (#246).
- Added ability to specify empty group filter in the installer script (#216) (#217).
- Added ability for multiple different users to install the application on the same workspace (#235).
- Added dashboard creation on installation and a requirement for `warehouse_id` in config, so that the assessment dashboards are refreshed automatically after job runs (#214).
- Added reliance on rate limiting from Databricks SDK for listing workspace (#258).
- Fixed corner-case errors when Azure Service Principal credentials were not available in the Spark context (#254).
- Fixed `DESCRIBE TABLE` throwing errors when listing Legacy Table ACLs (#238).
- Fixed `file already exists` error in the installer script (#219) (#222).
- Fixed `guess_external_locations` failure with `AttributeError: as_dict` and added an integration test (#259).
- Fixed error handling edge cases in `crawl_tables` task (#243) (#251).
- Fixed `crawl_permissions` task failure on folder names containing a forward slash (#234).
- Improved `README` notebook documentation (#260, #228, #252, #223, #225).
- Removed redundant `.python-version` file (#221).
- Removed discovery of account groups from `crawl_permissions` task (#240).
- Updated databricks-sdk requirement from ~=0.8.0 to ~=0.9.0 (#245).
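The batched iteration added in #237 caps how many records a single `INSERT INTO` statement carries. A minimal sketch of that pattern, assuming a hypothetical `insert_batches` helper; the names and naive literal rendering here are illustrative, not the actual `StatementExecutionBackend` implementation:

```python
from typing import Iterable, Iterator


def insert_batches(
    full_table_name: str,
    rows: Iterable[tuple],
    max_records_per_batch: int = 1000,  # default mirrored from the changelog entry
) -> Iterator[str]:
    """Yield one INSERT INTO statement per batch of rows.

    Sketch only: production code must escape values or use parameter
    markers instead of repr()-based rendering.
    """
    batch: list[str] = []
    for row in rows:
        rendered = ", ".join(repr(v) for v in row)
        batch.append(f"({rendered})")
        if len(batch) >= max_records_per_batch:
            yield f"INSERT INTO {full_table_name} VALUES {', '.join(batch)}"
            batch = []
    if batch:  # flush the final, possibly smaller batch
        yield f"INSERT INTO {full_table_name} VALUES {', '.join(batch)}"


# Usage: with max_records_per_batch=1, each row becomes its own statement.
for sql in insert_batches("hive_metastore.ucx.tables", [("a", 1), ("b", 2)], max_records_per_batch=1):
    print(sql)
```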
Kudos to @larsgeorge-db @william-conti @dmoore247 @tamilselvanveeramani @nfx @FastLee