Initial read-only Snowflake Catalog implementation by @sfc-gh-mparmar #1
Conversation
snowflake/src/main/java/org/apache/iceberg/snowflake/SnowflakeCatalog.java
this.catalogName = name;
}

LOG.debug("Connecting to JDBC database {}", properties.get(CatalogProperties.URI));
Need to check on best practices for scrubbing sensitive secrets from log statements in case the URI is used with inline user/password strings.
Where do these logs go? Should we even be logging?
Removed
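For reference, a minimal sketch of what scrubbing could look like if the log line were kept, assuming credentials might appear inline as user:password or as a password= parameter (the helper class and regexes are illustrative, not from this PR):

```java
class UriRedactor {
  /** Masks inline credentials in a JDBC URI before it is logged. Illustrative only. */
  static String redact(String jdbcUri) {
    if (jdbcUri == null) {
      return null;
    }
    return jdbcUri
        .replaceAll("//[^/@]+@", "//****@")             // jdbc:snowflake://user:pw@host -> //****@host
        .replaceAll("(?i)(password=)[^&;]*", "$1****"); // ...password=secret -> password=****
  }
}

// Usage: LOG.debug("Connecting to JDBC database {}", UriRedactor.redact(properties.get(CatalogProperties.URI)));
```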
snowflake/src/main/java/org/apache/iceberg/snowflake/SnowflakeQueryFactory.java
}

public static String getMetadataLocationFromJson(String json) {
  return JsonUtil.parse(json, SnowflakeTableMetadata::getMetadataLocationFromJson);
Instead of parsing separately each time we should probably just parse it initially and fetch fields from a JsonNode object consistently.
Fixed
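For context, the single-parse approach looks roughly like this with Jackson: parse once, then read every field from the same JsonNode rather than re-parsing per field (the field names below are illustrative):

```java
import com.fasterxml.jackson.databind.JsonNode;
import com.fasterxml.jackson.databind.ObjectMapper;

class MetadataJsonSketch {
  private static final ObjectMapper MAPPER = new ObjectMapper();

  static void readFields(String json) throws java.io.IOException {
    // Parse once, then fetch each field from the resulting JsonNode instead of re-parsing.
    JsonNode node = MAPPER.readTree(json);
    String metadataLocation = node.path("metadataLocation").asText(null); // illustrative field name
    String status = node.path("status").asText(null);                     // illustrative field name
    System.out.printf("location=%s status=%s%n", metadataLocation, status);
  }
}
```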
import org.apache.iceberg.snowflake.entities.SnowflakeSchema;
import org.apache.iceberg.snowflake.entities.SnowflakeTable;
import org.apache.iceberg.snowflake.entities.SnowflakeTableMetadata;
Need javadoc comments on all the classes.
Added for classes that can benefit from explanation
import org.apache.iceberg.snowflake.entities.SnowflakeTableMetadata;

public interface QueryFactory extends Closeable {
  List<SnowflakeSchema> listSchemas(Namespace namespace);
If we're going to use this as an abstraction layer that lets us easily swap the bottom connectivity layer out for Snowflake APIs, we should make the input argument also use Snowflake concepts. That way the business logic of translating an Iceberg Namespace into a Snowflake Database/Schema stays common across implementations, while the way a list of tables is fetched for a given db.schema can vary with the underlying implementation.
Since there's actually a fair amount of JDBC-specific logic in translating the namespace into the right predicate or identifier in the SQL statement, I'll wait until we add another lower-level client impl before extracting the namespace conversion, and for now just focus on refactoring within JdbcSnowflakeClient to reduce duplication of boilerplate.
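A hedged sketch of the shared translation being discussed here, keeping the Namespace-to-database/schema mapping out of the individual client implementations (the class and method names are illustrative):

```java
import org.apache.iceberg.catalog.Namespace;
import org.apache.iceberg.relocated.com.google.common.base.Preconditions;

class NamespaceTranslation {
  /** Converts an Iceberg Namespace into a Snowflake DB or DB.SCHEMA scope shared by all clients. */
  static String toSnowflakeScope(Namespace namespace) {
    Preconditions.checkArgument(
        namespace.length() <= 2, "Snowflake supports at most database.schema namespaces");
    if (namespace.isEmpty()) {
      return null; // account-level scope
    } else if (namespace.length() == 1) {
      return namespace.level(0); // database only
    }
    return namespace.level(0) + "." + namespace.level(1); // database.schema
  }
}
```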
        conn -> run.query(conn, finalQuery, SnowflakeTable.createHandler())));
}

// In case of DB level namespace, filter the results to given namespace
It's unclear why this additional filter step is needed.
It is unnecessary now. In my earlier implementation, I was listing tables at account level and then filtering out the ones not part of the specified namespace. Also need to modify the comment in line 129 to reflect it.
Ah makes sense, thanks for the context. I can clean it up.
import org.apache.iceberg.snowflake.entities.SnowflakeTable;
import org.apache.iceberg.snowflake.entities.SnowflakeTableMetadata;

public interface QueryFactory extends Closeable {
Naming-wise, I'd normally expect a QueryFactory to produce queries rather than execute queries. Maybe better to name this class SnowflakeClient or something like that if the intent is to wrap JDBC and/or other API clients under a more constrained/usable interface.
Done
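For context, the renamed interface ends up looking roughly like this; the method set shown is based on the operations visible in this PR and may not match the final signatures exactly:

```java
import java.io.Closeable;
import java.util.List;
import org.apache.iceberg.catalog.Namespace;
import org.apache.iceberg.catalog.TableIdentifier;
import org.apache.iceberg.snowflake.entities.SnowflakeSchema;
import org.apache.iceberg.snowflake.entities.SnowflakeTable;
import org.apache.iceberg.snowflake.entities.SnowflakeTableMetadata;

/** Sketch: wraps JDBC (or, later, other transports) behind one constrained read-only surface. */
public interface SnowflakeClient extends Closeable {
  List<SnowflakeSchema> listSchemas(Namespace namespace);

  List<SnowflakeTable> listTables(Namespace namespace);

  SnowflakeTableMetadata getTableMetadata(TableIdentifier tableIdentifier); // illustrative method
}
```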
public List<TableIdentifier> listTables(Namespace namespace) {
  Preconditions.checkArgument(
      namespace.length() <= SnowflakeResources.MAX_NAMESPACE_DEPTH,
      "Snowflake doesn't supports more than 2 levels of namespace");
"Snowflake doesn't support more than {} namespace levels", SnowflakeResources.MAX_NAMESPACE_DEPTH
Done
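For reference, Guava's Preconditions takes %s-style placeholders rather than SLF4J's {}, so the applied fix presumably looks something like this (exact wording may differ):

```java
Preconditions.checkArgument(
    namespace.length() <= SnowflakeResources.MAX_NAMESPACE_DEPTH,
    "Snowflake doesn't support more than %s namespace levels, got %s",
    SnowflakeResources.MAX_NAMESPACE_DEPTH,
    namespace.length());
```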
}

@Override
public void createNamespace(Namespace namespace, Map<String, String> metadata) {}
Should this throw?
Fixed; added throw new UnsupportedOperationException (which is in line with the base catalog impl) here and in the other unimplemented methods.
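A minimal sketch of that shape, reusing the signature from the diff above (the exception message is illustrative):

```java
@Override
public void createNamespace(Namespace namespace, Map<String, String> metadata) {
  throw new UnsupportedOperationException(
      "SnowflakeCatalog does not support createNamespace");
}
```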
public List<Namespace> listNamespaces(Namespace namespace) {
  Preconditions.checkArgument(
      namespace.length() <= SnowflakeResources.MAX_NAMESPACE_DEPTH,
      "Snowflake doesn't supports more than 2 levels of namespace");
Please update message per the comment above.
Done
        .map(schema -> Namespace.of(schema.getDatabase(), schema.getName()))
        .collect(Collectors.toList());
} catch (UncheckedSQLException | UncheckedInterruptedException ex) {
  LOG.error("{}", ex.getMessage(), ex);
What unchecked exceptions do we think we're going to get here? Do we need to catch these? What happens to the exception? We don't end up showing anything to the customer in this case?
Removed this try/catch; at this level in the code, unchecked exceptions should probably just propagate all the way up.
Agree, thanks.
} catch (SQLException e) {
  LOG.error("Failed to list tables for namespace {}", namespace, e);
  throw new UncheckedSQLException(
Why catch and rethrow these as unchecked exceptions?
Did some digging; unfortunately, heavy reliance on RuntimeExceptions is ingrained throughout the codebase. Since the interfaces don't declare any particular checked exceptions, it's kind of the wild west.
I see this exact pattern of UncheckedSQLException throwing from the adjacent JdbcCatalog: https://github.com/apache/iceberg/blob/0694269cd20a3625d5587d544bec621e9bec305a/core/src/main/java/org/apache/iceberg/jdbc/JdbcCatalog.java#L483
} catch (SQLException e) {
sqlErrorHandler.accept(e);
throw new UncheckedSQLException(e, "Failed to execute: %s", sql);
} catch (InterruptedException e) {
throw new UncheckedInterruptedException(e, "Interrupted in SQL command");
}
I was wondering how GlueCatalog manages to look relatively clean, and discovered AWS SDK just leans heavily on RuntimeExceptions: https://github.com/apache/iceberg/blob/0694269cd20a3625d5587d544bec621e9bec305a/aws/src/main/java/org/apache/iceberg/aws/glue/GlueTableOperations.java#L271
That uses the GlueClient: https://sdk.amazonaws.com/java/api/latest/software/amazon/awssdk/services/glue/GlueClient.html#getTable(java.util.function.Consumer) which lists a whole bunch of exceptions, but interestingly they all extend from RuntimeException:
https://sdk.amazonaws.com/java/api/latest/software/amazon/awssdk/services/glue/model/OperationTimeoutException.html
https://sdk.amazonaws.com/java/api/latest/software/amazon/awssdk/services/glue/model/GlueEncryptionException.html
So even though it looks funny, this is probably the most idiomatic way for now.
Ugh, ok. Let's make sure that habit doesn't creep into our service code. :)
| LOG.debug("Getting metadata location for table {}", tableName()); | ||
| location = getTableMetadataLocation(); | ||
| } catch (InterruptedException e) { | ||
| Thread.currentThread().interrupt(); |
Why are we doing this?
Looks like this follows convention of JdbcCatalog: https://github.com/apache/iceberg/blob/0694269cd20a3625d5587d544bec621e9bec305a/core/src/main/java/org/apache/iceberg/jdbc/JdbcCatalog.java#L114
I'm also a bit fuzzy on the actual scenarios where the thread's interrupt bit becomes important when we're going to rethrow an unchecked exception anyways, but this stackoverflow thread seems to provide a good overview. I think the problem comes from the fact that ClientPoolImpl declares throwing the InterruptedException: https://github.com/apache/iceberg/blob/0694269cd20a3625d5587d544bec621e9bec305a/core/src/main/java/org/apache/iceberg/ClientPoolImpl.java#L55
Actually, it looks like this is self-inflicted, since we already wrap all the exceptions in the underlying JDBC client wrapper. So this TableOperations class shouldn't need to duplicate the exception wrapping. I'll clean it up.
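For reference, the conventional shape being discussed looks roughly like this - restore the interrupt bit, then wrap (a generic sketch rather than the final PR code):

```java
try {
  location = getTableMetadataLocation();
} catch (InterruptedException e) {
  // Restore the interrupt flag that catching the exception cleared, so callers that
  // poll the flag still see the interruption, then surface an unchecked wrapper.
  Thread.currentThread().interrupt();
  throw new UncheckedInterruptedException(e, "Interrupted while fetching table metadata");
}
```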
if (location == null || location.isEmpty()) {
  if (currentMetadataLocation() != null) {
    throw new NoSuchTableException(
        "Failed to load table %s from catalog %s: dropped by another process",
Do we know for sure that it was dropped by another process? Maybe we should just say "table could not be found".
Looks like all this is vestigial; this can't happen in the current code because the metadata object itself is null for missing tables and we throw NoSuchTableException below for that case. Changed the location check to a precondition and removed this whole conditional branch.
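Roughly the shape of that cleanup - once a null metadata object already maps to NoSuchTableException, the location check reduces to a state assertion (a sketch; the message wording is illustrative):

```java
// The missing-table case is handled earlier by throwing NoSuchTableException when the
// metadata object itself is null, so an empty location here indicates an internal error.
Preconditions.checkState(
    location != null && !location.isEmpty(),
    "Got null or empty metadata location for table %s", tableName());
```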
}

@Test
public void testListNamespace() {
We're testing the golden path scenarios here, but I don't see us testing error and edge cases, like an empty namespace, or a non-existent database or schema, or too many namespace levels. Can we add tests that cover our error scenarios too?
Done
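One example of the kind of error-case test being requested, written with AssertJ as the rest of the suite uses (the namespace values are illustrative):

```java
@Test
public void testListTablesRejectsTooManyNamespaceLevels() {
  // Snowflake namespaces are at most database.schema, so three levels should be rejected.
  Assertions.assertThatThrownBy(
          () -> catalog.listTables(Namespace.of("DB_1", "SCHEMA_1", "TOO_DEEP")))
      .isInstanceOf(IllegalArgumentException.class)
      .hasMessageContaining("namespace");
}
```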
          namespace.level(SnowflakeResources.NAMESPACE_DB_LEVEL - 1)));
  }
} catch (SQLException | InterruptedException e) {
  throw new RuntimeException(e);
Why are we throwing RuntimeExceptions?
Rewrote this whole class as an in-memory fake instead of a sqlite-backed fake and removed these exceptions.
catalog = new SnowflakeCatalog();
JdbcClientPool connectionPool = new JdbcClientPool(uri, properties);
catalog.setQueryFactory(new TestQueryFactory(connectionPool));
Since the underlying JDBC here won't be exercising a real Snowflake data model anyways, IMO it's better to keep this unittest contained to the responsibilities of SnowflakeCatalog itself, which is simply to translate between our own in-memory data model (entities/) and Iceberg idioms. Then we don't need a fake underlying database or a fake query factory.
We could also try to unittest the query factory (which we should rename to JdbcSnowflakeClient) with fake or mock JdbcClientPools and just verify the queries generated, rather than using a fake sqlite database.
Done
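A condensed sketch of that containment - the test injects an in-memory fake client so only SnowflakeCatalog's translation logic is exercised (the helper and setter names here are illustrative, not necessarily the final API):

```java
@Before
public void before() {
  // Populate the fake directly; no JDBC or sqlite involved.
  FakeSnowflakeClient fakeClient = new FakeSnowflakeClient();
  fakeClient.addSchema("DB_1", "SCHEMA_1");   // illustrative helper on the fake

  catalog = new SnowflakeCatalog();
  catalog.setSnowflakeClient(fakeClient);     // injection point after the QueryFactory rename
}
```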
try {
  Path filePath = Paths.get(getClass().getClassLoader().getResource(path).toURI());
  byte[] data = Files.readAllBytes(filePath);
  return (InputFile) new InMemoryInputFile(filePath.toString(), data);
If we're just going to return an InMemoryInputFile, it's probably better to just inline some minimal json data in the unittest itself to keep the test data close to the test case wherever possible. It'll be more manageable to test with multiple different table metadatas that way too without having a cramped resources/ directory for all the test data.
Done
import org.apache.iceberg.snowflake.entities.SnowflakeTable;
import org.apache.iceberg.snowflake.entities.SnowflakeTableMetadata;

public class TestQueryFactory implements QueryFactory {
Generally the naming convention I see elsewhere in Iceberg is that TestFoo is a file containing test cases for Foo.
I'd rename these things to "FakeFoo" - typically tests use Mocks (for micromanaged interactions) or Fakes (for in-memory "fake" implementations) as the usual naming convention.
Done
-Make InMemoryFileIO fully deal with in-memory contents; generate the fake metadata directly in SnowflakeCatalogTest instead of having a test json file under resources/
-Rename *QueryFactory to *SnowflakeClient
-Eliminate redundant/verbose json parsing in SnowflakeTableMetadata
-Implement the FakeSnowflakeClient as nested TreeMaps; populate it directly in SnowflakeCatalogTest and remove the usage of sqlite from the unittest.
…eption logic in SnowflakeTableOperations.
…s/iceberg into snowflake-catalog-readonly
connection pool and prepare for better testability. Clean up error handling.
…when given an already fully-qualified two-level namespace, remove the use of that existence check in loadNamespaceMetadata for now. Per Iceberg spec, it's also "optional" to throw a NoSuchNamespaceException for non-existent namespaces.
@sfc-gh-ppaulus I'll still need to work on adding additional test cases, but all the major refactor suggestions from the first round of reviews are done. PTAL
The changes look reasonable. Does this include the changes to support key files?
Not yet, planning to take some time to figure out how to test keyfile auth today.
@sfc-gh-ppaulus Good news - though I had assumed the worst for keyfile support due to how other libraries implemented it, I managed to confirm that the current code already passes through sufficient properties to the JDBC driver such that no new code is needed at all to fully support all the keyfile passthrough options. I successfully tested end-to-end with:
There are some quirks to how password-encrypted keyfiles need security-provider setup, but that part is non-specific to this Iceberg library, and is the same for all direct users of the Snowflake JDBC driver as well. So the only additional changes will be expanding unittests and doing some path-translation for Azure/GCS.
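For illustration only, the driver-level options involved look roughly like this, assuming the Snowflake JDBC driver's documented key-pair properties (private_key_file, private_key_file_pwd); the catalog itself just forwards whatever connection properties it receives:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;
import java.util.Properties;

class KeyfileAuthSketch {
  static Connection connect() throws SQLException {
    Properties props = new Properties();
    props.put("user", "<user>");
    props.put("private_key_file", "/path/to/rsa_key.p8");  // assumed driver property for keyfile auth
    props.put("private_key_file_pwd", "<passphrase>");     // only for password-encrypted keyfiles
    return DriverManager.getConnection(
        "jdbc:snowflake://<account>.snowflakecomputing.com", props);
  }
}
```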
-Add path-syntax translation between Snowflake and Iceberg paths
-Add unittests for error cases and path translation for Azure and GCS
sfc-gh-ppaulus left a comment
Thanks Dennis
// regardless of which classloader ends up using this JdbcSnowflakeClient, but we'll only
// warn if the expected driver fails to load, since users may use repackaged or custom
// JDBC drivers for Snowflake communication.
Class.forName(JdbcSnowflakeClient.EXPECTED_JDBC_IMPL);
I tried the same approach in commit c37b924 but this still doesn't force the loading of the correct driver. I tested it with EMR, and with this fix it will now throw the below exception because the driver is still not loaded.
Were you able to test this fix with EMR?
Good question - I hadn't had a chance to test it before, but I've now tested it successfully on EMR, including the error case.
# Works
spark-shell --packages net.snowflake:snowflake-jdbc:3.13.22 --jars iceberg-spark-runtime-3.2_2.12-instance-initialization-forname.jar --conf ...
# Error
spark-shell --packages net.snowflake:snowflake-jdbc:3.13.22 --jars iceberg-spark-runtime-3.2_2.12-no-forname.jar --conf ...
I also tried it in the static block like yours, and it happened to work on my EMR cluster. However, as far as I understand it, the reason it's a gamble to put a Class.forName call in a static initialization block, especially when using Spark, is that there's a lot of fancy dynamic jar-downloading and classloading going on, so I wouldn't count on the classpath resources necessarily being complete at the time the static initializer block is called.
One possibility: did you test with the Snowflake JDBC driver as a --jars argument instead of a --packages argument? I tested it with --packages.
Also, we'll want the forName to happen as late as possible in the initialization code since in the future we'll be deciding between whether to load a JDBC client or a different REST client, etc.
I also made it just a "warn" instead of throw in case the user decided to do something fancy with registering a custom Snowflake driver for jdbc:snowflake://
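A sketch of that pattern - load the expected driver as late as possible and only warn on failure (the constant comes from the diff above; the surrounding structure is illustrative):

```java
try {
  // Load the expected Snowflake JDBC driver right before the JDBC client is constructed,
  // after Spark's dynamic jar resolution (--packages/--jars) has had a chance to run.
  Class.forName(JdbcSnowflakeClient.EXPECTED_JDBC_IMPL);
} catch (ClassNotFoundException cnfe) {
  LOG.warn(
      "Failed to load expected JDBC driver {}; a repackaged or custom driver registered for "
          + "jdbc:snowflake:// may still work",
      JdbcSnowflakeClient.EXPECTED_JDBC_IMPL,
      cnfe);
}
```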
import org.apache.iceberg.snowflake.entities.SnowflakeTable;
import org.apache.iceberg.snowflake.entities.SnowflakeTableMetadata;

public class FakeSnowflakeClient implements SnowflakeClient {
The reasoning behind choosing the sqlite database for testing was to achieve greater coverage for both the catalog and the client-related code, like the ORM layer, which is specific to the client implementation.
The issue I see with this approach is that it's agnostic of the client implementation (JDBC now, REST in the future), where the bulk of the logic resides. Let's discuss more on this.
Right, my motivation for the refactor was specifically to make SnowflakeCatalogTest agnostic of the client implementation - that's the point of keeping the unittest well-contained. We could also add a unittest that's more of an integration test if we have a good way to mock out the lower JDBC communication, but if we're supplying a fake SnowflakeClient that doesn't call into JdbcSnowflakeClient anyway, that client should avoid using JDBC under the hood.
If we can inject something into a real JdbcSnowflakeClient that achieves what we want, then we can inject that real JdbcSnowflakeClient into a real SnowflakeCatalog for a more end-to-end test.
Otherwise, for single-class unittests we can keep them clean and self-contained.
…flake-managed Iceberg tables (apache#6428)

* Initial read-only Snowflake Catalog implementation by @sfc-gh-mparmar (#1)

  Initial read-only Snowflake Catalog implementation built on top of the Snowflake JDBC driver, providing support for basic listing of namespaces, listing of tables, and loading/reads of tables. Auth options are passthrough to the JDBC driver.

  Co-authored-by: Maninder Parmar <[email protected]>
  Co-authored-by: Maninder Parmar <[email protected]>
  Co-authored-by: Dennis Huo <[email protected]>

* Add JdbcSnowflakeClientTest using mocks (#2)

  Add JdbcSnowflakeClientTest using mocks; provides full coverage of JdbcSnowflakeClient and entities' ResultSetHandler logic. Also update target Spark runtime versions to be included.

* Add test { useJUnitPlatform() } tuple to iceberg-snowflake for consistency and future interoperability with inheriting from abstract unittest base classes.

* Extract versions into versions.props per PR review

* Misc test-related refactors per review suggestions
  -Convert unittests to all use assertj/Assertions for "fluent assertions"
  -Refactor test injection into overloaded initialize() method
  -Add test cases for close() propagation
  -Use CloseableGroup.

* Fix unsupported behaviors of loadNamespaceMetadata and defaultWarehouseLocation

* Move TableIdentifier checks out of newTableOps into the SnowflakeTableOperations class itself, add test case.

* Refactor out any Namespace-related business logic from the lower SnowflakeClient/JdbcSnowflakeClient layers and merge SnowflakeTable and SnowflakeSchema into a single SnowflakeIdentifier that also encompasses ROOT and DATABASE level identifiers. A SnowflakeIdentifier thus functions like a type-checked/constrained Iceberg TableIdentifier, and eliminates any tight coupling between a SnowflakeClient and Catalog business logic. Parsing of Namespace numerical levels into a SnowflakeIdentifier is now fully encapsulated in NamespaceHelpers so that callsites don't duplicate namespace-handling/validation logic.

* Finish migrating JdbcSnowflakeClientTest off any usage of org.junit.Assert in favor of assertj's Assertions.

* Style refactorings from review comments, expanded and moved InMemoryFileIO into core with its own unittest.

* Fix behavior of getNamespaceMetadata to throw when the namespace doesn't exist. Refactor for naming conventions and consolidating identifier handling into NamespaceHelpers. Make FileIO instantiated fresh for each newTableOps call.

* Move private constructor to top, add assertion to test case.

* Define minimal ResultSetParser/QueryHarness classes to fully replace any use of commons-dbutils; refactor ResultSet handling fully into JdbcSnowflakeClient.java.

* Update snowflake/src/main/java/org/apache/iceberg/snowflake/SnowflakeTableOperations.java

  Co-authored-by: Eduard Tudenhöfner <[email protected]>

* Refactor style suggestions; remove debug-level logging, arguments in exceptions, private members if not accessed outside, move precondition checks, add test for NamespaceHelpers.

* Fix precondition messages, remove getConf()

* Clean up varargs.

* Make data members final, include rawJsonVal in toString for debuggability.

* Combine some small test cases into roundtrip test cases, misc cleanup

* Add comment for why a factory class is exposed for testing purposes.

Co-authored-by: Dennis Huo <[email protected]>
Co-authored-by: Maninder Parmar <[email protected]>
Co-authored-by: Maninder Parmar <[email protected]>
Co-authored-by: Eduard Tudenhöfner <[email protected]>