
Conversation

@adutra
Contributor

@adutra adutra commented Jun 13, 2025

No description provided.

Contributor

@eric-maynard eric-maynard Jun 13, 2025


This does get rid of the warnings, but aren't the warnings the main feature of the @Deprecated annotation?

ref: #1672 (comment)
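For readers following along, the pattern under discussion looks roughly like this. This is a simplified sketch with hypothetical signatures, not the actual Polaris code: a `@Deprecated` method and a call site that silences the compiler's deprecation warning with `@SuppressWarnings`.

```java
// Simplified sketch of the pattern being debated (hypothetical signatures).
public class FeatureConfiguration {

    /** Old-style lookup, kept only to support legacy config keys. */
    @Deprecated
    static String catalogConfigUnsafe(String legacyKey) {
        return "polaris.config." + legacyKey; // placeholder mapping
    }

    // Suppressing here removes the build warning at the call site,
    // which is what this review comment questions.
    @SuppressWarnings("deprecation")
    static String resolve(String legacyKey) {
        return catalogConfigUnsafe(legacyKey);
    }

    public static void main(String[] args) {
        System.out.println(resolve("ALLOW_OVERLAP"));
    }
}
```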

Member


The usage of the old config key is deprecated; using it should trigger a user-facing deprecation warning. The deprecation warning is therefore being triggered at the wrong site.

Contributor Author


Thanks for the link; I didn't know there was a previous conversation on this topic.

That said, I rather disagree: the annotation is valuable mainly for consumers of the API, especially when it is well documented in terms of when it was deprecated, when it will be removed, and how to replace it.

Here, we own both the source of the deprecation warnings (the catalogConfigUnsafe method) and the call sites where the warnings are triggered (the FeatureConfiguration class). In other words, keeping the warnings around is not useful for Polaris devs.

And there is no risk of "forgetting" to update FeatureConfiguration later on: as soon as we remove the deprecated method, the FeatureConfiguration class will no longer compile.

Contributor


My take on this: we should rather remove the deprecation from catalogConfigUnsafe() and file issues to remove support for those legacy properties. The method itself has a valid use case in Polaris code, specifically to support migration from old config names to new ones, so the method itself is not conceptually "deprecated".

Member


An io.smallrye.config.RelocateConfigSourceInterceptor implementation can be used to nag users to update their configs.
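The idea behind such an interceptor can be illustrated with a plain-Java sketch. Note this is not the actual SmallRye Config API (the real `io.smallrye.config.RelocateConfigSourceInterceptor` wires into the config lookup chain); names and the logging call here are illustrative only.

```java
import java.util.Map;

// Plain-Java sketch of what a relocation interceptor does: map legacy
// config keys to their new names, warning the user along the way.
public class RelocationSketch {

    // Hypothetical old-name -> new-name mapping.
    static final Map<String, String> RELOCATIONS = Map.of(
        "polaris.config.legacy-key", "polaris.config.new-key");

    static String relocate(String name) {
        String newName = RELOCATIONS.get(name);
        if (newName != null) {
            // This is where a user-facing "please update your config"
            // warning would be logged.
            System.err.println("Config key '" + name
                + "' is deprecated; use '" + newName + "' instead.");
            return newName;
        }
        return name;
    }

    public static void main(String[] args) {
        System.out.println(relocate("polaris.config.legacy-key"));
    }
}
```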

Contributor

@eric-maynard eric-maynard Jun 13, 2025


There's already a log to remind users to remove the configs, so we are covered from that perspective. This was more intended to nag maintainers to remove the deprecated method at some point in time. Otherwise, why deprecate methods? Will we always add this suppression when a method is deprecated?

@github-project-automation github-project-automation bot moved this from PRs In Progress to Ready to merge in Basic Kanban Board Jun 13, 2025
@snazy
Member

snazy commented Jun 23, 2025

Looks like the PR needs a rebase

@adutra adutra force-pushed the fix-build-warnings branch from 29ea56f to 2da5794 Compare June 23, 2025 11:01
@snazy
Member

snazy commented Jun 23, 2025

+1

@adutra
Contributor Author

adutra commented Jun 25, 2025

@eric-maynard is it OK to merge this?

@snazy
Member

snazy commented Jun 26, 2025

I think this is ready to be merged.

@adutra adutra merged commit e7a009f into apache:main Jun 26, 2025
11 checks passed
@adutra adutra deleted the fix-build-warnings branch June 26, 2025 10:08
@github-project-automation github-project-automation bot moved this from Ready to merge to Done in Basic Kanban Board Jun 26, 2025
@eric-maynard
Contributor

@adutra my question wasn't really answered -- is the strategy to always suppress all deprecation warnings?

@snazy
Member

snazy commented Jun 27, 2025

@eric-maynard the strategy is to address deprecation warnings coming from dependencies. If something needs to be done in the Polaris code base in the future, it's better to open an issue so it doesn't get lost.

@eric-maynard
Contributor

That still doesn’t answer my question. Will we suppress all deprecation warnings?

@dimas-b
Contributor

dimas-b commented Jun 27, 2025

@eric-maynard @snazy @adutra: I'd like to reopen my suggestion of removing the deprecation from PolarisConfiguration.catalogConfigUnsafe().

This method is not really intended to be called by "users" of Polaris. For internal calls (or calls from downstream projects), the purpose is to provide an upgrade path so that users can gradually adjust to the new config names. So, in my mind, the catalogConfigUnsafe() method is not deprecated; what is deprecated is the old property name (which is its argument). The use of these deprecated properties already leads to a WARN log message (targeted at end users).

All in all, I think it is preferable to remove the deprecation from catalogConfigUnsafe() and file GH issues for dropping support for those properties in 2.0. WDYT?
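The upgrade path described here can be sketched as follows. This is a hypothetical simplification (names, signatures, and the logging call are illustrative, not the actual Polaris code): the lookup method itself stays, reading the new key first and falling back to the deprecated legacy key with a WARN.

```java
import java.util.Map;

// Sketch of a migration-friendly config lookup: only the legacy key
// (the argument) is deprecated, not the method providing the fallback.
public class ConfigLookup {

    // Stand-in for the real config source.
    static final Map<String, String> CONFIG = Map.of("polaris.new-name", "true");

    // Reads the new key first; falls back to the legacy key with a WARN.
    static String get(String newKey, String legacyKey) {
        String value = CONFIG.get(newKey);
        if (value != null) {
            return value;
        }
        value = CONFIG.get(legacyKey);
        if (value != null) {
            System.err.println("WARN: config key '" + legacyKey
                + "' is deprecated, use '" + newKey + "' instead");
        }
        return value;
    }

    public static void main(String[] args) {
        System.out.println(get("polaris.new-name", "polaris.old-name"));
    }
}
```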

@eric-maynard
Contributor

eric-maynard commented Jun 27, 2025

That makes more sense to me. If we don't want to mark it as deprecated, that is a reasonable argument, but I don't really grok why we would mark it as deprecated but then disable the warning.

My argument for leaving it as deprecated is mostly to make it clear that there should be no new use of this method. And, when the old configs are yanked, the method can accordingly be yanked as well. In that sense it is marked for deprecation/removal.

dimas-b added a commit to dimas-b/polaris that referenced this pull request Jun 27, 2025
As discussed in apache#1894, `catalogConfigUnsafe()` has a use case in the
main codebase - that is to support gradual migration to new property
names.

Using legacy names at runtime already leads to a user-facing WARN log message.

Support for those legacy property names is to be dropped in 2.0.
@dimas-b
Contributor

dimas-b commented Jun 27, 2025

Ok, I opened #1970 for my proposal. If accepted/merged, I'll follow up with an issue for dropping old config names in 2.0

snazy added a commit to snazy/polaris that referenced this pull request Nov 20, 2025
* Exclude unused dependency for polaris spark client dependency (apache#1933)

* enable ETag integration tests (apache#1935)

tests were added in 8b5dfa9 and AFAICT were supposed to be enabled after ec97c1b

* Fix Pagination for Catalog Federation (apache#1849)

Details can be found in this issue: apache#1848

* Update doc to fix docker build inconsistency issue (apache#1946)

* Simplify install dependency doc (apache#1941)

* Simplify getting started doc

* Simplify install dependency doc

* Minor wording change

* Fix admin tool for quick start (apache#1945)

When attempting to use the `polaris-admin-tool.jar` to bootstrap a realm, the application fails with a `jakarta.enterprise.inject.UnsatisfiedResolutionException` because it cannot find a `javax.sql.DataSource` bean. Detail in apache#1943

This issue occurs because `quarkus.datasource.db-kind` is a build-time property in Quarkus. Its value must be defined during the application's build process to enable the datasource extension and generate the necessary CDI bean producer (ref: https://quarkus.io/guides/all-config#quarkus-datasource_quarkus-datasource-db-kind).

I think we only support Postgres for now, thus I set `quarkus.datasource.db-kind=postgresql`. This can be problematic if we later want to support data sources other than Postgres. We have a couple of options for this, such as using multiple named datasources in the config at build time, but that may be out of scope for this PR. I am open to more discussion on this, but for the time being it may be better to unblock people who are trying to use the quick start doc.

Sample output for the bootstrap container after the fix:
```
➜  polaris git:(1943) docker logs polaris-polaris-bootstrap-1
Realm 'POLARIS' successfully bootstrapped.
Bootstrap completed successfully.
```
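For illustration, the build-time setting described above would appear in `application.properties` roughly as follows (a minimal sketch):

```properties
# Build-time property: must be fixed when the application is built,
# not merely overridden at runtime.
quarkus.datasource.db-kind=postgresql
```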

* fix(build): Fix deprecation warnings in FeatureConfiguration (apache#1894)

* Fix NPE in listCatalogs (apache#1949)

listCatalogs is non-atomic. It first atomically lists all entities and then iterates through each one and does an individual loadEntity call. If an entity is deleted in between, the load returns null, which causes an NPE when calling `CatalogEntity::new`.

I don't think it's ever useful for listCatalogsUnsafe to return null since the caller isn't expecting a certain length of elements, so I just filtered it there.

* Fix doc for sample log and default password (apache#1951)

Minor updates for the quick start doc:
1. update sample output to reflect with the latest code
2. update default password to the right value
3. remove trailing space

* Optimize the location overlap check with an index (apache#1686)

The location overlap check for "sibling" tables (those which share a parent) has been a performance bottleneck since its introduction, but we haven't historically had a good way around this other than just disabling the check. 

<hr>

### Current Behavior

The current logic is that when we create a table, we list all sibling tables and check each and every one to ensure there is no location overlap. This results in O(N^2) checks when adding N tables to a namespace, quickly becoming untenable.

With the `CreateTreeDataset` [benchmark](https://github.com/eric-maynard/polaris-tools/blob/main/benchmarks/src/gatling/scala/org/apache/polaris/benchmarks/simulations/CreateTreeDataset.scala) I tested creating 5000 sibling tables using the current code.

It is apparent that latency increases over time. Runs took between 90 and 200+ seconds, and Polaris instances with a small memory allocation were prone to crashing due to OOMs.


### Proposed change

This PR adds a new persistence API, `hasOverlappingSiblings`, which if implemented can be used to directly check for the presence of siblings at the metastore layer.

This API is implemented for the JDBC metastore in a new schema version, and some changes are made to account for an evolving schema version now and in the future.

This implementation breaks a location down into components and queries for a sibling at each of those locations, so a new table at location `s3://bucket/root/n1/nA/t1/` will require checking for an entity with location `s3://bucket/`, `s3://bucket/root/`, `s3://bucket/root/n1/`, `s3://bucket/root/n1/nA/`, and finally `s3://bucket/root/n1/nA/t1/%`. All of this can be done in a single query which makes a single pass over the data. 

The query is optimized by the introduction of a new index over a new _location_ column.

With the changes enabled, I tested creating 5000 sibling tables again.

Latency is stable over time, and runs consistently completed in less than 30 seconds. I did not observe any OOMs when testing with the feature enabled.
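The prefix decomposition described above can be sketched as follows. This is a simplified stand-alone illustration, not the actual implementation: the real code builds a single SQL query over an indexed location column, checking each prefix plus a `%` pattern for the location itself.

```java
import java.util.ArrayList;
import java.util.List;

// Sketch: split a candidate location into every parent path so each can
// be checked for an existing sibling entity in one indexed query.
public class LocationPrefixes {

    static List<String> prefixes(String location) {
        List<String> out = new ArrayList<>();
        int schemeEnd = location.indexOf("://") + 3; // skip "s3://" etc.
        int idx = location.indexOf('/', schemeEnd);
        while (idx != -1) {
            out.add(location.substring(0, idx + 1));
            idx = location.indexOf('/', idx + 1);
        }
        return out;
    }

    public static void main(String[] args) {
        // Yields s3://bucket/, s3://bucket/root/, ... down to the full path.
        prefixes("s3://bucket/root/n1/nA/t1/").forEach(System.out::println);
    }
}
```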

* Add SUPPORTED_EXTERNAL_CATALOG_AUTHENTICATION_TYPES feature configuration (apache#1931)

* Add SUPPORTED_FEDERATION_AUTHENTICATION_TYPES feature configuration

* Add unit tests

* Update Helm chart version (apache#1957)

* Remove the maintainer list in Helm Chart README (apache#1962)

* Use multi-lines instead of single line (apache#1961)

* Fix invalid sample script in CLI doc (apache#1964)

* Fix hugo blockquote (apache#1967)

* Fix hugo blockquote

* Add license header

* Fix lint rules (apache#1953)

* Mutable objects used for immutable values (apache#1596)

* fix: Only include project LICENSE and NOTICE in Spark Client Jar (apache#1950)

* Add Sushant as a collaborator (apache#1956)

* Adds missing Google Flatbuffers license information (apache#1968)

* fix: Typo in Spark Client Build File (apache#1969)

* Python code format (apache#1954)

* test(integration): refactor PolarisRestCatalogIntegrationTest to run against any cloud provider (apache#1934)

* Make Catalog Integration Test suite cloud native

* Fix admin tool doc (apache#1977)

* Fix admin tool doc

* Fix admin tool doc

* Update release-guide.md (apache#1927)

* Add relational-jdbc to helm (apache#1937)


Motivation for the Change

Polaris needs to support relational-jdbc as the default persistence type for simpler database configuration and better cloud-native deployment experience.
Description of the Status Quo (Current Behavior)

Currently, the Helm chart only supports eclipse-link persistence type as the default, which requires complex JPA configuration with persistence.xml files.
Desired Behavior

    Add relational-jdbc persistence type support to Helm chart
    Use relational-jdbc as the default persistence type
    Inject JDBC configuration (username, password, jdbc_url) through Kubernetes Secrets as environment variables
    Maintain backward compatibility with eclipse-link

Additional Details

    Updated persistence-values.yaml for CI testing
    Updated test coverage for relational-jdbc configuration
    JDBC credentials are injected via QUARKUS_DATASOURCE_* environment variables from Secret
    Secret keys: username, password, jdbc_url
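The Secret-based injection described above might look roughly like this in a pod spec. The Secret name and keys here are illustrative assumptions, not the chart's actual template values:

```yaml
# Hypothetical sketch: datasource credentials injected from a Secret.
env:
  - name: QUARKUS_DATASOURCE_USERNAME
    valueFrom:
      secretKeyRef:
        name: polaris-db-secret
        key: username
  - name: QUARKUS_DATASOURCE_PASSWORD
    valueFrom:
      secretKeyRef:
        name: polaris-db-secret
        key: password
  - name: QUARKUS_DATASOURCE_JDBC_URL
    valueFrom:
      secretKeyRef:
        name: polaris-db-secret
        key: jdbc_url
```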

* Add CHANGELOG (apache#1952)

* Add rudimentary CHANGELOG.md

* Add the Jetbrains Changelog Gradle plugin to help managing CHANGELOG.md

* Share Polaris Community Meeting for 2025-06-26 (apache#1978)

* Correct javadoc text in generateOverlapQuery() (apache#1975)

* Fix javadoc warning: invalid input: '&'
* Correct javadoc text in generateOverlapQuery()

* Do not serialize null properties in the management model (apache#1955)

* Ignore null values in JSON output

* This may have an impact on existing clients, but it is not
  likely to be substantial, because normally absent properties
  should be treated the same as properties with `null` values.

* This change enables adding new optional fields to the
  Management API while maintaining backward compatibility in
  the future: new properties will not be exposed to clients
  unless a value for them is explicitly set.

* Add OpenHFT in Spark plugin LICENSE (apache#1979)

* Add additional unit and integration tests for etag functionality (apache#1972)

* Additional unit test for Etags

* Added a few corner case IT tests for testing etags with schema changes.

* Added IT tests to test changes after DDL and DML

* Add options to the bootstrap command to specify a schema file (apache#1942)

Instead of always using the hardcoded `schema-v1.sql` file, it would be nice if users could specify a file to bootstrap from. This is especially relevant after apache#1686 which proposes to add a new "version" of the schema.

* Added support for `s3a` scheme (apache#1932)

* Fix the sign failure (apache#1926)

* Fix doc to remove outdated note about fine-grained access controls support (apache#1983)

Minor update for the access control doc:

1. Remove the misleading section claiming that privileges can only be granted at the catalog level. I've tested the fine-grained access controls and confirmed that privileges can be applied to an individual table in the catalog.

* Add support for catalog federation in the CLI (apache#1912)

The CLI currently only supports the version of EXTERNAL catalogs that was present in 0.9.0. Now, EXTERNAL catalogs can be configured with various configurations relating to federation. This PR updates the CLI to better match the REST API so that federated catalogs can be easily set up in the CLI.

* fix: Remove db-kind in helm chart (apache#1987)

* Add a Spark session builder for the tests (apache#1985)

* Fix doc for CLI update (apache#1994)

PR for apache#1866

* Improve createPrincipal example in API docs (apache#1992)

In apache#1929 it was pointed out that the example in the Polaris docs suggests that users can provide a client ID during principal creation:

. . .


This PR attempts to fix this by adding an explicit example to the spec.

* Add doc for repair option (apache#1993)

PR for apache#1864

* Refactor relationalJdbc in helm (apache#1996)

* Add regression test coverage for Spark Client with package conf (apache#1997)

* Remove unnecessary `InputStream.close` call (apache#1982)

apache#1942 changed the way that the bootstrap init script is handled, but it added an extra `InputStream.close` call that shouldn't be needed after the BufferedReader [here](https://github.com/apache/polaris/pull/1942/files#diff-de43b240b5b5e07aba7e89f5515a417cefd908845b85432f3fcc0819911f3e2eR89) is closed. This PR removes that extra call.

* Materialize Realm ID for Session Supplier in JDBC (apache#1988)

It was discovered that the Session Supplier maps used in the MetaStoreManagerFactory implementations were passing in RealmContext objects to the supplier directly and then using the RealmContext objects to create BasePersistence implementation objects within the supplier. This supplier is cached on a per-realm basis in most MetaStoreManagerFactory implementations. RealmContext objects are request-scoped beans.

As a result, if any work is being done outside the scope of the request, such as during a Task, any calls to getOrCreateSessionSupplier for creating a BasePersistence implementation will fail as the RealmContext object is no longer available.

This PR will ensure for the JdbcMetaStoreManagerFactory that the Realm ID is materialized from the RealmContext and used inside the supplier so that the potentially deactivated RealmContext object does not need to be used in creating the BasePersistence object. Given that we are caching on a per-realm basis, this should not introduce any unforeseen behavior for the JdbcMetaStoreManagerFactory as the Realm ID must match exactly for the same supplier to be returned from the Session Supplier map.

* rebase/changes

* minor refactoring

* Last merged commit 8fa6bf2

---------

Co-authored-by: Yun Zou <[email protected]>
Co-authored-by: Christopher Lambert <[email protected]>
Co-authored-by: Rulin Xing <[email protected]>
Co-authored-by: MonkeyCanCode <[email protected]>
Co-authored-by: Alexandre Dutra <[email protected]>
Co-authored-by: Andrew Guterman <[email protected]>
Co-authored-by: Eric Maynard <[email protected]>
Co-authored-by: Pooja Nilangekar <[email protected]>
Co-authored-by: Yufei Gu <[email protected]>
Co-authored-by: fabio-rizzo-01 <[email protected]>
Co-authored-by: Russell Spitzer <[email protected]>
Co-authored-by: Sushant Raikar <[email protected]>
Co-authored-by: Jiwon Park <[email protected]>
Co-authored-by: Dmitri Bourlatchkov <[email protected]>
Co-authored-by: JB Onofré <[email protected]>
Co-authored-by: Sandhya Sundaresan <[email protected]>
Co-authored-by: Pavan Lanka <[email protected]>
Co-authored-by: CG <[email protected]>
Co-authored-by: Adnan Hemani <[email protected]>