[Bug Fix] cast None current-snapshot-id as -1 for Backwards Compatibility #473

Merged · 11 commits · Mar 6, 2024
4 changes: 4 additions & 0 deletions mkdocs/docs/configuration.md
@@ -249,3 +249,7 @@ catalog:
# Concurrency

PyIceberg uses multiple threads to parallelize operations. The number of workers can be configured by supplying a `max-workers` entry in the configuration file, or by setting the `PYICEBERG_MAX_WORKERS` environment variable. The default value depends on the system hardware and Python version. See [the Python documentation](https://docs.python.org/3/library/concurrent.futures.html#threadpoolexecutor) for more details.
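For illustration, the environment-variable route can be sketched like this (the `ThreadPoolExecutor` wiring below is a generic stand-in, not PyIceberg's internal executor code):

```python
import os
from concurrent.futures import ThreadPoolExecutor

# Set the documented environment variable before PyIceberg reads its config.
os.environ["PYICEBERG_MAX_WORKERS"] = "4"

# Sketch of how such a setting maps onto a thread pool.
max_workers = int(os.environ["PYICEBERG_MAX_WORKERS"])
with ThreadPoolExecutor(max_workers=max_workers) as pool:
    results = list(pool.map(lambda x: x * x, range(8)))
```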

# Backward Compatibility

Previous versions of the Java implementation incorrectly assumed the optional attribute `current-snapshot-id` to be a required attribute in the TableMetadata. This means that if `current-snapshot-id` is missing from the metadata file (e.g. on table creation), the application throws an exception and cannot load the table. This assumption has been corrected in more recent Iceberg versions. However, it is possible to force PyIceberg to create a table with a metadata file that is compatible with previous versions. This can be configured by setting the `legacy-current-snapshot-id` entry to "True" in the configuration file, or by setting the `PYICEBERG_LEGACY_CURRENT_SNAPSHOT_ID` environment variable. Refer to the [PR discussion](https://github.com/apache/iceberg-python/pull/473) for more details on the issue.
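A hedged sketch of the two configuration routes (the `PYICEBERG_` prefix follows the test added in this PR; the dict merely stands in for the parsed `.pyiceberg.yaml`):

```python
import os

# Route 1: configuration file entry (shown as the dict the YAML parses to;
# a real setup would put `legacy-current-snapshot-id: "True"` in .pyiceberg.yaml).
config = {"legacy-current-snapshot-id": "True"}

# Route 2: environment variable.
os.environ["PYICEBERG_LEGACY_CURRENT_SNAPSHOT_ID"] = "True"

# Either way, the flag resolves to a boolean.
enabled = (
    config.get("legacy-current-snapshot-id") == "True"
    or os.environ.get("PYICEBERG_LEGACY_CURRENT_SNAPSHOT_ID") == "True"
)
```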
11 changes: 9 additions & 2 deletions pyiceberg/serializers.py
@@ -18,12 +18,14 @@

 import codecs
 import gzip
+import json
 from abc import ABC, abstractmethod
 from typing import Callable

 from pyiceberg.io import InputFile, InputStream, OutputFile
-from pyiceberg.table.metadata import TableMetadata, TableMetadataUtil
+from pyiceberg.table.metadata import CURRENT_SNAPSHOT_ID, TableMetadata, TableMetadataUtil
 from pyiceberg.typedef import UTF8
+from pyiceberg.utils.config import Config

GZIP = "gzip"

@@ -127,6 +129,11 @@ def table_metadata(metadata: TableMetadata, output_file: OutputFile, overwrite:
overwrite (bool): Where to overwrite the file if it already exists. Defaults to `False`.
"""
     with output_file.create(overwrite=overwrite) as output_stream:
-        json_bytes = metadata.model_dump_json().encode(UTF8)
+        model_dump = metadata.model_dump_json()
+        if Config().get_bool("legacy-current-snapshot-id") and metadata.current_snapshot_id is None:
+            model_dict = json.loads(model_dump)
+            model_dict[CURRENT_SNAPSHOT_ID] = -1
+            model_dump = json.dumps(model_dict)
**Contributor:** I don't think this is the best place to fix this. Mostly because we have to deserialize and serialize the metadata, and the rest of the deserialization logic is part of the Pydantic model. I think option 1 is much cleaner. We can set the ignore-None to False:

    @staticmethod
    def table_metadata(metadata: TableMetadata, output_file: OutputFile, overwrite: bool = False) -> None:
        """Write a TableMetadata instance to an output file.

        Args:
            output_file (OutputFile): A custom implementation of the iceberg.io.file.OutputFile abstract base class.
            overwrite (bool): Where to overwrite the file if it already exists. Defaults to `False`.
        """
        with output_file.create(overwrite=overwrite) as output_stream:
            json_bytes = metadata.model_dump_json(exclude_none=False).encode(UTF8)
            json_bytes = Compressor.get_compressor(output_file.location).bytes_compressor()(json_bytes)
            output_stream.write(json_bytes)

**Collaborator (author):** Oh that's a good suggestion. I'll add the field_serializer and set exclude_none to False so we can print out -1 in the output.

+        json_bytes = model_dump.encode(UTF8)
         json_bytes = Compressor.get_compressor(output_file.location).bytes_compressor()(json_bytes)
         output_stream.write(json_bytes)
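The branch added above can be exercised in isolation. In this sketch a plain dict replaces the Pydantic model dump, and a literal boolean replaces the `Config().get_bool(...)` lookup:

```python
import json

CURRENT_SNAPSHOT_ID = "current-snapshot-id"  # matches the metadata field name

# Stand-in for metadata.model_dump_json(); not the real TableMetadata model.
model_dump = json.dumps({"format-version": 2, CURRENT_SNAPSHOT_ID: None})

legacy_current_snapshot_id = True  # what the config lookup would return
model_dict = json.loads(model_dump)
if legacy_current_snapshot_id and model_dict[CURRENT_SNAPSHOT_ID] is None:
    model_dict[CURRENT_SNAPSHOT_ID] = -1
    model_dump = json.dumps(model_dict)

json_bytes = model_dump.encode("utf-8")
```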
2 changes: 1 addition & 1 deletion pyiceberg/table/metadata.py
@@ -120,7 +120,7 @@ def check_sort_orders(table_metadata: TableMetadata) -> TableMetadata:

def construct_refs(table_metadata: TableMetadata) -> TableMetadata:
"""Set the main branch if missing."""
-    if table_metadata.current_snapshot_id is not None:
+    if table_metadata.current_snapshot_id is not None and table_metadata.current_snapshot_id != -1:
**Contributor:** Is this -1 check still necessary? `construct_refs` is an after validator. At this point, `cleanup_snapshot_id` should already turn `current_snapshot_id=-1` to None.

**Collaborator (author):** You are right! I'll submit a fix, thank you!

     if MAIN_BRANCH not in table_metadata.refs:
         table_metadata.refs[MAIN_BRANCH] = SnapshotRef(
             snapshot_id=table_metadata.current_snapshot_id, snapshot_ref_type=SnapshotRefType.BRANCH
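The ordering the reviewer describes can be sketched with plain functions (illustrative names and shapes only; the real validators are Pydantic validators in `pyiceberg/table/metadata.py`):

```python
from typing import Dict, Optional

def cleanup_snapshot_id(raw: Dict[str, Optional[int]]) -> Dict[str, Optional[int]]:
    # Runs first (before-validator): normalizes the legacy -1 sentinel to None.
    if raw.get("current-snapshot-id") == -1:
        raw["current-snapshot-id"] = None
    return raw

def construct_refs(current_snapshot_id: Optional[int], refs: Dict[str, dict]) -> Dict[str, dict]:
    # Runs later (after-validator): by now -1 can no longer appear, so the
    # None check alone is sufficient, as the reviewer points out.
    if current_snapshot_id is not None and "main" not in refs:
        refs["main"] = {"snapshot-id": current_snapshot_id, "type": "branch"}
    return refs

raw = cleanup_snapshot_id({"current-snapshot-id": -1})
refs = construct_refs(raw["current-snapshot-id"], {})
```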
11 changes: 1 addition & 10 deletions pyiceberg/utils/concurrent.py
@@ -37,13 +37,4 @@ def get_or_create() -> Executor:
     @staticmethod
     def max_workers() -> Optional[int]:
         """Return the max number of workers configured."""
-        config = Config()
-        val = config.config.get("max-workers")
-
-        if val is None:
-            return None
-
-        try:
-            return int(val)  # type: ignore
-        except ValueError as err:
-            raise ValueError(f"Max workers should be an integer or left unset. Current value: {val}") from err
+        return Config().get_int("max-workers")
17 changes: 17 additions & 0 deletions pyiceberg/utils/config.py
@@ -16,6 +16,7 @@
# under the License.
 import logging
 import os
+from distutils.util import strtobool
 from typing import List, Optional

import strictyaml
@@ -154,3 +155,19 @@ def get_catalog_config(self, catalog_name: str) -> Optional[RecursiveDict]:
assert isinstance(catalog_conf, dict), f"Configuration path catalogs.{catalog_name_lower} needs to be an object"
return catalog_conf
return None

+    def get_int(self, key: str) -> Optional[int]:
+        if (val := self.config.get(key)) is not None:
+            try:
+                return int(val)  # type: ignore
+            except ValueError as err:
+                raise ValueError(f"{key} should be an integer or left unset. Current value: {val}") from err
+        return None
+
+    def get_bool(self, key: str) -> Optional[bool]:
+        if (val := self.config.get(key)) is not None:
+            try:
+                return strtobool(val)  # type: ignore
+            except ValueError as err:
+                raise ValueError(f"{key} should be a boolean or left unset. Current value: {val}") from err
+        return None
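One caveat worth noting: `distutils` (and with it `strtobool`) was removed from the standard library in Python 3.12. A drop-in sketch with the same truth table:

```python
def strtobool(val: str) -> int:
    """Minimal replacement for distutils.util.strtobool (removed in Python 3.12).

    Returns 1 for truthy strings, 0 for falsy ones; raises ValueError otherwise.
    """
    val = val.lower()
    if val in ("y", "yes", "t", "true", "on", "1"):
        return 1
    if val in ("n", "no", "f", "false", "off", "0"):
        return 0
    raise ValueError(f"invalid truth value {val!r}")
```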
50 changes: 50 additions & 0 deletions tests/test_serializers.py
@@ -0,0 +1,50 @@
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements. See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership. The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.

import json
import os
import uuid
from typing import Any, Dict

import pytest
from pytest_mock import MockFixture

from pyiceberg.serializers import ToOutputFile
from pyiceberg.table import StaticTable
from pyiceberg.table.metadata import TableMetadataV1


def test_legacy_current_snapshot_id(
    mocker: MockFixture, tmp_path_factory: pytest.TempPathFactory, example_table_metadata_no_snapshot_v1: Dict[str, Any]
) -> None:
    from pyiceberg.io.pyarrow import PyArrowFileIO

    metadata_location = str(tmp_path_factory.mktemp("metadata") / f"{uuid.uuid4()}.metadata.json")
    metadata = TableMetadataV1(**example_table_metadata_no_snapshot_v1)
    ToOutputFile.table_metadata(metadata, PyArrowFileIO().new_output(location=metadata_location), overwrite=True)
    static_table = StaticTable.from_metadata(metadata_location)
    assert static_table.metadata.current_snapshot_id is None

    mocker.patch.dict(os.environ, values={"PYICEBERG_LEGACY_CURRENT_SNAPSHOT_ID": "True"})

    ToOutputFile.table_metadata(metadata, PyArrowFileIO().new_output(location=metadata_location), overwrite=True)
    with PyArrowFileIO().new_input(location=metadata_location).open() as input_stream:
        metadata_json_bytes = input_stream.read()
        assert json.loads(metadata_json_bytes)['current-snapshot-id'] == -1
    backwards_compatible_static_table = StaticTable.from_metadata(metadata_location)
    assert backwards_compatible_static_table.metadata.current_snapshot_id is None
    assert backwards_compatible_static_table.metadata == static_table.metadata
17 changes: 17 additions & 0 deletions tests/utils/test_config.py
@@ -76,3 +76,20 @@ def test_merge_config() -> None:
rhs: RecursiveDict = {"common_key": "xyz789"}
result = merge_config(lhs, rhs)
assert result["common_key"] == rhs["common_key"]


def test_from_configuration_files_get_typed_value(tmp_path_factory: pytest.TempPathFactory) -> None:
    config_path = str(tmp_path_factory.mktemp("config"))
    with open(f"{config_path}/.pyiceberg.yaml", "w", encoding=UTF8) as file:
        yaml_str = as_document({"max-workers": "4", "legacy-current-snapshot-id": "True"}).as_yaml()
        file.write(yaml_str)

    os.environ["PYICEBERG_HOME"] = config_path
    with pytest.raises(ValueError):
        Config().get_bool("max-workers")

    with pytest.raises(ValueError):
        Config().get_int("legacy-current-snapshot-id")

    assert Config().get_bool("legacy-current-snapshot-id")
    assert Config().get_int("max-workers") == 4