Add chunk-friendly code path to encode_cf_datetime and encode_cf_timedelta #8575

Merged: 29 commits, Jan 29, 2024

Changes shown are from 9 commits.

Commits (29):
644cc4d
Add proof of concept dask-friendly datetime encoding
spencerkclark Dec 29, 2023
2558164
Add dask support for timedelta encoding and more tests
spencerkclark Dec 31, 2023
180dbcd
Minor error message edits; add what's new entry
spencerkclark Dec 31, 2023
edd587c
Add return type for new tests
spencerkclark Dec 31, 2023
2a243dd
Fix typo in what's new
spencerkclark Dec 31, 2023
a28bc2a
Add what's new entry for update following #8542
spencerkclark Dec 31, 2023
8ede271
Add full type hints to encoding functions
spencerkclark Dec 31, 2023
eea3bb7
Combine datetime64 and timedelta64 zarr tests; add cftime zarr test
spencerkclark Jan 1, 2024
207a725
Minor edits to what's new
spencerkclark Jan 1, 2024
2797f1c
Address initial review comments
spencerkclark Jan 2, 2024
eed6ab7
Add proof of concept dask-friendly datetime encoding
spencerkclark Dec 29, 2023
bf4da23
Add dask support for timedelta encoding and more tests
spencerkclark Dec 31, 2023
23081b6
Minor error message edits; add what's new entry
spencerkclark Dec 31, 2023
b65dfab
Add return type for new tests
spencerkclark Dec 31, 2023
d4071c9
Fix typo in what's new
spencerkclark Dec 31, 2023
141f1c2
Add what's new entry for update following #8542
spencerkclark Dec 31, 2023
2e56529
Add full type hints to encoding functions
spencerkclark Dec 31, 2023
5271842
Combine datetime64 and timedelta64 zarr tests; add cftime zarr test
spencerkclark Jan 1, 2024
bf1f6ba
Minor edits to what's new
spencerkclark Jan 1, 2024
63c32d8
Address initial review comments
spencerkclark Jan 2, 2024
7230c1a
Initial work toward addressing typing comments
spencerkclark Jan 21, 2024
9d12948
Restore covariant=True in T_DuckArray; add type: ignores
spencerkclark Jan 27, 2024
a1c8133
Tweak netCDF3 error message
spencerkclark Jan 27, 2024
4b1e978
Merge branch 'dask-friendly-datetime-encoding' of https://github.com/…
spencerkclark Jan 27, 2024
2f59fbe
Merge branch 'main' into dask-friendly-datetime-encoding
spencerkclark Jan 27, 2024
76761ba
Move what's new entry
spencerkclark Jan 27, 2024
52d6428
Remove extraneous text from merge in what's new
spencerkclark Jan 27, 2024
6b4127e
Remove unused type: ignore comment
spencerkclark Jan 27, 2024
d9d9701
Remove word from netCDF3 error message
spencerkclark Jan 27, 2024
9 changes: 9 additions & 0 deletions doc/whats-new.rst
@@ -44,6 +44,15 @@ Bug fixes

- Reverse index output of bottleneck's rolling move_argmax/move_argmin functions (:issue:`8541`, :pull:`8552`).
By `Kai Mühlbauer <https://github.com/kmuehlbauer>`_.
- Preserve chunks when writing time-like variables to zarr by enabling their
lazy encoding (:issue:`7132`, :issue:`8230`, :issue:`8432`, :pull:`8253`,
:pull:`8575`; see also discussion in :pull:`8253`). By `Spencer Clark
<https://github.com/spencerkclark>`_ and `Mattia Almansi
<https://github.com/malmans2>`_.
- Raise an informative error if dtype encoding of time-like variables would
lead to integer overflow or unsafe conversion from floating point to integer
values (:issue:`8542`, :pull:`8575`). By `Spencer Clark
<https://github.com/spencerkclark>`_.


Documentation
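The two entries above can be illustrated with a short example. This is a minimal sketch, not code from the PR, assuming dask and zarr are installed; the store paths are hypothetical:

import numpy as np
import pandas as pd
import xarray as xr

# Lazy encoding: writing a chunked time variable to zarr no longer computes
# the dask array, so the on-disk chunking matches the in-memory chunking.
times = pd.date_range("2000-01-01", periods=4)
ds = xr.Dataset({"times": (("x",), times)}).chunk({"x": 2})
ds.to_zarr("example.zarr", mode="w")

# Informative error: an int16 dtype encoding cannot hold nanoseconds since
# 1970, so encoding now raises OverflowError instead of silently wrapping.
ds2 = xr.Dataset({"times": (("x",), times)})
encoding = {"times": {"units": "nanoseconds since 1970-01-01", "dtype": np.dtype("int16")}}
# ds2.to_zarr("overflow.zarr", mode="w", encoding=encoding)  # raises OverflowError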
195 changes: 187 additions & 8 deletions xarray/coding/times.py
@@ -5,7 +5,7 @@
from collections.abc import Hashable
from datetime import datetime, timedelta
from functools import partial
from typing import TYPE_CHECKING, Callable, Union
from typing import TYPE_CHECKING, Callable, TypeVar, Union

import numpy as np
import pandas as pd
@@ -22,6 +22,7 @@
)
from xarray.core import indexing
from xarray.core.common import contains_cftime_datetimes, is_np_datetime_like
from xarray.core.duck_array_ops import asarray
from xarray.core.formatting import first_n_items, format_timestamp, last_item
from xarray.core.pdcompat import nanosecond_precision_timestamp
from xarray.core.pycompat import is_duck_dask_array
@@ -38,6 +39,14 @@

T_Name = Union[Hashable, None]

# Copied from types.py to avoid a circular import
try:
from dask.array import Array as DaskArray
except ImportError:
DaskArray = np.ndarray # type: ignore

T_NumPyOrDaskArray = TypeVar("T_NumPyOrDaskArray", np.ndarray, DaskArray)

# standard calendars recognized by cftime
_STANDARD_CALENDARS = {"standard", "gregorian", "proleptic_gregorian"}

@@ -667,12 +676,48 @@ def _division(deltas, delta, floor)
return num


def _cast_to_dtype_if_safe(num: np.ndarray, dtype: np.dtype) -> np.ndarray:
with warnings.catch_warnings():
warnings.filterwarnings("ignore", message="overflow")
cast_num = np.asarray(num, dtype=dtype)

if np.issubdtype(dtype, np.integer):
if not (num == cast_num).all():
if np.issubdtype(num.dtype, np.floating):
raise ValueError(
f"Not possible to cast all encoded times from "
f"{num.dtype!r} to {dtype!r} without losing precision. "
f"Consider modifying the units such that integer values "
f"can be used, or removing the units and dtype encoding, "
f"at which point xarray will make an appropriate choice."
)
else:
raise OverflowError(
f"Not possible to cast encoded times from "
f"{num.dtype!r} to {dtype!r} without overflow. Consider "
f"removing the dtype encoding, at which point xarray will "
f"make an appropriate choice, or explicitly switching to "
"a larger integer dtype."
)
else:
if np.isinf(cast_num).any():
raise OverflowError(
f"Not possible to cast encoded times from {num.dtype!r} to "
f"{dtype!r} without overflow. Consider removing the dtype "
f"encoding, at which point xarray will make an appropriate "
f"choice, or explicitly switching to a larger floating point "
f"dtype."
)

return cast_num
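
A quick illustration of the check above (illustrative sketch, not part of the diff): NumPy wraps silently on integer overflow, which is exactly what the equality comparison detects.

import numpy as np

num = np.array([2**40], dtype=np.int64)     # an encoded time value
cast_num = np.asarray(num, dtype=np.int32)  # wraps around silently
assert not (num == cast_num).all()          # mismatch -> OverflowError above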


def encode_cf_datetime(
dates,
dates: T_NumPyOrDaskArray,
units: str | None = None,
calendar: str | None = None,
dtype: np.dtype | None = None,
) -> tuple[np.ndarray, str, str]:
) -> tuple[T_NumPyOrDaskArray, str, str]:
"""Given an array of datetime objects, returns the tuple `(num, units,
calendar)` suitable for a CF compliant time variable.

@@ -682,6 +727,24 @@ def encode_cf_datetime(
--------
cftime.date2num
"""
dates = asarray(dates)
if isinstance(dates, np.ndarray):
return _eagerly_encode_cf_datetime(dates, units, calendar, dtype)
elif is_duck_dask_array(dates):
return _lazily_encode_cf_datetime(dates, units, calendar, dtype)
Contributor:

Suggested change:
-    if isinstance(dates, np.ndarray):
-        return _eagerly_encode_cf_datetime(dates, units, calendar, dtype)
-    elif is_duck_dask_array(dates):
-        return _lazily_encode_cf_datetime(dates, units, calendar, dtype)
+    if is_chunked_array(dates):
+        return _lazily_encode_cf_datetime(dates, units, calendar, dtype)
+    elif is_duck_array(dates):
+        return _eagerly_encode_cf_datetime(dates, units, calendar, dtype)

Is there a reason other arrays might fail?

Member Author:

Indeed, I do not think so. It's mainly that mypy complains if the function may not always return something:

xarray/coding/times.py: note: In function "encode_cf_datetime":
xarray/coding/times.py:728: error: Missing return statement  [return]

I suppose instead of elif is_duck_array(dates) I could simply use else, since I use asarray above?

Contributor:

I think so

Member Author (@spencerkclark, Jan 2, 2024):

I guess one issue that occurs to me is that as written I think _eagerly_encode_cf_datetime will always return a NumPy array, due to its reliance on doing calculations through pandas or cftime. In other words it won't necessarily fail, but it will not work quite like the other array types, where the type you put in is what you get out.
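A rough sketch of that concern (illustrative only, simplified from the pandas-based flow): the eager computation hands values to pandas, which returns plain NumPy regardless of the input array type.

import numpy as np
import pandas as pd

dates = np.array(["2000-01-01", "2000-01-02"], dtype="datetime64[ns]")
# pd.DatetimeIndex(...).asi8 always yields an int64 np.ndarray, so any
# non-NumPy duck array funneled through this path loses its original type.
num = pd.DatetimeIndex(dates.ravel()).asi8.reshape(dates.shape)
assert isinstance(num, np.ndarray)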

else:
raise ValueError(
f"dates provided have an unsupported array type: {type(dates)}."
)


def _eagerly_encode_cf_datetime(
dates: np.ndarray,
units: str | None = None,
calendar: str | None = None,
dtype: np.dtype | None = None,
allow_units_modification: bool = True,
) -> tuple[np.ndarray, str, str]:
dates = np.asarray(dates)

data_units = infer_datetime_units(dates)
@@ -731,7 +794,7 @@ def encode_cf_datetime(
f"Set encoding['dtype'] to integer dtype to serialize to int64. "
f"Set encoding['dtype'] to floating point dtype to silence this warning."
)
elif np.issubdtype(dtype, np.integer):
elif np.issubdtype(dtype, np.integer) and allow_units_modification:
new_units = f"{needed_units} since {format_timestamp(ref_date)}"
emit_user_level_warning(
f"Times can't be serialized faithfully to int64 with requested units {units!r}. "
@@ -752,11 +815,84 @@
# we already covered for this in pandas-based flow
num = cast_to_int_if_safe(num)

return (num, units, calendar)
if dtype is not None:
num = _cast_to_dtype_if_safe(num, dtype)

return num, units, calendar


def _encode_cf_datetime_within_map_blocks(
dates: np.ndarray,
units: str,
calendar: str,
dtype: np.dtype,
) -> np.ndarray:
num, *_ = _eagerly_encode_cf_datetime(
dates, units, calendar, dtype, allow_units_modification=False
)
return num


def _lazily_encode_cf_datetime(
dates: DaskArray,
units: str | None = None,
calendar: str | None = None,
dtype: np.dtype | None = None,
) -> tuple[DaskArray, str, str]:
import dask.array

if calendar is None:
# This will only trigger minor compute if dates is an object dtype array.
calendar = infer_calendar_name(dates)

if units is None and dtype is None:
if dates.dtype == "O":
units = "microseconds since 1970-01-01"
dtype = np.dtype("int64")
else:
units = "nanoseconds since 1970-01-01"
dtype = np.dtype("int64")

if units is None or dtype is None:
raise ValueError(
f"When encoding chunked arrays of datetime values, both the units "
f"and dtype must be prescribed or both must be unprescribed. "
f"Prescribing only one or the other is not currently supported. "
f"Got a units encoding of {units} and a dtype encoding of {dtype}."
)

num = dask.array.map_blocks(
Contributor:

Suggested change:
-    num = dask.array.map_blocks(
+    num = chunkmanager.map_blocks(

There's a chunkmanager dance now to handle both dask and cubed.

Member Author:

Ah that's neat! Is there anything special I need to do to test this or is it sufficient to demonstrate that it at least works with dask (as my tests already do)?

One other minor detail is how to handle typing with this. I gave it a shot (hopefully I'm in the right ballpark), but I still encounter this error:

xarray/coding/times.py: note: In function "_lazily_encode_cf_datetime":
xarray/coding/times.py:860: error: "T_ChunkedArray" has no attribute "dtype"  [attr-defined]

I'm not a typing expert so any guidance would be appreciated!

Contributor:

> Is there anything special I need to do to test this or is it sufficient to demonstrate that it at least works with dask (as my tests already do)?

I believe the dask tests are fine since the cubed tests live elsewhere.

> error: "T_ChunkedArray" has no attribute "dtype"  [attr-defined]

cc @TomNicholas

Member (@TomNicholas, Jan 2, 2024):

> I believe the dask tests are fine since the cubed tests live elsewhere.

Yep.

> Is there anything special I need to do…

The correct ChunkManager for the type of the chunked array needs to be detected, but you've already done that bit!

> error: "T_ChunkedArray" has no attribute "dtype"  [attr-defined]

This can't be typed properly yet (because we don't yet have a full protocol to describe the T_DuckArray in xarray.core.types.py), but try changing the definition of T_ChunkedArray in parallelcompat.py to this

T_ChunkedArray = TypeVar("T_ChunkedArray", bound=Any)

Member Author:

Gotcha, thanks for the typing guidance @TomNicholas. Adding bound=Any to the T_ChunkedArray definition indeed solves the dtype attribute issue. It seems I am not able to use T_DuckArray as an input type for a function, however — I'm getting errors like this:

error: Cannot use a covariant type variable as a parameter  [misc]

Do you have any thoughts on how to handle that? I'm guessing the covariant=True parameter was added for a reason in its definition?

T_DuckArray = TypeVar("T_DuckArray", bound=Any, covariant=True)

Does #8575 (comment) make the output side of this more difficult to handle as well (at least if we want to be as accurate as possible)?

Member Author:

@TomNicholas do you have any thoughts on this? From my perspective the only thing holding this PR up at this point is typing.

Contributor:

IMO a type: ignore is fine.

Member:

Hey, sorry for forgetting about this @spencerkclark !

> I'm guessing the covariant=True parameter was added for a reason in its definition?

Yes - that fixes some typing errors in the ChunkManager methods IIRC. Unfortunately I don't yet understand how to make that work and also fix your error 🙁

> the only thing holding this PR up at this point is typing.

I agree with Deepak that typing should not hold this up. Even if it needs some ignores then the typing you've added is still very useful to indicate intent!
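To make the covariance error concrete, here is a minimal sketch (illustration only, not code from this PR) of why mypy rejects it:

from typing import Any, TypeVar

T_Cov = TypeVar("T_Cov", bound=Any, covariant=True)

def passthrough(x: T_Cov) -> T_Cov:  # mypy: Cannot use a covariant type variable as a parameter  [misc]
    return x

A covariant variable is only safe in return position, so a TypeVar used for both a parameter and a return value, as in these encoding functions, has to be invariant (or the error ignored).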

_encode_cf_datetime_within_map_blocks,
dates,
units,
calendar,
dtype,
dtype=dtype,
)
return num, units, calendar
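
For reference, the chunkmanager-based version suggested in the thread above would look roughly like this. This is a sketch under the assumption that get_chunked_array_type is importable from xarray.core.parallelcompat at this point in the codebase; exact names and signatures may differ:

from xarray.core.parallelcompat import get_chunked_array_type

def _lazily_encode_cf_datetime_via_chunkmanager(dates, units, calendar, dtype):
    # Detect the ChunkManager matching the array's type (dask, cubed, ...)
    chunkmanager = get_chunked_array_type(dates)
    return chunkmanager.map_blocks(
        _encode_cf_datetime_within_map_blocks,
        dates,
        units,
        calendar,
        dtype,
        dtype=dtype,
    )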


def encode_cf_timedelta(
timedeltas, units: str | None = None, dtype: np.dtype | None = None
timedeltas: T_NumPyOrDaskArray,
units: str | None = None,
dtype: np.dtype | None = None,
) -> tuple[T_NumPyOrDaskArray, str]:
timedeltas = asarray(timedeltas)
if isinstance(timedeltas, np.ndarray):
return _eagerly_encode_cf_timedelta(timedeltas, units, dtype)
elif is_duck_dask_array(timedeltas):
return _lazily_encode_cf_timedelta(timedeltas, units, dtype)
else:
raise ValueError(
f"timedeltas provided have an unsupported array type: {type(timedeltas)}."
)


def _eagerly_encode_cf_timedelta(
timedeltas: np.ndarray,
units: str | None = None,
dtype: np.dtype | None = None,
allow_units_modification: bool = True,
) -> tuple[np.ndarray, str]:
data_units = infer_timedelta_units(timedeltas)

@@ -784,7 +920,7 @@ def encode_cf_timedelta(
f"Set encoding['dtype'] to integer dtype to serialize to int64. "
f"Set encoding['dtype'] to floating point dtype to silence this warning."
)
elif np.issubdtype(dtype, np.integer):
elif np.issubdtype(dtype, np.integer) and allow_units_modification:
emit_user_level_warning(
f"Timedeltas can't be serialized faithfully with requested units {units!r}. "
f"Serializing with units {needed_units!r} instead. "
@@ -797,7 +933,50 @@

num = _division(time_deltas, time_delta, floor_division)
num = num.values.reshape(timedeltas.shape)
return (num, units)

if dtype is not None:
num = _cast_to_dtype_if_safe(num, dtype)

return num, units


def _encode_cf_timedelta_within_map_blocks(
timedeltas: np.ndarray,
units: str,
dtype: np.dtype,
) -> np.ndarray:
num, _ = _eagerly_encode_cf_timedelta(
timedeltas, units, dtype, allow_units_modification=False
)
return num


def _lazily_encode_cf_timedelta(
timedeltas: DaskArray, units: str | None = None, dtype: np.dtype | None = None
) -> tuple[DaskArray, str]:
import dask.array

if units is None and dtype is None:
units = "nanoseconds"
dtype = np.dtype("int64")

if units is None or dtype is None:
raise ValueError(
f"When encoding chunked arrays of timedelta values, both the "
f"units and dtype must be prescribed or both must be "
f"unprescribed. Prescribing only one or the other is not "
f"currently supported. Got a units encoding of {units} and a "
f"dtype encoding of {dtype}."
)

num = dask.array.map_blocks(
_encode_cf_timedelta_within_map_blocks,
timedeltas,
units,
dtype,
dtype=dtype,
)
return num, units


class CFDatetimeCoder(VariableCoder):
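Putting the new code path together, encode_cf_datetime now accepts a chunked array and returns a lazily encoded one. A minimal usage sketch (assuming dask is installed; the units, calendar, and dtype values are just examples):

import dask.array as da
import numpy as np
import pandas as pd
from xarray.coding.times import encode_cf_datetime

times = pd.date_range("2000-01-01", periods=4).values  # datetime64[ns]
chunked = da.from_array(times, chunks=2)

# For chunked input, units and dtype must both be prescribed or both omitted.
num, units, calendar = encode_cf_datetime(
    chunked,
    units="nanoseconds since 1970-01-01",
    calendar="proleptic_gregorian",
    dtype=np.dtype("int64"),
)
assert isinstance(num, da.Array)     # still lazy, nothing computed yet
assert num.chunks == chunked.chunks  # chunk structure preserved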
23 changes: 23 additions & 0 deletions xarray/tests/test_backends.py
@@ -48,6 +48,7 @@
)
from xarray.backends.pydap_ import PydapDataStore
from xarray.backends.scipy_ import ScipyBackendEntrypoint
from xarray.coding.cftime_offsets import cftime_range
from xarray.coding.strings import check_vlen_dtype, create_vlen_dtype
from xarray.coding.variables import SerializationWarning
from xarray.conventions import encode_dataset_coordinates
@@ -2809,6 +2810,28 @@
with pytest.raises(TypeError, match=r"Invalid attribute in Dataset.attrs."):
ds.to_zarr(store_target, **self.version_kwargs)

@requires_dask
@pytest.mark.parametrize("dtype", ["datetime64[ns]", "timedelta64[ns]"])
def test_chunked_datetime64_or_timedelta64(self, dtype) -> None:
# Generalized from @malmans2's test in PR #8253
original = create_test_data().astype(dtype).chunk(1)
with self.roundtrip(original, open_kwargs={"chunks": {}}) as actual:
for name, actual_var in actual.variables.items():
assert original[name].chunks == actual_var.chunks
assert original.chunks == actual.chunks

@requires_cftime
@requires_dask
def test_chunked_cftime_datetime(self) -> None:
# Based on @malmans2's test in PR #8253
times = cftime_range("2000", freq="D", periods=3)
original = xr.Dataset(data_vars={"chunked_times": (["time"], times)})
original = original.chunk({"time": 1})
with self.roundtrip(original, open_kwargs={"chunks": {}}) as actual:
for name, actual_var in actual.variables.items():
assert original[name].chunks == actual_var.chunks
assert original.chunks == actual.chunks

def test_vectorized_indexing_negative_step(self) -> None:
if not has_dask:
pytest.xfail(
@@ -4343,14 +4366,14 @@
with open_dataset(tmp, chunks=chunks) as dask_ds:
assert_identical(data, dask_ds)
with create_tmp_file() as tmp2:
dask_ds.to_netcdf(tmp2)

with open_dataset(tmp2) as on_disk:
assert_identical(data, on_disk)

def test_deterministic_names(self) -> None:
with create_tmp_file() as tmp:
data = create_test_data()
data.to_netcdf(tmp)

with open_mfdataset(tmp, combine="by_coords") as ds:
original_names = {k: v.data.name for k, v in ds.data_vars.items()}
with open_mfdataset(tmp, combine="by_coords") as ds:
@@ -4368,7 +4391,7 @@
computed = actual.compute()
assert not actual._in_memory
assert computed._in_memory
assert_allclose(actual, computed, decode_bytes=False)


def test_save_mfdataset_compute_false_roundtrip(self) -> None:
from dask.delayed import Delayed
@@ -4381,7 +4404,7 @@
datasets, [tmp1, tmp2], engine=self.engine, compute=False
)
assert isinstance(delayed_obj, Delayed)
delayed_obj.compute()

with open_mfdataset(
[tmp1, tmp2], combine="nested", concat_dim="x"
) as actual:
@@ -4390,7 +4413,7 @@
def test_load_dataset(self) -> None:
with create_tmp_file() as tmp:
original = Dataset({"foo": ("x", np.random.randn(10))})
original.to_netcdf(tmp)

ds = load_dataset(tmp)
# this would fail if we used open_dataset instead of load_dataset
ds.to_netcdf(tmp)
@@ -4398,7 +4421,7 @@
def test_load_dataarray(self) -> None:
with create_tmp_file() as tmp:
original = Dataset({"foo": ("x", np.random.randn(10))})
original.to_netcdf(tmp)

ds = load_dataarray(tmp)
# this would fail if we used open_dataarray instead of
# load_dataarray
@@ -4411,7 +4434,7 @@
def test_inline_array(self) -> None:
with create_tmp_file() as tmp:
original = Dataset({"foo": ("x", np.random.randn(10))})
original.to_netcdf(tmp)

chunks = {"time": 10}

def num_graph_nodes(obj):
@@ -4458,7 +4481,7 @@
yield actual, expected

def test_cmp_local_file(self) -> None:
with self.create_datasets() as (actual, expected):

assert_equal(actual, expected)

# global attributes should be global attributes on the dataset
@@ -4492,7 +4515,7 @@

def test_compatible_to_netcdf(self) -> None:
# make sure it can be saved as a netcdf
with self.create_datasets() as (actual, expected):

with create_tmp_file() as tmp_file:
actual.to_netcdf(tmp_file)
with open_dataset(tmp_file) as actual2:
@@ -4501,7 +4524,7 @@

@requires_dask
def test_dask(self) -> None:
with self.create_datasets(chunks={"j": 2}) as (actual, expected):

assert_equal(actual, expected)

