NaN / changed values in output when only reading data, saving and reading again #5490
Comments
Are your input files on (exactly) the same grid? If not, combining the files might introduce `nan`. Note that `nan` never compares equal to itself:

```python
In [1]: import numpy as np

In [2]: np.nan == np.nan
Out[2]: False
```

Which is as it should be per IEEE 754. When writing out the files to netCDF, do you accidentally convert from 64-bit float to 32-bit float?
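To check the second possibility mentioned above, a quick numpy sketch shows how a 64-bit to 32-bit cast alone already perturbs the last digits (the constant here is just an arbitrary double-precision value):

```python
import numpy as np

# Round-tripping float64 -> float32 changes values in the last digits,
# which is one way "identical" data can stop comparing equal.
x64 = np.float64(0.010050721147263318)
x32 = np.float32(x64)

print(x64 == x32)                     # False: float32 keeps only ~7 decimal digits
print(abs(float(x32) - float(x64)))   # small, but nonzero
```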
Yes, they are generated on a 0.25x0.25 lat/lon grid in Europe, so these values match (when reading the original files there is no `nan`, which I think excludes this option). The test "all q values are the same" is not meant for the case where I even find `nan`, but for where I don't see them.

I should have included the output I get; see below, e.g. for the last test I ran. It says that both the original and the read-back data are `F32`, which is what confuses me. I also expected a difference in data type to be responsible, but at first glance that does not seem to be the case here. Below that output I print a timespan of the original and the second dataset, where the values clearly differ in the last few digits. I can also include the test where it even returns `nan` in some places. The full testing code and data are in the link if you want to see them, or I can post them here.
I've checked your example files. This is mostly related to the fact that the original data is encoded as `int16` with `scale_factor`/`add_offset`:

```python
In [35]: ds_loc.q.encoding
Out[35]:
{'source': '/private/tmp/test_xarray/Minimal_test_data/2012_europe_9_130_131_132_133_135.nc',
 'original_shape': (720, 26, 36, 41),
 'dtype': dtype('int16'),
 'missing_value': -32767,
 '_FillValue': -32767,
 'scale_factor': 3.0672840096982675e-07,
 'add_offset': 0.010050721147263318}
```

Probably the scaling and adding is carried out in `float32`.
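For illustration, the packing can be replayed by hand with the scale/offset from this encoding (the sample value `q` is made up); the round trip is only faithful to within about half a `scale_factor`:

```python
import numpy as np

# Re-create CF int16 packing with the scale/offset from the encoding above.
scale = 3.0672840096982675e-07
offset = 0.010050721147263318
q = np.float32(0.0123)  # hypothetical specific-humidity value

# pack: (value - offset) / scale, rounded to the nearest int16 code
packed = np.round((np.float64(q) - offset) / scale).astype(np.int16)
# unpack: code * scale + offset, back to float32
unpacked = np.float32(packed * scale + offset)

print(packed, unpacked)
print(abs(float(unpacked) - float(q)))  # small, but generally nonzero
```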
Related to that, there's also #5082, which proposes to drop the encoding more aggressively.
Is there a way to avoid this by not scaling/adding in the first place? If only the integer values were read, selected by index, and saved again, this should then not happen anymore, right? I could try `decode_cf=False` for this...
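The two workarounds discussed in this thread (reading raw values vs. writing floats) both come down to not letting the packed encoding round-trip. A minimal sketch of the second one, using plain dicts to stand in for each variable's `.encoding` mapping (the helper name `drop_packing` and the sample keys are illustrative, not xarray API):

```python
# Hypothetical helper: strip the CF packing keys from per-variable encoding
# dicts so the data would be written as floating point instead of packed int16.
PACKING_KEYS = ("scale_factor", "add_offset", "dtype")

def drop_packing(encodings):
    for enc in encodings.values():
        for key in PACKING_KEYS:
            enc.pop(key, None)
    return encodings

encs = {"q": {"dtype": "int16",
              "scale_factor": 3.0672840096982675e-07,
              "add_offset": 0.010050721147263318,
              "zlib": True}}
drop_packing(encs)
print(encs)  # {'q': {'zlib': True}} -- compression kept, packing removed
```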
@lthUniBonn You would need to use `decode_cf=False`, or drop the variable's `encoding` before writing.
Xref: #5739
This is indeed an issue with the `scale_factor`/`add_offset` encoding. That is not a problem per se, but those attributes are obviously different for different files. When concatenating, only the first file's attributes survive. That might already be the source of the above problem, as it might slightly change values.

An even bigger problem arises when the dynamic ranges (min/max) of the decoded data don't overlap. Then the data might be folded from the lower border to the upper border or vice versa. I've put an example into #5739.

The suggestion for now is, as per @keewis' comment, to drop the encoding in such cases and use floating-point values for writing. You might use the available compression options for floating-point data.
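The "folding" can be sketched with plain numpy: packing a value that lies outside the dynamic range implied by the first file's scale/offset overflows `int16`, and the wrapped code decodes to the opposite end of the range (the value `v` here is hypothetical):

```python
import numpy as np

# Scale/offset as inherited from the first file's encoding.
scale = 3.0672840096982675e-07
offset = 0.010050721147263318

v = 0.025  # hypothetical value from another file, outside this packing's range

raw = int(round((v - offset) / scale))  # well above the int16 max of 32767
# astype on an integer array truncates modulo 2**16, i.e. wraps around
packed = np.array([raw], dtype=np.int64).astype(np.int16)
restored = packed * scale + offset

print(raw, packed[0])       # positive raw code becomes a negative int16
print(float(restored[0]))   # decodes far below v: the value has "folded"
```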
It seems the situation is clear. Please reopen if there is more to discuss.
What happened: When combining monthly ERA5 data and saving it individually for single locations, different values/nan values appear when reading the single location file back in.
What you expected to happen: Both should be the same. This works, e.g. when only one month is read.
Minimal Complete Verifiable Example:
Anything else we need to know?: I tested this using these two months. Many times saving the output works, or the values are only slightly different (in the 6th digit). Using a larger timespan (2010-2012), even `nan` values appear. The issue is not clearly restricted to the q variable; I've not yet found the pattern.
I've included a more detailed assessment (output, data, code)
at https://uni-bonn.sciebo.de/s/OLHhid8zJg65IFB
I'm not sure where the issue comes from, but as the data is read in correctly at first, it does not seem to be on that side, which would leave the process of writing the netCDF output in xarray. I've tested this for a few years, and for two months I always get the result that not all q values are the same. Since I'm not sure where the problem lies, I'm not sure where to start for a more minimal example. Hope this is ok.
Cheers, Lavinia
Environment:
INSTALLED VERSIONS
commit: None
python: 3.9.4 | packaged by conda-forge | (default, May 10 2021, 22:13:33)
[GCC 9.3.0]
python-bits: 64
OS: Linux
OS-release: 3.10.0-1160.25.1.el7.x86_64
machine: x86_64
processor: x86_64
byteorder: little
LC_ALL: None
LANG: en_US.utf8
LOCALE: ('en_US', 'UTF-8')
libhdf5: 1.10.6
libnetcdf: 4.8.0
xarray: 0.18.2
pandas: 1.2.4
numpy: 1.20.3
scipy: 1.6.3
netCDF4: 1.5.6
pydap: None
h5netcdf: None
h5py: None
Nio: None
zarr: None
cftime: 1.5.0
nc_time_axis: None
PseudoNetCDF: None
rasterio: None
cfgrib: None
iris: None
bottleneck: None
dask: 2021.06.0
distributed: 2021.06.0
matplotlib: 3.4.2
cartopy: None
seaborn: None
numbagg: None
pint: None
setuptools: 49.6.0.post20210108
pip: 21.1.2
conda: None
pytest: None
IPython: None
sphinx: None