
Incorrect CRPS calculation #554

Open
1 of 2 tasks
c-lyu opened this issue Nov 29, 2024 · 1 comment
Labels
bug Something isn't working

Comments

c-lyu commented Nov 29, 2024

1. System Info

PyPOTS version: 0.8.1

This bug is independent of the operating environment.

2. Information

  • The official example scripts
  • My own created scripts

3. Reproduction

The CRPS calculation appears to be incorrect. On Line 318, targets is passed as the first argument and q_pred as the second, whereas according to the signature on Line 257 the first argument should be the predictions and the second the targets.

def calc_quantile_loss(predictions, targets, q: float, eval_points) -> float:
    quantile_loss = 2 * torch.sum(
        torch.abs((predictions - targets) * eval_points * ((targets <= predictions) * 1.0 - q))
    )
    return quantile_loss


def calc_quantile_crps(
    predictions: Union[np.ndarray, torch.Tensor],
    targets: Union[np.ndarray, torch.Tensor],
    masks: Union[np.ndarray, torch.Tensor],
    scaler_mean=0,
    scaler_stddev=1,
) -> float:
    """Continuous rank probability score for distributional predictions.

    Parameters
    ----------
    predictions :
        The prediction data to be evaluated.
    targets :
        The target data for helping evaluate the predictions.
    masks :
        The masks for filtering the specific values in inputs and target from evaluation.
        Only values at corresponding positions where values == 1 in ``masks`` will be used for evaluation.
    scaler_mean :
        Mean value of the scaler used to scale the data.
    scaler_stddev :
        Standard deviation value of the scaler used to scale the data.

    Returns
    -------
    CRPS :
        Value of continuous rank probability score.
    """
    # check shapes and values of inputs
    _ = _check_inputs(predictions, targets, masks, check_shape=False)

    if isinstance(predictions, np.ndarray):
        predictions = torch.from_numpy(predictions)
    if isinstance(targets, np.ndarray):
        targets = torch.from_numpy(targets)
    if isinstance(masks, np.ndarray):
        masks = torch.from_numpy(masks)

    targets = targets * scaler_stddev + scaler_mean
    predictions = predictions * scaler_stddev + scaler_mean

    quantiles = np.arange(0.05, 1.0, 0.05)
    denominator = torch.sum(torch.abs(targets * masks))
    CRPS = torch.tensor(0.0)
    for i in range(len(quantiles)):
        q_pred = []
        for j in range(len(predictions)):
            q_pred.append(torch.quantile(predictions[j : j + 1], quantiles[i], dim=1))
        q_pred = torch.cat(q_pred, 0)
        q_loss = calc_quantile_loss(targets, q_pred, quantiles[i], masks)  # Line 318: arguments swapped
        CRPS += q_loss / denominator
    return CRPS.item() / len(quantiles)

4. Expected behavior

Line 318 should be q_loss = calc_quantile_loss(q_pred, targets, quantiles[i], masks).
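For illustration, here is a minimal NumPy re-implementation of the quoted pinball-loss formula (a sketch: the mask term is dropped and the toy inputs are made up, not from PyPOTS). It shows the loss is not symmetric in its first two arguments, so the swapped call on Line 318 generally yields a different value:

```python
import numpy as np

def quantile_loss(predictions, targets, q):
    # Same formula as calc_quantile_loss above, with the mask term dropped:
    # 2 * sum(|(pred - target) * (1{target <= pred} - q)|)
    return 2 * np.sum(
        np.abs((predictions - targets) * ((targets <= predictions) * 1.0 - q))
    )

preds = np.array([1.0, 2.0, 3.0])    # toy quantile predictions
targets = np.array([1.5, 2.5, 5.0])  # toy ground truth
q = 0.05

correct = quantile_loss(preds, targets, q)   # predictions first, as the signature expects
swapped = quantile_loss(targets, preds, q)   # argument order used on Line 318

# Every target here exceeds its prediction, so the correct call weights
# the errors by q = 0.05 while the swapped call weights them by 1 - q = 0.95.
print(correct, swapped)  # ~0.3 vs ~5.7 (up to float rounding)
```

In effect, the swapped call scores the q-th predicted quantile with the (1 - q) pinball loss. Because q_pred itself depends on q, this does not cancel out when summing over the symmetric quantile grid, so the reported CRPS is wrong.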

@c-lyu c-lyu added the bug Something isn't working label Nov 29, 2024

Hi there 👋,

Thank you so much for your attention to PyPOTS! You can follow me on GitHub
to receive the latest news of PyPOTS. If you find PyPOTS helpful to your work, please star⭐️ this repository.
Your star is your recognition, which can help more people notice PyPOTS and grow PyPOTS community.
It matters and is definitely a kind of contribution to the community.

I have received your message and will respond ASAP. Thank you for your patience! 😃

Best,
Wenjie
