Investigate the effect of different weightings for vertical levels #6

Open · sadamov opened this issue Jan 26, 2024 · 0 comments
Labels: good first issue, help wanted

sadamov (Collaborator) commented Jan 26, 2024

The higher vertical levels show poor skill during inference/forecasting. This is a common issue and is often addressed with a weighting scheme during training. I suggest down-weighting the highest levels, though one could also try the opposite and emphasize the model's attention (weights) on these upper levels.

  1. Change the weights in the constants.py script; currently they are all set to one:
# Vertical level weights
level_weights = {
    1: 1,
    5: 1,
    13: 1,
    22: 1,
    38: 1,
    41: 1,
    60: 1,
}
  2. Start a training run with 40 epochs and validation every 20 epochs.
  3. Compare the loss curves to previous model runs on wandb.
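The steps above could be wired into the training loss roughly as in the following minimal NumPy sketch. This is only an illustration of the idea: the function name `weighted_mse`, the array shapes, and the down-weighted values in `level_weights` are assumptions for this example, not code from the actual repository.

```python
import numpy as np

# Same dict as in constants.py, but with the highest levels down-weighted
# (the 0.5 and 0.1 values here are illustrative, not recommendations)
level_weights = {1: 1.0, 5: 1.0, 13: 1.0, 22: 1.0, 38: 0.5, 41: 0.5, 60: 0.1}

def weighted_mse(pred, target, weights):
    """MSE over (batch, grid, level) arrays, weighted per vertical level.

    The last array axis is assumed to index the vertical levels in the
    same order as the sorted keys of `weights`.
    """
    w = np.array([weights[k] for k in sorted(weights)])  # shape (n_levels,)
    sq_err = (pred - target) ** 2
    return float(np.mean(sq_err * w))  # broadcasts w over the level axis

# Toy data: batch of 2, 4 grid points, 7 vertical levels
rng = np.random.default_rng(0)
pred = rng.normal(size=(2, 4, 7))
target = rng.normal(size=(2, 4, 7))
loss = weighted_mse(pred, target, level_weights)
```

With all weights equal to one this reduces to the plain MSE, so the current constants.py configuration is the unweighted baseline to compare against on wandb.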
sadamov added the good first issue and help wanted labels on Jan 26, 2024
sadamov added a commit that referenced this issue Mar 1, 2024
- Added linting and formatting using pre-commits
- Trigger pre-commit on main-branch push
- Change line length to 80
- Fixing existing formatting issues
- Added section about development with pre-commits
---------

Co-authored-by: joeloskarsson <[email protected]>
1 participant