normalization issue #66
The trick is to add an epsilon to the std, as done here: https://github.com/guillaume-chevalier/seq2seq-signal-prediction/blob/25721c0cd8e1ff1d9310f95ccebde56d5b0c26a1/datasets.py#L135 Or use an …
But if I add the epsilon to the std, the loss becomes very large, e.g. 144ms/step - loss: 1139874524.9530.
Perhaps your values aren't real zeros, but near-zero values, and normalizing them is numerically unstable. You could clip small values to an exact zero if you really need to ignore them. Dividing a real zero by epsilon still yields a real zero.
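A minimal sketch of the suggestion above, assuming NumPy arrays (the function name `normalize_with_clip`, the epsilon value, and the clipping tolerance are illustrative, not from the repository):

```python
import numpy as np

def normalize_with_clip(data, eps=1e-8, clip_tol=1e-6):
    """Standardize data, clipping near-zero values to exact zeros first.

    eps keeps the division stable when std is ~0; clip_tol is the
    (hypothetical) threshold below which values are treated as true zeros.
    """
    data = np.asarray(data, dtype=float)
    # Clip near-zero values to exact zero so they stay zero-like after scaling
    data = np.where(np.abs(data) < clip_tol, 0.0, data)
    # Add eps to the std so an all-constant window does not divide by zero
    return (data - data.mean()) / (data.std() + eps)
```

Note that clipping changes the data, so the tolerance should be chosen relative to the scale of the real signal.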
I have many zeros in my training data, so the function normalise_windows raises: float division by zero. The error happens at: 'normalised_col = [((float(p) / float(window[0, col_i])) - 1) for p in window[:, col_i]]'. Since my training data contains many zeros, how can I normalize it?
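One hedged sketch of a guarded variant of that normalization, assuming each window is a 2-D array of shape (timesteps, features); the fallback-to-zeros behavior when the base value is zero is an assumption, not the project's own fix:

```python
import numpy as np

def normalise_windows_safe(window_data):
    """Normalize each window column as (p / base) - 1, guarding zero bases.

    If the first value of a column is exactly zero, the column is left as
    zeros instead of dividing by zero (a hypothetical fallback choice).
    """
    normalised = []
    for window in window_data:
        window = np.asarray(window, dtype=float)
        cols = []
        for col_i in range(window.shape[1]):
            base = window[0, col_i]
            if base == 0.0:
                # Zero base price: no meaningful relative change, emit zeros
                cols.append(np.zeros(window.shape[0]))
            else:
                cols.append(window[:, col_i] / base - 1.0)
        normalised.append(np.stack(cols, axis=1))
    return normalised
```

Alternatively, dropping zero-heavy windows or shifting the series so it has no zeros before normalizing are common workarounds, depending on what the zeros mean in the data.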