Training error - huge difference (encog 3.1.0) #55
Comments
BTW: It is not a problem to reproduce the same error. It is enough to run the sample code 100 times and you will get the same problem.
I probably know where the problem is. I used this (basic) code and got these outputs: the problem is that in some situations the calculation of train.Error and network.CalculateError can produce a huge difference, as in my sample 0,058253% (train.Error) vs 22,434797% (CalculateError). I did not see this huge difference when I used a loop with "while (network.CalculateError(trainingSet) > 0.001);". Training takes more time, but the trained output is correct in all situations (it would be nice to have a final solution rather than this work-around). BTW: the problem also exists in the Java code; I tested both C# and Java.
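The work-around described above (keep iterating until the post-update error drops below the threshold) can be sketched without Encog. This is a minimal illustration, not the poster's code: the toy single-weight "network", `evaluate()`, and `iteration()` are stand-ins for `network.CalculateError(trainingSet)` and `train.Iteration()`.

```java
// Sketch of the work-around: use the POST-iteration (evaluated) error as the
// stopping criterion instead of the pre-iteration training error.
public class TrainUntilEvaluated {
    static double w = 5.0;                         // toy single-weight "network"
    static double evaluate() { return w * w; }     // stands in for CalculateError
    static void iteration() { w -= 0.1 * 2 * w; }  // one gradient step on E(w) = w^2

    public static void main(String[] args) {
        int epoch = 0;
        double threshold = 0.001;
        do {
            iteration();
            epoch++;
        } while (evaluate() > threshold);          // checked AFTER the weight update
        System.out.printf("Converged after %d epochs, error=%f%n", epoch, evaluate());
    }
}
```

Because the loop condition is evaluated after the update, the loop cannot exit with a stale error figure, which is exactly why this work-around avoids the reported discrepancy.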
This is the way the training code is designed. train.Error is the error at the beginning of a training iteration (before the weights are updated), whereas CalculateError is the error AFTER an iteration. They will always move in lockstep, offset by one iteration, like you have there. Your results above follow this: epoch 35's evaluated error becomes the regular error for epoch 36, and the same from 36 to the final epoch. More info here: http://www.jeffheaton.com/2014/03/when-is-a-models-training-error-calculated/ Also, sometimes the random weights will produce a network that cannot be trained for XOR. If it takes 100 or so runs to see a large difference, you might be seeing that case.
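The one-iteration offset described above can be demonstrated with a toy gradient-descent loop (a self-contained sketch, not Encog): the "train error" measured before the weight update at epoch N is exactly the "evaluated error" measured after the update at epoch N-1.

```java
// Toy example of the lockstep between pre-update (train) and post-update
// (evaluated) error, on a single weight with E(w) = w^2.
public class ErrorTiming {
    public static void main(String[] args) {
        double w = 5.0;                  // single weight, minimum at w = 0
        double lr = 0.4;
        double prevEvaluated = Double.NaN;
        for (int epoch = 1; epoch <= 3; epoch++) {
            double trainError = w * w;   // error BEFORE this epoch's update
            w -= lr * 2 * w;             // gradient step on E(w) = w^2
            double evaluated = w * w;    // error AFTER the update
            System.out.printf("Epoch #%d train.Error=%f evaluated=%f%n",
                              epoch, trainError, evaluated);
            // This epoch's train error equals last epoch's evaluated error.
            if (epoch > 1 && trainError != prevEvaluated) {
                throw new AssertionError("lockstep broken");
            }
            prevEvaluated = evaluated;
        }
    }
}
```

In a real training run the same shift applies, which is why a large post-update error at the final epoch (as in the report above) is only visible through CalculateError, never through that epoch's train.Error.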
Why did Epoch #36 jump from train.Error 0,058253% to a whopping evaluated error of 22,434797%, whereas Epoch #35 decreased from train.Error 0,125270% to an evaluated error of 0,058253%, which is more or less what one would expect?
I used the common XOR sample for neural networks (training method ResilientPropagation, training until Error < 0.001) and I got a huge error after training (ideal 0, actual value 0,989125420071542); see the output:
Epoch #1 Error:0,403222760807917
Epoch #2 Error:0,326979855722731
...
Epoch #42 Error:0,00152763617214056
Epoch #43 Error:0,000498892283437333
Neural Network Results:
0,0, actual=0,00861768412365147,ideal=0
1,0, actual=0,982667334534116,ideal=1
0,1, actual=0,998007704200434,ideal=1
1,1, actual=0,989125420071542,ideal=0 (this seems to be the error)
part of source code (I used encog-dotnet-core-3.1.0):
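The attached source code did not survive in the thread. Given the output above, it was presumably close to the canonical Encog 3.x XOR "Hello World" sample; a hedged reconstruction in Java (the thread confirms the same behavior in Java; class names and the 2-3-1 topology are assumptions based on the standard Encog quickstart, not the poster's exact code):

```java
import org.encog.engine.network.activation.ActivationSigmoid;
import org.encog.ml.data.MLData;
import org.encog.ml.data.MLDataPair;
import org.encog.ml.data.MLDataSet;
import org.encog.ml.data.basic.BasicMLDataSet;
import org.encog.neural.networks.BasicNetwork;
import org.encog.neural.networks.layers.BasicLayer;
import org.encog.neural.networks.training.propagation.resilient.ResilientPropagation;

public class XorSample {
    public static double XOR_INPUT[][] = { {0,0}, {1,0}, {0,1}, {1,1} };
    public static double XOR_IDEAL[][] = { {0}, {1}, {1}, {0} };

    public static void main(String[] args) {
        // 2-3-1 network with sigmoid activations and random initial weights
        BasicNetwork network = new BasicNetwork();
        network.addLayer(new BasicLayer(null, true, 2));
        network.addLayer(new BasicLayer(new ActivationSigmoid(), true, 3));
        network.addLayer(new BasicLayer(new ActivationSigmoid(), false, 1));
        network.getStructure().finalizeStructure();
        network.reset();

        MLDataSet trainingSet = new BasicMLDataSet(XOR_INPUT, XOR_IDEAL);
        ResilientPropagation train = new ResilientPropagation(network, trainingSet);

        int epoch = 1;
        do {
            train.iteration();
            // NOTE: getError() is the error BEFORE this iteration's update,
            // which is the source of the discrepancy discussed in this issue.
            System.out.println("Epoch #" + epoch + " Error:" + train.getError());
            epoch++;
        } while (train.getError() > 0.001);
        train.finishTraining();

        System.out.println("Neural Network Results:");
        for (MLDataPair pair : trainingSet) {
            MLData output = network.compute(pair.getInput());
            System.out.println(pair.getInput().getData(0) + ","
                + pair.getInput().getData(1)
                + ", actual=" + output.getData(0)
                + ",ideal=" + pair.getIdeal().getData(0));
        }
    }
}
```

Running this sketch requires the Encog 3.x jar on the classpath; occasionally the random initialization yields a run like the one reported, where the loop exits on a stale pre-update error while the actual network error is still large.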