SE is not working properly after 4th layer #80
Could you give more details about your problem, e.g. network architecture, task?
Thank you for the fast reply.
Model architecture: ResNeXt-101, dataset: HMDB51. I am using the SE module in a 3D CNN for action recognition, to detect human actions in videos.
As I mentioned, SE only misbehaves after the 4th layer; in all other cases it works fine. With SE after the 4th layer, my result is lower. The problem I get is shown below.

Validation output in the normal case:
Val_Epoch: [1][1/47] Time 2.534 (2.534) Data 2.093 (2.093) Loss 3.7578 (3.7578) Acc 0.094 (0.094)

With SE after the 4th layer:
Val_Epoch: [1][1/47] Time 2.436 (2.436) Data 1.991 (1.991) Loss 3.8625 (3.8625) Acc 0.031 (0.031)

As you can see, the validation accuracy in the second case is lower than in the first. However, with https://github.com/kenshohara/3D-ResNets-PyTorch (the same as the first code) SE worked fine even after the 4th layer, even though the ResNeXt-101 model is the same. I could not find the reason in the code, and I do not understand how this is possible.
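For reference, here is a minimal sketch of what a Squeeze-and-Excitation block for 3D feature maps of shape (N, C, T, H, W) could look like in PyTorch. The class name `SELayer3D` and the `reduction=16` bottleneck are assumptions for illustration, not the exact code used in this repository:

```python
import torch.nn as nn

class SELayer3D(nn.Module):
    """Squeeze-and-Excitation block for 3D feature maps of shape (N, C, T, H, W)."""

    def __init__(self, channels, reduction=16):
        super().__init__()
        # Squeeze: global spatio-temporal average pooling down to (N, C, 1, 1, 1)
        self.avg_pool = nn.AdaptiveAvgPool3d(1)
        # Excitation: bottleneck MLP producing per-channel weights in [0, 1]
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction, bias=False),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels, bias=False),
            nn.Sigmoid(),
        )

    def forward(self, x):
        n, c, _, _, _ = x.size()
        w = self.avg_pool(x).view(n, c)        # (N, C) channel descriptor
        w = self.fc(w).view(n, c, 1, 1, 1)     # (N, C, 1, 1, 1) channel weights
        return x * w                           # recalibrate channels
```

Note that in a standard ResNeXt-101 the 4th stage outputs 2048 channels, so an SE block placed there uses a much wider squeeze/excitation MLP (bottleneck of 128 units with `reduction=16`) than the earlier stages.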
Hello,
The SE layer works fine after the 1st, 2nd, and 3rd layers.
However, when I apply it after the 4th layer, my test accuracy is lower than usual (by about 2%), and my training accuracy starts from a lower value.
I could not find the reason.
If I have explained the question clearly enough, what do you think about this issue?
Do you have any explanation?
Thank you
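For illustration, here is one way the SE block sketched above might be attached after a given stage of a 3D ResNeXt backbone. The attribute names `layer1`–`layer4` follow the convention used in kenshohara/3D-ResNets-PyTorch; the helper name and default channel count are assumptions, not code from this repository:

```python
import torch.nn as nn

def attach_se_after_stage(backbone, stage_name="layer4", channels=2048, reduction=16):
    """Hypothetical helper: wrap backbone.<stage_name> so its output passes
    through an SE block.

    `channels` must match the stage's output width (e.g. 2048 after layer4,
    1024 after layer3 in ResNeXt-101).
    """
    stage = getattr(backbone, stage_name)
    se = SELayer3D(channels, reduction)                    # SELayer3D as sketched above
    setattr(backbone, stage_name, nn.Sequential(stage, se))
    return backbone
```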