Hello author,
In your paper, ResNet-104 with T=1 reaches an impressive accuracy of 75.92% on ImageNet. However, I am only able to reach 70.56% using parameters consistent with your supplementary materials. Furthermore, when loading the events file you provided into TensorBoard, I noticed that the test accuracy is 74.14% while the training accuracy is only 64.25%, i.e. the test accuracy is roughly 10 percentage points higher than the training accuracy. I look forward to your response.
Are you running the tests on the vanilla or the attention-based SNN? To enable attention, you need to set the type of attention in the Config.py file of each dataset via the self.attention hyperparameter (default "no"). Valid values are CA, TA, SA, CSA, TCA, TSA, TCSA, or no. With that set, you should get results within the margin of error. One 'problem' I ran into with the paper, however, was the clip hyperparameter: I ran all my tests with clip set to 1 and got the same results as reported in the paper. Are you running your code with clip = 1 as well?
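For reference, a minimal sketch of what the relevant Config.py entries might look like, based only on the hyperparameter names mentioned above; the actual class layout and attribute names in the repo may differ:

```python
# Hypothetical sketch of the per-dataset Config.py settings discussed above.
# Only self.attention and self.clip are taken from the reply; everything else
# (class name, constructor shape) is assumed for illustration.
class Config:
    def __init__(self):
        # Attention variant: one of "CA", "TA", "SA", "CSA", "TCA", "TSA",
        # "TCSA", or "no" ("no" runs the vanilla SNN baseline).
        self.attention = "TCSA"
        # Gradient clipping value; the reply reports matching the paper's
        # numbers with clip = 1.
        self.clip = 1
```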