I set 'dt' to 15 and 'T' to 60 as in TABLE I for the DVS128 Gesture dataset, and got a test accuracy of 89.9305% at epoch 138, whereas your paper reports an accuracy of 96.53%. Is there any modification to the code needed to reach the claimed testing accuracy?
I tested all A-SNNs for this paper, and my results are consistent with those reported. If you are getting accuracy around 90%-92%, it's because you are training a vanilla SNN without attention. To enable attention, specify the attention type in the Config.py file of each dataset via the self.attention hyperparameter (default "no"). You can set it to CA, TA, SA, CSA, TCA, TSA, TCSA, or no. The results in the paper use dt = 15 and T = 60 for the DVS128 Gesture dataset. The only difference in my runs was that CSA gave the best test accuracy, at 96.32%. These are all GPU results; CPU should give the same results too, just with longer training time.
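For illustration, here is a minimal sketch of what the relevant part of Config.py might look like, based only on the names mentioned above (self.attention, dt, T); the actual class in the repo may have more fields and a different structure:

```python
# Hypothetical sketch of the DVS128 Gesture Config.py hyperparameters.
# Field names are taken from the discussion above; everything else is assumed.

class Config:
    def __init__(self):
        # Attention variant: one of "CA", "TA", "SA", "CSA", "TCA", "TSA",
        # "TCSA", or "no" (vanilla SNN without attention).
        self.attention = "CSA"  # change this from the default "no"
        # Temporal settings matching TABLE I for DVS128 Gesture:
        self.dt = 15
        self.T = 60


cfg = Config()
print(cfg.attention, cfg.dt, cfg.T)  # → CSA 15 60
```

The key point is simply that self.attention must be changed from "no" before training, otherwise the attention modules are never built.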