Hello. I am trying to implement a GLM-HMM on our mouse and human behavior datasets, in which subjects must respond to changes in the speed of a noisy drifting grating stimulus. We have four behavioral outcomes in total: false alarms (licking before the change happens), hits, misses, and aborts, so we are trying to run the GLM with ssm's categorical observation models.
While familiarizing myself with your repo, I got a little confused by this part of your GLM. Here, you append a vector of zeros to the single weight vector that you created. I believe you implemented it this way because, in a 2AFC task, one choice is always compared against the other. However, do you think it is meaningful to create independent weight vectors for 0 and 1? For example, I can initialize the weight vectors like below:
self.Wk = npr.randn(1, C, M + 1)
and use this Wk with an offset term to compute the dot product of Wk and the inputs, without appending any zero vectors.
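To make the comparison concrete, here is a minimal sketch of the two parameterizations side by side. This is my own illustration, not code from the repo: `npr` is assumed to be `numpy.random` as in ssm, and the names `C` (number of categories), `M` (input dimension), `W_free`, and `W_full_from_ref` are hypothetical.

```python
import numpy as np
import numpy.random as npr

C, M = 2, 5  # number of categories, input dimension

# Reference-category parameterization (as I understand the repo): learn
# C - 1 weight vectors and append a fixed row of zeros for the reference.
W_free = npr.randn(1, C - 1, M + 1)  # +1 column for the offset term
W_full_from_ref = np.concatenate(
    [W_free, np.zeros((1, 1, M + 1))], axis=1)  # shape (1, C, M + 1)

# Unconstrained parameterization (the modification described above):
# one weight vector per category, no zero row appended.
Wk = npr.randn(1, C, M + 1)

# Either way, the per-category logits are a dot product with the
# offset-augmented input vector.
x = np.concatenate([npr.randn(M), [1.0]])   # input plus offset
logits = np.einsum('kcm,m->kc', Wk, x)      # shape (1, C)
```

In both cases the downstream softmax sees one logit per category; the only difference is whether one row of weights is pinned to zero.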
Interestingly, this gave us results that look very similar to your plots (shown below). The right plot is generated with your original script, and the left plot with my modified version. The blue line shows the weights for right choices, and the orange line the weights for left choices. If you take the difference between the blue and orange lines, the values are identical to those shown in the right plot (which makes sense, because the right plot was created with the left-choice weights fixed at zero).
Is there a specific reason you implemented the code this way (perhaps computational efficiency)? We are also quite curious whether the same holds for our categorical observations with more than two categories. If it does not hold for multinomial observations, we will have to think carefully about which outcome to use as the reference.
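For what it's worth, our working assumption is that the C > 2 case behaves the same way, because softmax probabilities are invariant to subtracting one category's weight vector from all of them: pinning any one category to zero only removes a redundancy. A small numerical check of that invariance (the `softmax` helper and variable names here are my own, not from ssm):

```python
import numpy as np

def softmax(logits):
    # Numerically stable softmax over a 1-D array of logits.
    e = np.exp(logits - logits.max())
    return e / e.sum()

rng = np.random.default_rng(0)
C, M = 4, 3  # e.g. FA / hit / miss / abort, with 3 regressors
W = rng.normal(size=(C, M))   # one weight vector per outcome
x = rng.normal(size=M)

p_full = softmax(W @ x)

# Subtract the last category's weights from every row: the last row
# becomes all zeros (the "reference"), yet the predicted probabilities
# are unchanged, because every logit shifts by the same constant.
W_ref = W - W[-1]
p_ref = softmax(W_ref @ x)

assert np.allclose(p_full, p_ref)
```

If this invariance is indeed what the zero-row exploits, then which outcome serves as the reference should not affect the fitted probabilities, only the interpretation of the individual weight vectors.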
Thanks!