Create GLM weight vectors for individual outcomes #9

Open
sumiya-kuroda opened this issue Jan 9, 2024 · 0 comments
Hello. I am trying to implement a GLM-HMM on our mouse and human behavior datasets, in which subjects are required to respond to changes in the speed of a noisy drifting grating stimulus. We have four behavioral outcomes in total: false alarms (FAs; licking before the change happens), hits, misses, and aborts, so we are trying to run the GLM with ssm's categorical observation models.

While familiarizing myself with your repo, I got a little confused by this part of your GLM. Here, you append a vector of zeros to the single weight vector that you created. I believe you implemented it this way because a 2AFC task always compares one choice against the other. However, do you think it is meaningful to create independent weight vectors for both outcomes 0 and 1? For example, I can initialize the weights like this:

self.Wk = npr.randn(1, C, M + 1)

and use this Wk, together with an offset term appended to the inputs, to compute the dot product of Wk and the inputs, without appending any zero-value vectors.
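
For concreteness, here is a minimal numpy sketch of the two parameterizations as I understand them; the shapes (C classes, M inputs) and the zero-padding step are my assumptions about what the repo does, not its actual code:

```python
import numpy as np
import numpy.random as npr

C, M = 2, 4  # assumed: number of outcome classes and number of input features

# Reference-class parameterization (what I believe the repo does): learn
# weights for C - 1 classes, then append a fixed row of zeros as the
# reference class before taking the softmax.
Wk_ref = npr.randn(1, C - 1, M + 1)                # free weights, incl. offset
Wk_padded = np.concatenate(
    [Wk_ref, np.zeros((1, 1, M + 1))], axis=1)     # zero vector appended

# Alternative proposed here: learn a free weight vector for every class.
Wk_all = npr.randn(1, C, M + 1)

# Either way, class probabilities come from a softmax over the logits.
x = np.append(npr.randn(M), 1.0)                   # inputs plus offset term
logits = Wk_padded[0] @ x                          # shape (C,)
probs = np.exp(logits) / np.exp(logits).sum()
```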

Interestingly, this gave us results that look very similar to your plots (shown below). The right plot was generated with your original script, and the left plot with my modified version. The blue line shows the weights for right choices and the orange line the weights for left choices. If you take the difference between the blue and orange lines, the values are identical to those shown on the right (which makes sense, since the right plot was produced with the left-choice weights fixed at zero).
[Figure: GLM weight comparison — left: modified script with independent weight vectors per choice; right: original script with a zero reference vector]
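
If it helps, I think the match is explained by the shift-invariance of the softmax: adding the same vector to every class's weights leaves the probabilities unchanged, so only weight differences are identifiable. A small sketch of this (my own illustration, not code from the repo):

```python
import numpy as np
import numpy.random as npr

M = 4
Wk = npr.randn(2, M + 1)              # free weights: right (row 0), left (row 1)
x = np.append(npr.randn(M), 1.0)      # inputs plus offset term

def softmax(z):
    z = z - z.max()                   # subtract max for numerical stability
    return np.exp(z) / np.exp(z).sum()

# Subtract the left-choice weights from both rows: the left row becomes the
# zero reference, but the class probabilities are unchanged.
Wk_shifted = Wk - Wk[1]
assert np.allclose(softmax(Wk @ x), softmax(Wk_shifted @ x))
```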

Do you have a specific reason for implementing the code this way (perhaps for computational efficiency)? We are also quite curious whether the same equivalence holds for categorical observations with more than two categories. If it does not hold for multinomial observations, we will have to think carefully about which outcome to use as the reference.
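
My current understanding (please correct me if I am wrong) is that the same invariance holds for any number of classes, so the reference outcome would be a convention rather than a modeling assumption; with priors or regularization on the weights, though, the choice could matter. A quick check with four outcomes:

```python
import numpy as np
import numpy.random as npr

C, M = 4, 6                           # e.g. four outcomes: FA, hit, miss, abort
Wk = npr.randn(C, M + 1)
x = np.append(npr.randn(M), 1.0)

def softmax(z):
    z = z - z.max()
    return np.exp(z) / np.exp(z).sum()

# Pinning any one outcome's weights to zero yields identical probabilities.
for ref in range(C):
    assert np.allclose(softmax(Wk @ x), softmax((Wk - Wk[ref]) @ x))
```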

Thanks!
