
ICA does not converge #19

Open
larsoner opened this issue Oct 10, 2017 · 8 comments

Comments

@larsoner
Member

For 3 of the runs across subjects, I get:

/usr/lib/python2.7/dist-packages/sklearn/decomposition/fastica_.py:116: UserWarning: FastICA did not converge. Consider increasing tolerance or the maximum number of iterations.
  warnings.warn('FastICA did not converge. Consider increasing '

Should we increase the number of iterations?
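For reference, a minimal sketch (with synthetic data, not the actual MEG recordings) of how the convergence warning can be addressed by raising scikit-learn's `max_iter` (default 200) or loosening `tol` (default 1e-4):

```python
import numpy as np
from sklearn.decomposition import FastICA

# Synthetic mixed signals standing in for the real data: 3 sources, 2000 samples.
rng = np.random.RandomState(42)
t = np.linspace(0, 8, 2000)
S = np.c_[np.sin(2 * t), np.sign(np.sin(3 * t)), rng.laplace(size=2000)]
A = rng.rand(3, 3)  # random mixing matrix
X = S @ A.T

# Raising max_iter (and/or tol) is what the UserWarning suggests.
ica = FastICA(n_components=3, max_iter=1000, tol=1e-4, random_state=42)
sources = ica.fit_transform(X)  # shape (2000, 3)
```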

Also, any reason we don't concatenate all raw instances and then run ICA? This would also simplify Epochs creation because we could concatenate raw and create Epochs directly, possibly without ever preloading the data.

@jasmainak
Member

You are saying it's possible to run ICA without preloading? I think it was mainly memory considerations. We fit a separate ICA for every run. But in any case, we realized that the ECG component is not always detected robustly across subjects, even if it is present (See the ICA results from plot_analysis_xx.py). So, perhaps we could de-emphasise this aspect of the analysis or remove it altogether since it doesn't affect evoked responses so much.

@larsoner
Member Author

> You are saying it's possible to run ICA without preloading?

Yep

> But in any case, we realized that the ECG component is not always detected robustly across subjects, even if it is present (See the ICA results from plot_analysis_xx.py).

By concatenating runs we will get 6x the number of samples, so hopefully it will work better. I've made the change in the ICA script and will modify the other ones to follow suit.

> So, perhaps we could de-emphasise this aspect of the analysis or remove it altogether since it doesn't affect evoked responses so much.

In theory we could. But it's good to show people that preprocessing can be done easily, and (hopefully!) that it's effective.

@jasmainak
Member

> Yep

Okay, I will look at your script. I think we had this issue that to change the channel type, you needed to preload the data into memory ...

@larsoner
Member Author

I've made the change and am running now.

> I think we had this issue that to change the channel type, you needed to preload into memory

This should be done purely in info, so if that's true, it's a bug.

@larsoner
Member Author

Actually, you don't have to have the raw instances preloaded, but the ICA fit still consumes a decent amount of memory (about 4 GB). Is this okay? All of them look to be converging, so it seems like a better solution.

@jasmainak
Member

4 GB sounds fine to me. The 05-make_epochs script takes more memory. Did you see my other PR?

Here are the results:

[figure: memory usage]

@larsoner
Member Author

In theory we don't ever need to hold all epochs in memory. How does autoreject handle that? Will it do a ton of rereading of data?

@agramfort
Member

agramfort commented Oct 10, 2017 via email
