
Reference brainmask failing for highly intensity nonuniform bold data #1000

Closed
zhifangy opened this issue Feb 21, 2018 · 7 comments

@zhifangy
Contributor

Hi, I recently ran some data with high spatial resolution (1.6 mm isotropic) collected with a 64-channel head coil. The raw BOLD intensity was highly nonuniform: the intensity of the frontal midline regions was only roughly 1/3 of that of the brightest cortical regions. When I ran these data with fmriprep-1.0.0, the brain mask generated by the BOLD reference workflow looked OK to me. After the 1.0.1 update, the `n4_mask` step of the bold_reference workflow often generated a very poor brain mask that omitted many midbrain regions, and the problem persisted after the 1.0.7 update (see example below). To the best of my knowledge this should only affect the EPI-T1 registration and EPI unwarping steps; the final registration still looks OK to me, however.
[screenshot, 2018-02-21: BOLD reference brain mask omitting midline/midbrain regions]

After digging into the BOLD reference workflow, I pinned the problem down to `n4_mask` in the `enhance_and_skullstrip_bold` workflow. I am aware of the recent change in 1.0.7, but the lower `upper_cutoff` didn't work for my data. I tried several other upper/lower cutoff combinations together with the log-transform trick, but they still gave me similarly bad results. I suspect the histogram-based method in nilearn's `compute_epi_mask` function may be unsuitable for data in which the signal intensity of some brain regions overlaps with that of non-brain regions.
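For context, here is a minimal NumPy sketch (my own toy reconstruction, not the actual nilearn code) of the sorted-intensity "largest gap" thresholding idea that `compute_epi_mask` relies on, illustrating why it degrades once brain and non-brain intensities blend together:

```python
import numpy as np

def largest_gap_threshold(data, lower_cutoff=0.2, upper_cutoff=0.85):
    """Toy reconstruction of the histogram trick behind nilearn's
    compute_epi_mask: sort the voxel intensities, restrict attention
    to the [lower_cutoff, upper_cutoff] quantile range, and place the
    threshold in the middle of the largest gap between consecutive
    sorted values."""
    vals = np.sort(np.ravel(data))
    lo = int(lower_cutoff * vals.size)
    hi = int(upper_cutoff * vals.size)
    gaps = np.diff(vals[lo:hi])
    i = int(np.argmax(gaps))
    return 0.5 * (vals[lo + i] + vals[lo + i + 1])

rng = np.random.default_rng(0)

# Clean bimodal case: air ~N(50, 10), brain ~N(500, 50).
# The biggest intensity gap sits between the two modes, so the
# threshold lands sensibly between background and brain.
clean = np.concatenate([rng.normal(50, 10, 4000),
                        rng.normal(500, 50, 6000)])
t_clean = largest_gap_threshold(clean)

# Strong bias field: the darkest brain voxels slide down into the
# background intensity range, the between-mode gap vanishes, and the
# "largest gap" lands at an essentially arbitrary intensity.
biased = np.concatenate([rng.normal(50, 10, 4000),
                         rng.uniform(60, 600, 6000)])
t_biased = largest_gap_threshold(biased)

print("clean threshold:", t_clean)
print("biased threshold:", t_biased)
```

In the clean case the threshold falls in the valley between the two modes; in the biased case there is no valley left for the gap heuristic to find, which matches the behaviour described above for strongly nonuniform data.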

I did a little hack that replaced nilearn's `compute_epi_mask` with FSL's `bet` in `enhance_and_skullstrip_bold_wf`. It gives me a decent (slightly loose) brain mask to feed to `n4_correct`, all follow-up steps ran smoothly, and the final results look good to me.
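For reference, the BET call itself is a one-liner. The filenames and flag values below are my own guesses at reasonable settings (the exact options used in the hack aren't stated), not the actual modification:

```shell
# Hypothetical stand-in for the compute_epi_mask step: skull-strip the
# BOLD reference with FSL BET and keep the binary mask for n4_correct.
# -m writes the binary mask, -R re-estimates the brain centre robustly,
# and a low -f gives a deliberately loose (inclusive) brain outline.
bet ref_bold.nii.gz ref_bold_brain -m -R -f 0.3
# BET names its outputs ref_bold_brain.nii.gz and ref_bold_brain_mask.nii.gz
```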

I was wondering if there’s any other option I could try instead of hacking the workflow?

@chrisgorgo
Contributor

Thanks for reporting. The challenge here is that we need a solution that works well on a diverse set of datasets (or a command-line option to use an alternative method such as `bet` for edge cases).

It would be great if you could help us by:

  1. Sharing the full report generated with vanilla 1.0.7 and with 1.0.7 plus your modifications.
  2. Opening a pull request with your proposed changes.
  3. Sharing the problematic data with us on https://OpenNeuro.org.

@zhifangy
Contributor Author

zhifangy commented Feb 22, 2018

@chrisfilo Here's the link to one participant's report: https://drive.google.com/file/d/1NiHf1sUeyj1P7-IvoS2l9T4gj8Q1n87g/view?usp=sharing

I also uploaded one example participant's data to OpenNeuro.org: https://openneuro.org/datasets/ds001240

Hopefully, this will help you resolve the problem.

@chrisgorgo
Contributor

Thanks for sharing the reports - this is very useful.

So in FMRIPREP, BOLD masks are calculated multiple times, but only one of those calculations is presented in the report (the mask calculated in BOLD space after susceptibility distortion correction; please correct me if I am wrong, @oesteban). There is work in progress to reduce the number of times masks are re-estimated and to improve reporting (issue #963, partial solution #1002).

In your case (vanilla FMRIPREP), almost all masks depicted in the ROI plots look very good. There is only one exception that would need a little tweaking (the mask is the red outline):
[screenshot: ROI plot with the problematic mask outlined in red]

The problem is with the masks used for coregistration and susceptibility correction (although this does not seem to have negatively impacted the resulting transformations); those are visibly too conservative. One more thing to check is the masks that are actually saved in the outputs. Do those look OK?

As for the solution to the problem I would vote for:

  1. Trying to tweak the existing skullstripping workflow to deal with the sub-26 task-molencoding run-02 case.
  2. Making sure we limit the number of times we calculate the mask, and reusing the good mask in all steps and outputs.

WDYT?

@chrisgorgo
Contributor

BTW, it would still be great if you could send a PR with your improvements. Maybe they could lead to a robust solution that works for everyone.

@zhifangy
Contributor Author

The output mask is surprisingly good, even though it was problematic in the report.
[screenshot: final output brain mask]

For my data, the problem seems mainly confined to the masking of the BOLD reference image, though I'm a little confused about how the final mask was generated and why it differs from the mask in the report. I noticed the recent changes and hope to get a better understanding after reading the code. IMO, we only need to calculate the BOLD mask twice: first for the initial reference image, and second from the unwarped reference.

I will send a PR after I understand the recent changes of the workflow.

@chrisgorgo
Contributor

chrisgorgo commented Feb 23, 2018 via email

oesteban added a commit that referenced this issue Oct 17, 2018
Refs. #1000, #1050, #1321.

Also includes a new filter of branches so builds other than tests
are skipped if the name of the branch starts with ``tests?/``.
@oesteban
Member

Fixed via #1321
