I initially posted about this on neurostars: https://neurostars.org/t/nipype-merging-outputs-from-bedpostx-parallel-fails/31733/1. But I think I have a quick fix, so I'm reporting it here. I'll also open a PR.
As reported in the neurostars post, I am trying to run the bedpostx_parallel workflow in (old?) nipype. It works fine in all of my test runs on downsampled data (and even in some previous runs on full-resolution data).
However, when I recently tried to run it on the full-resolution data, it failed at the step that merges the outputs from the parallel nodes into single files.
I looked into the issue, and it seems that on this line (nipype/nipype/algorithms/misc.py, line 1465 at bc456dd) the output of `np.squeeze` is a 0-D array, and getting the length of a 0-D array on a subsequent line (nipype/nipype/algorithms/misc.py, line 1467 at bc456dd) isn't possible.
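For illustration, here is a minimal standalone reproduction of the failure mode (the example arrays are made up, not taken from the workflow): `np.squeeze` collapses a single-element index array all the way to 0-D, and `len()` of a 0-D array raises.

```python
import numpy as np

# A chunk with several voxel indices survives np.squeeze as a 1-D array:
idxs = np.squeeze(np.array([[3, 7, 42]]))
print(idxs.shape, len(idxs))  # (3,) 3

# But a single-index chunk is collapsed all the way to a 0-D array,
# and len() of a 0-D array is undefined:
idxs = np.squeeze(np.array([[5]]))
print(idxs.shape)  # ()
try:
    len(idxs)
except TypeError as e:
    print(e)  # len() of unsized object
```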
This could just be an anomaly due to the heuristic nature of the algorithm, and it would probably not recur on a subsequent run. But I still think it could be handled within nipype.
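For example, wrapping the squeezed array in `np.atleast_1d` would keep a lone index as a length-1 array, so the later `len()` call and the fancy indexing keep working unchanged. A minimal sketch (the commented lines are paraphrased from the lines referenced above, not the exact nipype source):

```python
import numpy as np

# Paraphrased from the referenced lines in misc.py:
#   idxs = np.squeeze(f["arr_0"])   # 0-D when the chunk has one voxel
#   nels = len(idxs)                # TypeError for a 0-D array

# np.atleast_1d guarantees at least one dimension:
idxs = np.atleast_1d(np.squeeze(np.array([[5]])))
nels = len(idxs)
print(idxs.shape, nels)  # (1,) 1
```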
Here's the error traceback: