Per our conversation, jobTree sometimes loses track of where imports are coming from.
```
Reporting file: /hive/users/dearl/alignathon/testPSAR/jobTree_flies_reg2_swarm/jobs/tmp_IqnDTgr8uv/tmp_AzbE0HYFWt/tmp_70td82ax1K/log.txt
log.txt: Parsed arguments and set up logging
log.txt: Traceback (most recent call last):
log.txt:   File "/cluster/home/dearl/sonTrace/jobTree/bin/jobTreeSlave", line 206, in main
log.txt:     loadStack(command).execute(job=job, stats=stats,
log.txt:   File "/cluster/home/dearl/sonTrace/jobTree/bin/jobTreeSlave", line 53, in loadStack
log.txt:     _temp = __import__(moduleName, globals(), locals(), [className], -1)
log.txt: ImportError: No module named batchPsar
log.txt: Exiting the slave because of a failed job on host kkr18u44.local
log.txt: Finished running the chain of jobs on this node, we ran for a total of 5.177906 seconds
```
This requires explicitly setting PYTHONPATH when running on swarm; when running on kolossus, simply running from the same directory as the script is sufficient.