This repository has been archived by the owner on Jun 10, 2020. It is now read-only.
This needs to take account of how the first task was run and processed, e.g. if the first task was run locally, run the next task locally rather than enqueuing it to Resque, and vice versa.
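A minimal sketch of the idea, assuming a hypothetical `mode` value (`:local` or `:resque`) that records how the first task was processed; the real worker would call `Resque.enqueue` where the comment indicates:

```ruby
# Dispatch the follow-on task the same way the first task was run.
# :local  -> run in-process immediately
# :resque -> hand off to the queue (Resque.enqueue in the real code)
def run_or_enqueue(task, mode)
  case mode
  when :local
    task.call
  when :resque
    # Resque.enqueue(FollowOnJob, task_id) would go here
    :enqueued
  else
    raise ArgumentError, "unknown mode: #{mode}"
  end
end
```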
Generic dependent enqueue functionality seems to require a substantial refactor.
Instead, for harvesting:
Mirror
A copy_on_success key in the harvest params can be set to true or false. If true, a Mirror job is enqueued when the harvest succeeds, using the copy_base_path key from the worker settings. The mirror job is then processed only by workers configured to monitor that queue. This is tightly coupled, but should do for now.
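A rough sketch of that coupling, with illustrative shapes for the two objects; only `copy_on_success` and `copy_base_path` are named in the discussion above, everything else is a placeholder:

```ruby
# Hypothetical shapes, inferred from the description; real key names in
# baw-workers may differ apart from :copy_on_success and :copy_base_path.
def maybe_enqueue_mirror(harvest_params, worker_settings)
  return :skipped unless harvest_params[:copy_on_success]

  # In the real worker this would be:
  #   Resque.enqueue(MirrorJob, worker_settings[:copy_base_path], ...)
  # and only workers watching the mirror queue would pick it up.
  [:mirror_enqueued, worker_settings[:copy_base_path]]
end
```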
This has already been implemented, see code and discussion for #11 and links above.
Analysis
The analysis after harvesting is not currently implemented. The analysis requires a whole lot more information. This could be provided in two ways I can think of right now:
with the harvest params (which makes that params object rather large)
default settings could be stored somewhere and used the same way as the mirror job (i.e. an analyse_on_success flag, with a worker setting that points to default settings and config files).
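The second option might look something like this; the flag name follows the `analyse_on_success` suggestion above, while the default-settings hash and its keys are purely illustrative:

```ruby
# Hypothetical defaults that a worker setting would point at; the real
# settings would come from config files, not a constant.
DEFAULT_ANALYSIS = {
  script: 'system-index-calculation',
  config: 'default_analysis.yml'
}.freeze

# Enqueue the default analysis after a successful harvest, allowing the
# harvest params to override individual settings.
def maybe_enqueue_analysis(harvest_params, overrides = {})
  return :skipped unless harvest_params[:analyse_on_success]

  # Resque.enqueue(AnalysisJob, job) in the real worker
  DEFAULT_ANALYSIS.merge(overrides)
end
```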
Last time I tested mirror it didn't work; sorry, I should have filed a bug report.
As for the system analysis: at the moment it is enqueued manually, and we're able to provide extra configuration that way. We need to move off this to scale. The obvious solution is for system jobs to have their command line and config stored in the scripts table, just like all other jobs. Once this happens:
get the script object from the API for the system job (the relevant script name/id could be specified in a config file? I'm thinking name is better... then the query is: get the latest script object for the system-index-calculation type job)
merge info and enqueue
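The two steps above could be sketched as follows, assuming script records with `name` and `version` fields; the record shape and the `build_system_job` name are assumptions, not the actual API:

```ruby
# Look up the most recent version of a script by name, standing in for the
# real "latest script object" API query.
def latest_script(scripts, name)
  scripts
    .select { |s| s[:name] == name }
    .max_by { |s| s[:version] }
end

# Merge job-specific info into the script record; the real worker would
# then Resque.enqueue the merged payload.
def build_system_job(scripts, name, info)
  script = latest_script(scripts, name)
  raise "no script named #{name.inspect}" if script.nil?
  script.merge(info)
end
```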
I'm sure there's a lot more going on here, but that's all I've got for now.
Just had a thought here. Perhaps it would make more sense for baw-server to enqueue default analyses. It can hook into the changes much more easily (e.g. on POST, if the status changed to ready, enqueue jobs...) and it by definition has all the information needed to set up new jobs. Thoughts?
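A plain-Ruby sketch of that server-side hook; in Rails this would be an Active Record callback on the recording model, and the class name, status values, and `enqueue_default_analyses` method are all hypothetical:

```ruby
# Stand-in for a baw-server model: when the status transitions to 'ready',
# enqueue the default analyses.
class AudioRecording
  attr_reader :status, :enqueued_jobs

  def initialize(status)
    @status = status
    @enqueued_jobs = []
  end

  # In Rails this would be an after_save callback checking the status change.
  def update_status(new_status)
    old = @status
    @status = new_status
    enqueue_default_analyses if old != 'ready' && new_status == 'ready'
  end

  private

  def enqueue_default_analyses
    # Resque.enqueue(AnalysisJob, id, ...) for each default analysis
    @enqueued_jobs << :default_analysis
  end
end
```

The appeal of this design is that the server already sees every status transition, so no extra coordination between workers is needed to know when a recording becomes ready.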