This repository has been archived by the owner on Jan 10, 2019. It is now read-only.

Fix Async problem for exponential retry, clean up tests, and fix forking #105

Open
wants to merge 18 commits into master

Conversation

mjsteger
Contributor

Fixes the problem noted in #100 and adds a test for it. Cleans up the tests and splits them so that all the unit tests can be run without credentials (for CircleCI/local testing). Finally, fixes forking by removing the logging calls from the signal handlers; see here for more info on the issue. This allows more consistent running of the unit tests.

N.B. the large number of additions/removals is due to moving around/chunking the integration tests, as I was trying out a solution to run those on CircleCI as well (but could not, likely due to some assumptions in the code that prevent them from running correctly in parallel).

mjsteger and others added 18 commits October 5, 2015 07:20
Before, we were getting trap errors from trying to write to the
log (which involves a mutex) inside a signal handler. This is not
allowed in Ruby >= 2.0.
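The restriction can be illustrated with a minimal plain-Ruby sketch (not the framework's actual worker code): Ruby's Logger acquires a mutex, and taking a mutex inside a trap handler raises ThreadError in Ruby >= 2.0, so the safe pattern is to only flip a flag in the handler and do the logging from the main loop.

```ruby
require 'logger'

logger = Logger.new($stdout)
shutdown = false

# The signal handler must not take locks (Logger does internally), so it
# only sets a flag; the logging happens outside the trap context.
trap("INT") { shutdown = true }

# Simulate a Ctrl-C arriving shortly after startup (demonstration only).
Thread.new { sleep 0.2; Process.kill("INT", Process.pid) }

sleep 0.05 until shutdown
logger.info "Received SIGINT, shutting down cleanly"
```

Calling `logger.info` directly inside the `trap` block instead would be exactly the kind of code that triggers the trap errors described above.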
The test no longer applies; WorkflowWorkers now return almost
instantly after SIGINT, so there's no need to test that they hit the
"bail out on second SIGINT" path. Came to this conclusion after trying
to test said behavior for some time. If there's any way to trigger that
behavior, I'd love to hear it.
People might be using poll_and_process_single_task directly, and it'd
suck to break their use case.
@pdrakeweb

In testing this PR, I was unable to use error_handler-based asynchronous error handling to capture the error raised by asynchronous activities that use the exponential retry mechanism. Is that expected, or was I perhaps going about it improperly?

@mjsteger
Contributor Author

I'm not sure precisely what you mean; could you provide an example? It is expected that you can use the error_handler to handle errors in async activities with exponential retry, though note that the activities will be retried before the error finally throws out.
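The retried-before-throwing behavior can be sketched in plain Ruby (this is an illustration, not the aws-flow gem's API; `with_exponential_retry` is a hypothetical helper standing in for the framework's retry mechanism):

```ruby
# Hypothetical helper illustrating exponential retry: the block is retried
# with exponentially growing delays, and only the final failure propagates.
def with_exponential_retry(max_attempts: 3, base_delay: 0.01)
  attempts = 0
  begin
    attempts += 1
    yield
  rescue StandardError
    raise if attempts >= max_attempts   # out of retries: let the error escape
    sleep(base_delay * (2**(attempts - 1)))
    retry
  end
end

calls = 0
begin
  with_exponential_retry(max_attempts: 3) do
    calls += 1
    raise "activity failed"             # always fails, to exhaust the retries
  end
rescue StandardError => e
  # The outer rescue (analogous to an error_handler rescue block) only
  # sees the error after all attempts have been used up.
  puts "caught '#{e.message}' after #{calls} attempts"
end
```

So a correctly wired error handler would still observe a delay: the failure only reaches it once the retry budget is exhausted.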

@pdrakeweb

I was finding that, rather than the error_handler rescue block capturing the raised failure, the failure was causing the workflow to fail. It may be that I was doing something wrong, but I will try to reproduce the problem with a simple workflow and a couple of activities that I can share.

3 participants