Conversation

@rossabaker
Member

Motivations:

  • A Spark cluster exits with code 16 when sys.exit is called, which makes "successful" jobs look like failures.
  • Returning from main on success, rather than forcibly exiting, allows the platform's natural shutdown to proceed. On the JVM, for example, this gives any non-daemon threads a chance to finish their work (see the sketch below).
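
A minimal sketch of the resulting behavior, assuming the IOApp entry point shipped in this release cycle (the object name and the work it does are purely illustrative): run returns an ExitCode, and on ExitCode.Success the application simply returns rather than calling sys.exit, so shutdown proceeds naturally.

    import cats.effect.{ExitCode, IO, IOApp}

    // Hypothetical example; MyJob and its "work" are illustrative only.
    object MyJob extends IOApp {
      def run(args: List[String]): IO[ExitCode] =
        IO(println("doing work"))
          // Return normally on success: no sys.exit, so any non-daemon
          // threads still get a chance to finish before the JVM stops.
          .map(_ => ExitCode.Success)
    }

Failures are still reported and still exit with a nonzero status, as in the snippet discussed below.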

@rossabaker
Member Author

/cc @kaiserpelagic, who has been battling this with fs2.StreamApp.

case Left(t) =>
  // failure: report the error and still exit with a nonzero status
  IO(Logger.reportFailure(t)) *>
    IO(sys.exit(ExitCode.Error.code))
case Right(0) =>
  // success: return without calling sys.exit, so shutdown proceeds naturally
  IO.unit
Member

JS is always tricky. I guess at this point the keep-alive was canceled.

But yes, this is better for JS as well.

@codecov-io

Codecov Report

Merging #252 into master will decrease coverage by 0.12%.
The diff coverage is 0%.

@@            Coverage Diff             @@
##           master     #252      +/-   ##
==========================================
- Coverage   89.57%   89.44%   -0.13%     
==========================================
  Files          57       57              
  Lines        1544     1544              
  Branches      153      149       -4     
==========================================
- Hits         1383     1381       -2     
- Misses        161      163       +2

@alexandru added this to the 1.0.0-RC2 milestone May 29, 2018
@alexandru merged commit 38c2995 into typelevel:master May 29, 2018