Conversation

@brettburley

This prevents specific exceptions from being reported when a job in the Sidekiq integration still has retries remaining. For example, you might configure it with:

Raven.configure do |config|
  config.retryable_exceptions = [HTTP::TimeoutError, Errno::ECONNRESET]
end

See #781 for more details.

Any suggestions as to how I could test this in the "Sidekiq full-stack integration" part of the sidekiq spec? In particular, it doesn't appear that the Raven configuration is reset between tests, and the other tests don't call Raven.configure, so I'm not sure how bad it would be to set configuration in the spec to add a test retryable_exception.

self.project_root = detect_project_root
self.rails_activesupport_breadcrumbs = false
self.rails_report_rescued_exceptions = true
self.retryable_exceptions = []

@brettburley
Author

These changes fail two lint rules:

  1. Assignment Branch Condition size for initialize is too high; can we just increase this from 30 to ~32? Otherwise, I'm not sure there's a better place to put this configuration.
  2. Class has too many lines; I'm not sure if you're OK with increasing this limit as well, or if we should try pulling out some of the common functionality. For example, retryable_exception? and excluded_exception? share logic via get_exception_class, which we could pull out into a util or similar (a rough sketch of that extraction follows below).
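
For illustration only, here is one hypothetical shape that extraction could take; the module and method names below are made up for the example and are not the actual raven-ruby internals:

# Hypothetical sketch: pull the shared class lookup into one place so that
# both predicates can delegate to it.
module Raven
  module Utils
    module ExceptionMatcher
      module_function

      # Resolve a String or Class entry to a constant, or nil if it is not
      # defined in this process.
      def get_exception_class(value)
        value.is_a?(Module) ? value : Object.const_get(value.to_s)
      rescue NameError
        nil
      end

      # True if exc is an instance of any class named in list.
      def matches_any?(exc, list)
        list.any? do |candidate|
          klass = get_exception_class(candidate)
          klass && exc.is_a?(klass)
        end
      end
    end
  end
end

retryable_exception? and excluded_exception? could then both call Raven::Utils::ExceptionMatcher.matches_any? with their respective lists.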

@nateberkopec
Contributor

I think this sort of behavior can already be accomplished with should_capture. I'll write up an example and put it in the docs.
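
For reference, a minimal sketch of that should_capture approach, reusing the example classes from the PR description (note that, as written, it drops those exceptions on every attempt, not only while retries remain):

Raven.configure do |config|
  # should_capture receives the exception (or message) about to be reported;
  # returning false drops the event.
  config.should_capture = Proc.new do |message_or_exc|
    retryable = [HTTP::TimeoutError, Errno::ECONNRESET]
    !retryable.any? { |klass| message_or_exc.is_a?(klass) }
  end
end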

@brettburley
Author

@nateberkopec good point. Looking at the implementation of should_capture, it looks like it's only currently passed the exception itself, and not the additional options added to capture_exception in the Sidekiq exception handler. However, that information is tagged on Raven.extra_context[:sidekiq] -- would you recommend just reading it from there within the capture handler?
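
For illustration, one possible shape of that idea, assuming (per the comment above) that the Sidekiq job payload is reachable via Raven.extra_context[:sidekiq]; the "retry" key follows Sidekiq's job hash and is an assumption here:

Raven.configure do |config|
  config.should_capture = Proc.new do |message_or_exc|
    job = Raven.extra_context[:sidekiq]
    retryable = [HTTP::TimeoutError, Errno::ECONNRESET]
    if job && job["retry"] && retryable.any? { |klass| message_or_exc.is_a?(klass) }
      # Skip reporting while the job is configured to retry. Note that this
      # also swallows the final failure once retries are exhausted; the
      # death_handlers approach mentioned later in the thread avoids that.
      false
    else
      true
    end
  end
end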

@gingerlime

@brettburley thanks for the PR. Curious how you managed to resolve this eventually? I bumped into the same problem and want to find a way to log errors to Sentry, but not on every retry, perhaps only if a job fails for at least X retries... I guess I can do something with should_capture and digging into the extra sidekiq context, but it feels a bit cumbersome for something that should evaluate quickly. Also, as @nateberkopec commented before, shouldn't this really be part of the sidekiq integration?

@gingerlime

@brettburley sorry for nagging, but I would really appreciate it if you could share some more details here.

@gingerlime

Rollbar seems to do this with a configurable sidekiq_threshold
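
For comparison, Rollbar's Sidekiq integration exposes this as a threshold option along roughly these lines:

Rollbar.configure do |config|
  # Only report a Sidekiq job failure once it has failed at least 3 times.
  config.sidekiq_threshold = 3
end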

@krisleech

I too think it would be preferable to have the suppression of retried exceptions built into the sidekiq integration.

@marcsasson

> I think this sort of behavior can already be accomplished with should_capture. I'll write up an example and put it in the docs.

@nateberkopec Can you link to where you wrote an example or can provide one?

@st0012 added this to the 3.1.0 milestone Aug 21, 2020
@seanlinsley
Contributor

Is there a reason not to use death_handlers? It seems to do all the heavy lifting, and since 2018 has covered all job types: sidekiq/sidekiq#3980
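
A minimal sketch of that approach, reporting to Sentry only after Sidekiq has exhausted a job's retries and is about to discard it (handler registration as described in sidekiq/sidekiq#3980):

# e.g. in config/initializers/sidekiq.rb
Sidekiq.configure_server do |config|
  config.death_handlers << ->(job, exception) do
    # Runs only when a job has used up all of its retries, so transient
    # failures that eventually succeed never reach Sentry.
    Raven.capture_exception(exception, extra: { sidekiq: job })
  end
end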

@st0012 modified the milestones: 3.1.0, 4.0.0 Sep 17, 2020
@st0012 changed the base branch from master to v3-1-stable September 18, 2020 16:45
@st0012 modified the milestones: 4.0.0, 3.1.1 Sep 18, 2020
@st0012 changed the base branch from v3-1-stable to master September 24, 2020 10:19
@st0012 modified the milestones: 3.1.1, 3.1.2 Sep 24, 2020
@st0012
Contributor

st0012 commented Feb 10, 2021

Closed because sentry-raven has entered maintenance mode and won't receive any further enhancements.

@st0012 closed this Feb 10, 2021
@st0012 removed this from the 3.1.2 milestone Feb 10, 2021