`async` option will be dropped. #1522
Long-time user of sentry-ruby and sentry-raven. I didn't know this feature existed until our current migration of a large Rails app to Sentry. I noticed it while reading through the docs and attempted to set it up, but I immediately ran into issues and elected to defer in order to simplify the migration. In addition, we have a large number of complex queues; injecting this option into our system would likely require some thought so as not to bowl over important queues while keeping error reporting timely. I suppose this is a vote to drop it?
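For readers who haven't used it, this is roughly the kind of setup being discussed: a minimal sketch of the `config.async` pattern with an ActiveJob-style worker, based on the documented usage. The `SentryEventJob` class name and queue are illustrative placeholders, not something from this thread.

```ruby
# config/initializers/sentry.rb
Sentry.init do |config|
  config.dsn = ENV["SENTRY_DSN"]
  # Hand each event off to the app's own job queue instead of
  # letting the SDK send it.
  config.async = lambda do |event, hint|
    SentryEventJob.perform_later(event, hint)
  end
end

# app/jobs/sentry_event_job.rb
class SentryEventJob < ApplicationJob
  queue_as :default

  # The job body is typically just this one call back into the SDK.
  def perform(event, hint)
    Sentry.send_event(event, hint)
  end
end
```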
Hey! We currently use the `async` option. I'm curious about this part: what would happen if there is a spike in events, say from a noisy error filling the queue? Would another error happening at the same time be silently dropped because the queue is full? Or am I misunderstanding how it works? I'd like to switch away from `async`.
The background worker was added in v4.1.0, so it's a new thing for most users I think 🙂
As of now, the queue doesn't distinguish events. So if there's a spike of a particular error, other errors may not make it into the queue. But personally I'm not worried about this because: if there's a spike big enough to overflow the SDK's queue and drop some events, it'll probably overflow your background job queue with the `async` option too.
At my job, we are working fine with the `async` option. Maybe I can't see the problem because we use Sentry only for exceptions, without APM.
There are 2 main costs of having this option around:
sentry-ruby/sentry-ruby/lib/sentry/client.rb Lines 119 to 134 in 42455c8
sentry-ruby/sentry-rails/app/jobs/sentry/send_event_job.rb Lines 1 to 33 in 42455c8
If the upside were high, these costs wouldn't be an issue; that's why we have had it for many years. But since we already have a better solution for the problem (the background worker) with much less downside, I don't think it's worth it now.
See getsentry/sentry-ruby#1522 for details on why we want to make this change. We suspect this may also be a cause of https://app.shortcut.com/simpledotorg/story/5282/redis-down-in-production
…rker (#2951) * Drop sentry async - use Sentry's builtin background worker. See getsentry/sentry-ruby#1522 for details on why we want to make this change. We suspect this may also be a cause of https://app.shortcut.com/simpledotorg/story/5282/redis-down-in-production * Explicitly turn off Sentry tracing. We are using Datadog for tracing currently, and having two tools capturing trace info is not necessary and may add some overhead we don't want. * Revert "Explicitly turn off Sentry tracing". This reverts commit e1eae44.
This issue has gone three weeks without activity. In another week, I will close it. But! If you comment or otherwise update it, I will reset the clock. "A weed is but an unloved flower." ― Ella Wheeler Wilcox 🥀
We were using a custom Sidekiq job via the `async` option, and it was a nightmare. We just switched to the default by removing `config.async`.
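For anyone making the same switch, the "after" state is simply the initializer with the `async` callable deleted; the SDK's built-in background worker then handles sending with no extra setup. A minimal sketch (the DSN handling is illustrative):

```ruby
Sentry.init do |config|
  config.dsn = ENV["SENTRY_DSN"]
  # No config.async here: events are queued on the SDK's own
  # concurrent-ruby background worker and sent off the request thread.
end
```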
I think removing the `async` option sounds good.
@trevorturk Thanks for the feedback. The current documentation on Sentry and RDoc both suggest that it's a deprecated option and point users to this issue for more info 😄
Yes, you should be able to delete it directly without any issue. But if you do run into one, please report it here and I'll fix it ASAP 😉
It all worked fine when I just deleted it, thank you!
Can you elaborate on why this is, and why this wouldn't be in https://github.com/getsentry/sentry-ruby/blob/efcf170b5f6dd65c3b047825bddd8fde87fc6b7b/sentry-ruby/lib/sentry/rake.rb?
@dillonwelch Sorry that I forgot to update the description. That issue has been addressed in #1617 and doesn't exist anymore 🙂
For instrumenting AWS Lambda, it seems like it'd be ideal to use Lambda Extensions to deliver error reports. At first glance, it looks like the `async` option could be the hook for that.
@sl0thentr0py thanks 👍 I've commented over in that issue.
@st0012 How Lambda works is that AWS spins up an instance of some code you've written and invokes a "handler" method with an "event" payload (e.g. a web request). And then:
https://docs.aws.amazon.com/lambda/latest/dg/runtimes-context.html So you have some code that runs and then gets "frozen", and then maybe gets "unfrozen" again in the future to handle future events (e.g. web requests). But if you deploy new code, or if there is too long of a delay between events, then Lambda discards the frozen execution environment. So if you have some async code you may end up in this situation:
And then your async code may never run if Lambda ends up discarding the frozen execution environment, and Sentry events would get lost. So currently the way to make Lambda work reliably with Sentry is to make Sentry operate synchronously. But then you have Sentry possibly slowing down your application code and/or potentially affecting it if there were Sentry bugs/outages etc (even though I'm sure that would never happen! 😉), which I assume is one reason why Sentry runs asynchronously in the first place.
Last year AWS released a solution to this issue called "Lambda Extensions". You can use Lambda Extensions to let a Lambda function handle application code synchronously while also enqueuing asynchronous events to "extensions" (e.g. a Sentry extension) that don't block the main application code. A configuration option like the `async` option seems like it could be the hook for handing events off to such an extension.
I agree with dropping the `async` option. I'm developing a Rails app that uses `config.async` with Sidekiq, but I will disable this config and use the background worker or send synchronously. I was originally aware of this issue and was considering disabling `config.async`; I plan to disable it now. It would be more reassuring to be able to check for queue overflow. Sending errors synchronously doesn't look like a bad approach either.
@hogelog this is on the product road map, but I can't give you an ETA on when it will be shipped yet. But we certainly want to expose these statistics to the user eventually. |
@hogelog I should also mention that if you have tracing enabled, the event volume will be much higher, so sending everything synchronously may not be a good idea.
I'm looking forward to it!
I’m not using tracing, so I was not aware. Considering the future, it may not be good to send events directly. Thanks!
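To make the two alternatives mentioned above concrete, here is a hedged sketch of the relevant knob: the background worker's thread pool can be sized explicitly, and setting it to 0 makes the SDK send events synchronously in the calling thread, per the SDK's configuration docs.

```ruby
Sentry.init do |config|
  config.dsn = ENV["SENTRY_DSN"]

  # Use the built-in background worker with an explicit pool size.
  config.background_worker_threads = 5

  # Or send every event synchronously in the calling thread instead:
  # config.background_worker_threads = 0
end
```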
> `async` option will be dropped.
I greatly appreciate this extra warning message! It's a good warning. Can you tell me exactly what I should do? Should I simply remove this code from my codebase, or do I need to add something else?
@ariccio You can simply delete it 😉
At my company, we have been plagued by the async feature for 1-2 years now; hitting the payload limit and the rate limit made us scratch our heads about how to safeguard against those (before Sentry internally remedied them). How did we catch these issues? By seeing Sidekiq brought to its knees and important messages not processed. Right now, we have decided to stop using the `async` option.
This feature will be removed in a future version of Sentry, and it might be a source of some Redis memory problems (which have been and maybe still are a problem for P&E Dashboard). See getsentry/sentry-ruby#1522
`async` has been deprecated for long enough and is removed in the next 6.0 major branch, so we can close this placeholder issue now.
The `async` option is unique to the Ruby SDK. It was designed to help send events asynchronously through different backends (e.g. a Ruby thread, a Sidekiq worker...etc.). Depending on the backend, it can pose a threat to the system due to its additional memory consumption. So it's an option with some trade-offs.

But since version `4.1`, the SDK has its own managed background worker (implemented with the famous `concurrent-ruby` library). It can handle most of the async-sending use cases.

The `async` Option Approach

Pros

Users can customize their event sending logic. But generally it's just a worker with `Sentry.send_event(event, hint)`.

Cons
The Background Worker

(Implemented with `concurrent-ruby`.)

Pros

Cons

Unsent events will die with the process. Generally speaking, the queue time in the background worker is very low, and the chance of missing events for this reason is small in web apps. But for script programs, the process often exits before the worker is able to send the event. This is why `hint: { background: false }` is required in rake integrations. However, I don't think this problem can be solved with the `async` option. This drawback has been addressed in #1617.
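As an illustration of that point, a short-lived script can ask the SDK to skip the worker queue for a given event, which is essentially what the rake integration referenced above relies on. A sketch, assuming the `hint:` option on `Sentry.capture_exception`; `run_the_task` is a hypothetical stand-in for real work:

```ruby
# one_off_script.rb
Sentry.init { |config| config.dsn = ENV["SENTRY_DSN"] }

begin
  run_the_task # hypothetical work that may raise
rescue => e
  # Send from the calling thread so the event isn't lost if the
  # process exits before the background worker drains its queue.
  Sentry.capture_exception(e, hint: { background: false })
  raise
end
```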
Missing Events During A Spike Because of Queue Limit
I know many users are concerned that the background worker's 30-event queue limit will make them lose events during a spike. But as the maintainer and a user of this SDK, I don't worry about it, because:

The same problem exists with the `async` approach with a sidekiq/resque...etc. worker, due to the reason I described in the issue.
If there's a spike big enough to overflow the SDK's queue and drop some events, it'll probably overflow your background job queue with the `async` option too and/or do greater damage to your system.

What I'm trying to say is that it's not possible to expect Sentry to accept "all events" during a big spike regardless of which approach you use. But when a spike happens, `async` is more likely to become another bottleneck and/or cause other problems in your system.

My Opinion
The `async` option seems redundant now and it could sometimes cause more harm. So I think we should drop it in version `5.0`.

Questions
The above analysis is only based on my personal usage of the SDK and a few cases I helped debug. Even though the decision has been made, we would still like to hear feedback about it, so if you're willing to share your experience, I'd like to know:

Do you use the `async` option in your apps?
Do you use the `background_worker_threads` config option?