Sentry stopped accepting transaction data #2876
Did you change the port? |
Yes, I have the relay port exposed to the host network. How did you manage to fix the problem? |
When I reverted the port change the problem was resolved. |
Nope, didn't help. Doesn't work even with default config. Thanks for the tip though |
Are there any logs in your web container that can help? Are you sure you are receiving the event envelopes? You should be able to see that activity in your nginx container. |
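One quick way to confirm that transaction envelopes are actually reaching the instance (a sketch; it assumes the stock `nginx` and `relay` service names from this repo's docker-compose.yml):

```bash
# SDKs post transactions to /api/<project_id>/envelope/, so envelope traffic
# should be visible in the nginx access log.
docker compose logs --tail=200 nginx | grep "envelope"

# Relay forwards the envelopes into the processing pipeline; errors here
# usually explain dropped transactions.
docker compose logs --tail=200 relay | grep -i "error"
```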
Same here: on the browser side a request is sent with an event type of "transaction", but no data is displayed under Performance, and the number of transactions in the project is also 0. |
Problem solved: the server time did not match the SDK time. |
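If clock drift is the suspect, as in the comment above, comparing the host clock, the container clock, and the machine running the SDK is a quick sanity check (a sketch; `web` is the stock service name from docker-compose.yml):

```bash
# Print the current UTC time on the Docker host and inside the web container.
date -u
docker compose exec web date -u

# Run `date -u` on the machine the SDK sends events from as well; if the
# clocks disagree by more than a few minutes, enable NTP sync on the host:
# timedatectl set-ntp true
```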
Are you on a nightly version of self-hosted? What does your sentry.conf.py look like? We've added some feature flags there to support the new performance features |
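If your sentry.conf.py predates the performance feature flags, one way to spot missing entries is to diff it against the example file shipped in this repo (a sketch; paths are relative to the self-hosted checkout, and flag names change between releases, so treat the example file in your checkout as the reference):

```bash
# Show how your active config differs from the example shipped with the repo.
diff -u sentry/sentry.conf.py sentry/sentry.conf.example.py

# List the feature-flag section of the example config.
grep -n "SENTRY_FEATURES" sentry/sentry.conf.example.py
```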
I'm using Docker with the latest commit from this repository; the bottom of the page says … I've updated sentry.conf.py to match the most recent version from this repo, so now the only difference is in … After that, the errors have also disappeared: |
I can confirm that the ClickHouse errors are due to the Rust workers; I reverted the workers part of #2831 and #2861. The worker logs show that the insert is done (is it?): |
The error is caused by the connection being closed prematurely. See #2900. |
errors are not logged to |
Okay, so I'm able to replicate this issue on my instance (24.3.0). What happens is that Sentry does accept transaction/error/profile/replay/attachment data, but it doesn't record it in the statistics. So your stats of ingested events might be displayed as if no events were being recorded, but the events are actually there -- they are processed by Snuba and you can view them in the web UI.

Can anyone reading this confirm that that's what happened on your instances as well? (I don't want to ping everybody.)

If the answer to that 👆🏻 is "yes", that means something (a module, a container, or whatever) that ingests the events didn't insert the data correctly for it to be queried as statistics. I don't know for sure whether it's the responsibility of the Snuba consumers (as we moved to …). A few solutions (well, not really, but I hope this would get rid of this issue) for this are, either:
|
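A rough way to verify whether events are actually landing in ClickHouse while the stats page shows zero (a sketch; the table names `errors_local`, `transactions_local`, and `outcomes_raw_local` are assumptions and vary between Snuba versions):

```bash
# Count rows in the tables that back Issues and Performance.
docker compose exec clickhouse clickhouse-client \
  --query "SELECT count() FROM errors_local"
docker compose exec clickhouse clickhouse-client \
  --query "SELECT count() FROM transactions_local"

# The outcomes tables back the Stats page; a skew between these counts and
# the ones above points at outcome accounting rather than ingestion.
docker compose exec clickhouse clickhouse-client \
  --query "SELECT outcome, count() FROM outcomes_raw_local GROUP BY outcome"
```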
I didn't see any errors in the Issues tab. I had to rebuild a Server Snapshot to “fix” this problem. So it wasn't just the statistics that were affected. |
Feel free to join my Discord thread: https://discord.com/channels/621778831602221064/1243470946904445008 Then do a `docker compose down` and `docker compose up`. |
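For completeness, the restart suggested above (a sketch; `-d` keeps the stack running in the background):

```bash
# Recreate all containers after changing the configuration.
docker compose down
docker compose up -d
```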
I think PR #2908 would fix this issue; I don't know why it still hasn't been merged after such a long time. |
Reverting to the old Python code is not a solution. Fixing the Rust code is the right one. |
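To tell which implementation your instance is actually running, you can inspect the resolved compose configuration (a sketch; the `rust-consumer` subcommand and the snuba service names depend on the self-hosted release you are on):

```bash
# Show every service whose command invokes the Rust consumer.
docker compose config | grep -n "rust-consumer"

# List the snuba consumer services defined in your stack.
docker compose ps --services | grep snuba
```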
Hey everyone!
Does anyone possibly know the solution or the root cause of this problem? |
I have this problem again after updating to 24.8.0! |
…except that reverting to the Python Snuba consumers does not work for me. There are no more errors, but it still does not work. |
Same behavior here: transactions showed up partially (some data in the stats area) but not on the global projects page or the individual project pages 😢 |
I had issues with stopped ingestion, but my issue was that I didn't have `COMPOSE_PROFILES=feature-complete` in my custom env |
Ah yeah, that'll do it. Without that you'll only be ingesting errors. |
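A quick way to check which profile your installation runs with (a sketch; it assumes the stock `.env` / `.env.custom` layout of this repo):

```bash
# Print the compose profile in use; without "feature-complete" only error
# ingestion is enabled.
grep -H "COMPOSE_PROFILES" .env .env.custom 2>/dev/null
```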
This issue has been present for several months and remains unresolved. |
@hheexx I helped someone on Discord a few days ago; both the regular snuba consumer and the snuba rust-consumer didn't work for them. They tried upgrading their server instance to a higher spec (previously 4 CPU cores + 16 GB RAM [AWS EC2 m6a.xlarge] --> 8 CPU cores + 32 GB RAM [AWS EC2 m6a.2xlarge]). See the Discord thread here: https://discord.com/channels/621778831602221064/1286099840480182272 Obviously, bumping the server spec is not an option for everyone; my initial hunch was the IO/s (IOPS) limit anyway. |
Thanks @aldy505, you are right. I fixed it by moving msl to separate SSD storage (the VM is on HDD). |
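If disk throughput is the bottleneck, as in the two comments above, a quick way to check I/O saturation on the Docker host (requires the sysstat package for `iostat`):

```bash
# Extended device statistics, 3 reports at 5-second intervals; sustained high
# %util or await on the volume backing /var/lib/docker points at an IOPS limit.
iostat -x 5 3

# One-shot per-container CPU, memory, and block I/O usage.
docker stats --no-stream
```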
Same issues, nothing helps :( |
Same issue here on a fresh install with the latest commit on master: I do get a 200 response with the ID of the transaction, but nothing shows up in the Performance tab. |
Self-Hosted Version
24.3.0.dev0
CPU Architecture
x86_x64
Docker Version
24.0.4
Docker Compose Version
24.0.4
Steps to Reproduce
Update to the latest master
Expected Result
Everything works fine
Actual Result
Performance page shows zeros for the time period since the update and until now:
Project page shows the correct info about transactions and errors:
Stats page shows 49k transactions of which 49k are dropped:
Same for errors:
Event ID
No response
UPD: there are a lot of errors in the ClickHouse container:
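To see what those errors are, pulling the recent log tail out of the ClickHouse container is usually enough (the service name `clickhouse` matches the stock docker-compose.yml):

```bash
# Surface recent insert/merge failures reported by ClickHouse.
docker compose logs --tail=200 clickhouse | grep -iE "error|exception"
```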