High CPU utilization due to large number of DELETE statements #1412
Comments
Hi! For some reason I can't open the picture in full resolution, but as far as I can tell, you get 10-20 times more calls for the `DELETE` statement than expected. Could you also describe the specifics of your deployment, i.e. how many instances there are of each logical endpoint? In order to keep that data confidential, please send us an email at the support address.
Hi, we have 9 services, but some have 2 NSB endpoints. Service 1 - 1 NSB endpoint with NServiceBusMaxConcurrency = 20. I can send the picture again to an email address.
Yes, the number of commits is off, but so is the ratio of calls to rows affected. In the ideal scenario, when queues always have some messages, the number of `DELETE` calls should be equal to the number of rows returned. I am especially concerned about the `DELETE` statements. Does the rows/s value of 50-80 match the expected number of messages per second these endpoints process?
@crirusu it seems that the extensive number of `DELETE` statements is the cause. We are working on the fix.
@crirusu FYI, the fix for PostgreSQL is out in version 8.1.5 of the package.
Symptoms
When using the PostgreSQL transport, the database used as messaging infrastructure is put under heavy load caused by an excessive number of unnecessary `DELETE` statements, i.e. statements that result in 0 rows being affected.
Who's affected
All PostgreSQL transport users.
Root cause
The problem is caused by the peek statement overestimating the number of messages ready to be received. The query calculates the difference between the "oldest" and "newest" messages (based on a sequence number), including messages that are already being received. This, in turn, means that a single receive transaction that takes longer than the others can cause a significant overestimation of the number of messages.
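The effect described above can be illustrated with a minimal, hypothetical sketch (not the actual transport code): estimating pending work from the sequence-number span counts messages that are still locked by in-flight receives and messages that no longer exist in the gap, so receivers issue far more `DELETE` attempts than there are rows to delete.

```python
# Hypothetical model of the flawed peek estimate. Each row is
# (sequence_number, locked), where locked means an in-flight receive
# currently holds the message.

def peek_estimate(rows):
    """Estimate pending messages as the sequence-number span,
    the way the flawed peek query did."""
    seqs = [seq for seq, _ in rows]
    return max(seqs) - min(seqs) + 1 if seqs else 0

def actually_available(rows):
    """Messages that are genuinely ready to be received."""
    return sum(1 for _, locked in rows if not locked)

# One long-running receive still holds the oldest message (seq 100),
# while the 50 messages between it and seq 151 were already consumed
# and deleted. Only two messages remain available.
rows = [(100, True), (151, False), (152, False)]

print(peek_estimate(rows))       # 53 -> ~53 receive attempts triggered
print(actually_available(rows))  # 2  -> only 2 DELETEs affect any rows
```

The surplus receive attempts each run a `DELETE` that affects 0 rows, which matches the symptom reported in this issue.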
Patched version
The fix is available in version 8.1.5 of the PostgreSQL transport package.
Original content
### Describe the bug
We deployed 9 .NET Core services using NSB with Postgres, backed by a database on an Amazon server with 48 ACUs on Aurora PostgreSQL Serverless v2.
According to Amazon, 1 ACU = 2 GB of RAM:
https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/aurora-serverless-v2.html
Even with little data to process, CPU utilization did not drop, and we were unable to keep up with processing.
I am attaching a picture with a very high number of commits and rollbacks.
With the database for the queues on an MSSQL Server with 8 cores and 32 GB of RAM, we can run all services with less than 30% CPU.
Versions
Steps to reproduce
Check how many requests are actually issued against the database.