Delivery handlers can be invoked with deallocated or modified memory contents #802

Closed
Anarh2404 opened this issue Apr 3, 2020 · 9 comments

Anarh2404 commented Apr 3, 2020

#732 introduced a pooled message body (8bf9308). However, the command is not invoked immediately, only scheduled, so by the time the event handler is invoked, the command object can already have been disposed and its memory region zeroed out or overwritten.

Test (@michaelklishin, I made it single-threaded): https://github.com/Anarh2404/rabbitmq-dotnet-client/blob/unexpected-frame-repro/projects/Unit/TestFloodPublishing.cs#L89
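
To make the race concrete, here is a minimal, self-contained sketch (not the client's actual code; the pool and the scheduling are stand-ins) showing how a handler that runs after the pooled buffer has been returned can observe cleared or reused memory:

```csharp
using System;
using System.Buffers;
using System.Threading.Tasks;

class PooledBodyRace
{
    static void Main()
    {
        // Rent a pooled buffer and fill it with the "message body".
        byte[] body = ArrayPool<byte>.Shared.Rent(16);
        Array.Fill(body, (byte)'A');

        // The delivery handler is only scheduled, not invoked inline.
        Task handler = Task.Run(() =>
            Console.WriteLine(BitConverter.ToString(body, 0, 16)));

        // The sender disposes the command right after scheduling,
        // returning (and here clearing) the pooled memory.
        ArrayPool<byte>.Shared.Return(body, clearArray: true);

        // Depending on timing, this prints 'A' bytes, zeroes, or
        // whatever a concurrent rental wrote into the same array.
        handler.Wait();
    }
}
```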

@michaelklishin

I don't understand what the problem really is or what options we have. @stebet, do you?

@michaelklishin changed the title from "Receiving garbage instead of a message" to "Delivery handlers can be invoked with deallocated memory contents" on Apr 3, 2020
@michaelklishin changed the title from "Delivery handlers can be invoked with deallocated memory contents" to "Delivery handlers can be invoked with deallocated or modified memory contents" on Apr 3, 2020
@michaelklishin added this to the 6.0.0 milestone on Apr 3, 2020
stebet commented Apr 3, 2020

I'll take a look. This shouldn't happen, as we're not supposed to dispose the pooled memory until the work has completed.

@michaelklishin

Alternatively, we can yield a copy to the delivery handler. We would lose some of the efficiency gains, but safety is a very important factor for the users of this client. In fact, it remains to be seen how many users will be tripped up by the new copy-or-use requirements in 6.0.

stebet commented Apr 3, 2020

> Alternatively, we can yield a copy to the delivery handler. We would lose some of the efficiency gains, but safety is a very important factor for the users of this client. In fact, it remains to be seen how many users will be tripped up by the new copy-or-use requirements in 6.0.

My thoughts exactly. We still shouldn't lose much efficiency, since the copy can be pooled as well. The overhead is just the memory copies; allocations shouldn't be affected. I can make that change and submit it.
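
A minimal sketch of that idea, with assumed names (`BodyCopier` and its methods are illustrative, not part of the client): rent a second buffer from `ArrayPool<byte>`, copy the body into it, and hand ownership of the copy to the worker:

```csharp
using System;
using System.Buffers;

static class BodyCopier
{
    // Copies the delivery body into a second pooled buffer so the
    // original can be returned to its pool immediately after dispatch.
    // Steady-state allocations stay flat: the copy is pooled too.
    public static byte[] CopyForDispatch(ReadOnlyMemory<byte> body)
    {
        byte[] copy = ArrayPool<byte>.Shared.Rent(body.Length);
        body.CopyTo(copy);
        return copy;
    }

    // The worker must hand the copy back once it is done with it.
    public static void ReturnAfterProcessing(byte[] copy) =>
        ArrayPool<byte>.Shared.Return(copy);
}
```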

bording commented Apr 3, 2020

> My thoughts exactly. We still shouldn't lose much efficiency, since the copy can be pooled as well. The overhead is just the memory copies; allocations shouldn't be affected. I can make that change and submit it.

Before making more changes, could we try to figure out why the existing code has a problem? @stebet, you said you didn't think it was possible.

stebet commented Apr 3, 2020

@bording See my PR with the possible fix. It's due to the deliveries being put on separate worker pools after the work is enqueued; see line 58 in Session.cs. After handing the command off to whoever should handle it (in this case a consumer that later enqueues the work asynchronously), the command, along with the pooled memory it contains, is disposed and might be zeroed out or reused. The PR makes the consumer dispatcher copy the body before dispatching, and the workers dispose the new pooled copy once they have finished processing it.

It adds some memory copying and churn, but it should not increase allocations, since the copies also use pooled memory.
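
In outline, the fix transfers ownership of a pooled copy to the worker. A hypothetical sketch of that shape (names like `PooledDelivery` and `Dispatcher` are illustrative, not the PR's actual types):

```csharp
using System;
using System.Buffers;
using System.Threading.Tasks;

// Owns a pooled copy of the body; disposed by the worker, not the caller.
sealed class PooledDelivery : IDisposable
{
    private readonly byte[] _buffer;
    public ReadOnlyMemory<byte> Body { get; }

    public PooledDelivery(ReadOnlyMemory<byte> body)
    {
        _buffer = ArrayPool<byte>.Shared.Rent(body.Length);
        body.CopyTo(_buffer);
        Body = new ReadOnlyMemory<byte>(_buffer, 0, body.Length);
    }

    public void Dispose() => ArrayPool<byte>.Shared.Return(_buffer);
}

static class Dispatcher
{
    public static Task Dispatch(
        ReadOnlyMemory<byte> body,
        Func<ReadOnlyMemory<byte>, Task> handler)
    {
        // Copy before the caller disposes the original command...
        var delivery = new PooledDelivery(body);
        return Task.Run(async () =>
        {
            try { await handler(delivery.Body); }
            finally { delivery.Dispose(); } // ...and dispose when done.
        });
    }
}
```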

stebet commented Apr 3, 2020

Eventually, we should just always do this in an async fashion: make message consumption async by default and deprecate everything else (eventing and non-async consumers can still be layered on top of that). Then this can be done more reliably, and we can avoid having any special-made worker/pooling constructs at all. I can take a look at that soonish.
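
For context, the client already ships an opt-in async dispatch path that this direction would make the default. A usage sketch with `DispatchConsumersAsync` and `AsyncEventingBasicConsumer` (the queue name and connection details are assumptions):

```csharp
using System;
using System.Threading.Tasks;
using RabbitMQ.Client;
using RabbitMQ.Client.Events;

class AsyncConsumeExample
{
    static void Main()
    {
        var factory = new ConnectionFactory
        {
            DispatchConsumersAsync = true // opt in to async consumer dispatch
        };
        using var connection = factory.CreateConnection();
        using var channel = connection.CreateModel();

        var consumer = new AsyncEventingBasicConsumer(channel);
        consumer.Received += async (sender, ea) =>
        {
            // Under the 6.0 copy-or-use rule, ea.Body is only valid for
            // the duration of this callback; copy it to keep it longer.
            byte[] copy = ea.Body.ToArray();
            await Task.Yield();
            Console.WriteLine($"Received {copy.Length} bytes");
        };
        channel.BasicConsume(queue: "my-queue", autoAck: true, consumer: consumer);

        Console.ReadLine(); // keep the process alive while consuming
    }
}
```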

@michaelklishin

@stebet hey, we can do that as soon as 7.0 :)

@lukebakken

Fixed by #804. Thanks everyone!
