
Make logging functionality truly thread-safe #397

Open
clalancette opened this issue Oct 26, 2022 · 3 comments · May be fixed by #427
clalancette commented Oct 26, 2022

rcutils_logging_get_logger_level (which is called when doing any logging in ROS 2) has never been thread-safe with respect to rcutils_logging_set_logger_level.

However, prior to PR #381, rcutils_logging_get_logger_level was thread-safe with respect to itself. That is, it was only ever reading data, so it was safe for multiple threads to be reading that data at once.

PR #381 changed it so that we cached the results of calculating the logging level and stored that back in the hashmap. While that is an important optimization, it also caused rcutils_logging_get_logger_level to not be thread-safe with respect to itself. So we disabled that optimization in #393.

However, we really should re-enable that optimization and also make rcutils_logging_get_logger_level thread-safe with respect to rcutils_logging_set_logger_level. To do that, what we need to do is to add a reader/writer lock around the hashmap. That will allow the very common case of multiple threads calling rcutils_logging_get_logger_level to continue to work like today (holding a read-lock), while also making it so that the less common case of either setting the log level or updating the cache temporarily blocks readers (holding a write lock). The wrinkle here is that all of this is in C, so we need to come up with some cross-platform way to do this. It needs a bit of thought; we may be able to get away with only using atomics here.

@fujitatomoya (Collaborator) commented

@clalancette thank you very much for the explanation!

To do that, what we need to do is to add a reader/writer lock around the hashmap.

Yeah, agreed, that is probably the most performant approach: keep an rwlock object in the hashmap object and take it in read or write mode, depending on the operation, to protect the data.

The wrinkle here is that all of this is in C, so we need to come up with some cross-platform way to do this. It needs a bit of thought; we may be able to get away with only using atomics here.

Agreed. Atomics could do the work, but as you mentioned, they are not as performant in the common reading case. An rwlock abstraction in rcutils would work, though.
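For comparison, the atomics-only variant mentioned here could look like this C11 sketch. The per-entry `_Atomic int` layout and the function names are assumptions for illustration, not rcutils code:

```c
/* Hedged sketch: if each hashmap entry stored its level in an
 * _Atomic int, lookups would need no lock at all; only the set/update
 * path writes. Entry layout and names are hypothetical. */
#include <assert.h>
#include <stdatomic.h>

typedef struct {
  _Atomic int level;
} logger_entry_t;

static logger_entry_t entry = { 20 };  /* hypothetical default level */

/* Reader: an acquire load; readers never block each other or writers. */
int read_level(void)
{
  return atomic_load_explicit(&entry.level, memory_order_acquire);
}

/* Writer: publishes the new level; readers observe either the old or
 * the new value, never a torn one. */
void write_level(int level)
{
  atomic_store_explicit(&entry.level, level, memory_order_release);
}
```

Note this only protects a single per-entry value; making the hashmap's structural updates (inserting new entries for the cache) safe with atomics alone is the harder part, which is why the rwlock discussion above exists.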

By the way, as far as I know, mutexes and rwlocks have never been used in rcutils or rcl before; if I am not mistaken, we have deliberately avoided them until now. So the question is: can we actually take this path? I would like to hear from others who know the history.

@jrutgeer commented

The wrinkle here is that all of this is in C, so we need to come up with some cross-platform way to do this.

By the way, as far as I know, mutexes and rwlocks have never been used in rcutils or rcl before; if I am not mistaken, we have deliberately avoided them until now.

@fujitatomoya @clalancette
Could this proposal be a valid alternative to avoid having locks at the rcutils/rcl layer and keep them in rclcpp instead?

@clalancette (Contributor, Author) commented

By the way, as far as I know, mutexes and rwlocks have never been used in rcutils or rcl before; if I am not mistaken, we have deliberately avoided them until now.

Yes, we have attempted to keep locks out of rcutils and rcl prior to this. I think we could take that path, but @jrutgeer's proposal is simpler. I'm going to comment over there, as I have a couple of thoughts.

@clalancette linked a pull request Jul 13, 2023 that will close this issue