identity: reload CA root cert channel on file change #1775
Merged · istio-testing merged 10 commits into istio:master from jlojosnegros:feat/root-ca-reload on Mar 26, 2026 (+302 −17)
Commits (10)
92737de RootCertManager: Add new CrlCertManager-like struct (jlojosnegros)
2d253e5 caclient: rebuild channel when root cert changes (jlojosnegros)
f9375b0 small adaptations (jlojosnegros)
fcaa084 Some unit tests (jlojosnegros)
664d8da solve some compilation problems (jlojosnegros)
c7337e1 delete is_dirty as it is not used (jlojosnegros)
56ec4db some clippy adjustments (jlojosnegros)
e8b03a0 adding some comments (jlojosnegros)
bf05e3d addressing comments (jlojosnegros)
fac3731 log write lock wait time after TLS channel rebuild (jlojosnegros)
Filter by extension
Conversations
ilrudie:
This isn't requesting a change per se; just reasoning "out loud" about the locking here.

After we take_dirty(), the state returns to clean immediately and we enter the await to attempt rebuild_channel. During this time nothing holds the write lock and the state is clean, so we may try to use an old client. This might compound, since contending readers can continue to delay the writer from taking the lock, all while the state reads "clean". I think the advantage is that we do not queue multiple write/rebuilds, which seems like a nice property. In the happier case, the old client is still valid and we keep running nicely, eventually winding up with a fresh client. In the worse case, the client isn't usable and we produce errors while potentially blocking resolution by causing read contention on the lock.

Assuming we held the RwLock before marking clean, a bunch of reads could await the write lock resolving. On success, they get the newly minted client (probably ideal?). On failure, they get the old client and what happens happens. The downside is that multiple write-lock awaits could queue simultaneously, since multiple calls to fetch could see the dirty state while we await any held read locks to clear. This might be kind of helpful, since at least write locks do not lead to further read contention, but we might need to check whether the dirty flag was already resolved by someone else before actually rebuilding.
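For reference, a minimal sketch of the pattern being discussed, under assumptions: a `String` stands in for the CA client/TLS channel, and `take_dirty`, `rebuild_channel`, and `fetch` are illustrative shapes, not the PR's actual code.

```rust
use std::sync::atomic::{AtomicBool, Ordering};
use std::sync::{Arc, RwLock};

struct Manager {
    dirty: AtomicBool,            // set by the file watcher on root cert rotation
    channel: RwLock<Arc<String>>, // read lock held only for the cheap Arc clone
}

impl Manager {
    /// The flag flips back to clean immediately, before any rebuild work runs.
    fn take_dirty(&self) -> bool {
        self.dirty.swap(false, Ordering::AcqRel)
    }

    /// Placeholder for rebuilding the TLS channel from the new root cert.
    async fn rebuild_channel(&self) -> Result<String, String> {
        Ok("fresh-channel".to_string())
    }

    async fn fetch(&self) -> Arc<String> {
        if self.take_dirty() {
            // Race window: from here until the write-lock swap below, the
            // state reads "clean", yet concurrent fetches still clone the old
            // channel; those readers can also delay the writer from acquiring
            // the lock.
            match self.rebuild_channel().await {
                Ok(fresh) => *self.channel.write().expect("poisoned") = Arc::new(fresh),
                Err(e) => eprintln!("rebuild failed, keeping old channel: {e}"),
            }
        }
        self.channel.read().expect("poisoned").clone()
    }
}

#[tokio::main]
async fn main() {
    let mgr = Manager {
        dirty: AtomicBool::new(true), // pretend a rotation was just observed
        channel: RwLock::new(Arc::new("old-channel".to_string())),
    };
    println!("{}", mgr.fetch().await);
}
```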
jlojosnegros:
Hi @ilrudie, thanks for the comments!!! Sorry for the late response.

Yeah, you are right, but I think the contention will be quite limited. If I remember correctly, on Linux the RW lock uses a "write-preferring" policy by default, so once a writer asks for the lock it only has to wait for the readers that already hold it to unlock, and it has preference over any new reader. Given this, contention is limited to just those readers that asked for the lock before the writer, and the read lock is only held while doing the `clone`, which is a very fast operation (~µs), so the contention time should be really small.

You are right again: the old client will be used while the rebuild is in progress, and that will fail if the old cert is no longer valid just after rotation, but I was assuming a healthy time window between rotation and certificate expiry.
Yeah, the problem with that is we cannot hold the RwLock across an `await`, so we would have to change `std::sync::RwLock` to `tokio::sync::RwLock`, which is not trivial, and, as you already said, we would have to use double-checked locking.

Working from this idea I was thinking: what if we use a "rebuild_lock" (`tokio::sync::Mutex`) to serialize channel builds, so we avoid queuing multiple builds? It would be something like the sketch below.
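A minimal sketch of the idea, under assumptions: the `dirty` flag and the rebuild step follow the discussion above, while `CaClient`, `fetch_channel`, and the `String` stand-in for the TLS channel are illustrative, not the PR's actual code.

```rust
use std::sync::atomic::{AtomicBool, Ordering};
use std::sync::{Arc, RwLock};

struct CaClient {
    dirty: AtomicBool,                    // set by the file watcher on root cert change
    channel: RwLock<Arc<String>>,         // std lock, never held across an await
    rebuild_lock: tokio::sync::Mutex<()>, // serializes channel rebuilds
}

impl CaClient {
    /// Placeholder for rebuilding the TLS channel from the new root cert.
    async fn rebuild_channel(&self) -> Result<String, String> {
        Ok("fresh-channel".to_string())
    }

    async fn fetch_channel(&self) -> Arc<String> {
        if self.dirty.load(Ordering::Acquire) {
            // Only one task rebuilds at a time; latecomers wait here instead
            // of queuing duplicate rebuilds.
            let _guard = self.rebuild_lock.lock().await;
            // Double-check: whoever held the mutex before us may have already
            // rebuilt the channel and cleared the flag.
            if self.dirty.swap(false, Ordering::AcqRel) {
                match self.rebuild_channel().await {
                    Ok(fresh) => {
                        *self.channel.write().expect("poisoned") = Arc::new(fresh);
                    }
                    Err(e) => {
                        // Re-mark dirty so a later fetch retries the rebuild.
                        self.dirty.store(true, Ordering::Release);
                        eprintln!("rebuild failed, keeping old channel: {e}");
                    }
                }
            }
        }
        // Read lock is held only for the Arc clone (~µs).
        self.channel.read().expect("poisoned").clone()
    }
}

#[tokio::main]
async fn main() {
    let client = CaClient {
        dirty: AtomicBool::new(true),
        channel: RwLock::new(Arc::new("old-channel".to_string())),
        rebuild_lock: tokio::sync::Mutex::new(()),
    };
    println!("{}", client.fetch_channel().await);
}
```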
This will:
- avoid queuing multiple channel rebuilds
- keep `std::sync::RwLock` for the fast read path
ilrudie:
Thanks for the discussion. I agree we should have time to rotate out the client in most reasonable cases, so this is probably a suitable implementation with a good balance of complexity.

It might be nice to add how long we waited for the write lock to the debug message. That way, if we suspect contention, we already have some information available to troubleshoot. I don't think we need to immediately introduce more complexity, though.
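One possible shape for this, as a hedged sketch: measure only the time spent blocked on the write lock and include it in the debug message. The `swap_channel` helper and the `String` stand-in for the TLS channel are illustrative; the PR's actual log line may differ.

```rust
use std::sync::{Arc, RwLock};
use std::time::Instant;

// Capture the wait right around write-lock acquisition, then swap in the
// freshly rebuilt channel and log how long we were blocked.
fn swap_channel(slot: &RwLock<Arc<String>>, fresh: String) {
    let start = Instant::now();
    let mut guard = slot.write().expect("poisoned");
    let waited = start.elapsed();
    *guard = Arc::new(fresh);
    println!("debug: rebuilt TLS channel; waited {waited:?} for write lock");
}

fn main() {
    let slot = RwLock::new(Arc::new("old-channel".to_string()));
    swap_channel(&slot, "new-channel".to_string());
}
```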
jlojosnegros:
Good idea. I have added the elapsed time waiting for the write lock to the debug log.