Cached permissions on multiple instances running on Octane / Kubernetes #2575
Comments
I've not yet worked with Octane for a production app, so my "testing" of things like this has been limited, but I would like to dig deeper. Do you mind providing a demo app, replicating the features and symptoms you describe, which we can use for exploring and debugging further?
@drbyte, as the current code base relies on many services (e.g. realtime events after a role switch), it would be easier to create a simple example with a single controller and a few endpoints to change a role. I think the best way to test & debug would be an example which runs on a local microk8s instance. But I would need some time for the example.
That removes the cache on each request and creates a new one again. If you are going to do that, it is better to use the option at laravel-permission/config/permission.php, line 184 in 3183837.
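(That line appears to point at the Octane reset flag discussed later in this thread; a sketch of the relevant config excerpt with the flag enabled, not a verbatim copy of the file:)

```php
// config/permission.php (excerpt)
return [
    // ...

    /*
     * NOTE: This should not be needed in most cases, but an
     * Octane/Vapor combination benefited from it.
     */
    'register_octane_reset_listener' => true,

    // ...
];
```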
What do you use for the cache? Redis?
I am using Redis. I tried a different approach, since I don't want to clear the entire cache. This also works. @drbyte, fyi.
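(The snippet referred to here isn't reproduced in the thread; a minimal sketch of that idea, forgetting only this package's cached permissions instead of flushing the whole Redis store, could look like this:)

```php
use Spatie\Permission\PermissionRegistrar;

// Drops only the spatie/laravel-permission cache entry; everything else
// stored in Redis is left untouched.
app(PermissionRegistrar::class)->forgetCachedPermissions();
```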
Make a PR. (laravel-permission/src/PermissionServiceProvider.php, lines 108 to 111 in 3183837)
I'm not sure it's safe to hard-code any resets to … But is the call to …
I agree with the call to …
@drbyte, we are using some custom permission cache on the user; this part can be ignored. The most important thing seems to be that the wildcard cache is being deleted.
Okay. I think what we'll do is, in this package, add the reset for team ID and wildcard permissions. And then for the parts you need in your own app, you can register a listener on Octane's events yourself.
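(A sketch of that app-side wiring, assuming a hypothetical `App\Listeners\FlushAppPermissionState` listener and the `listeners` array from Octane's default `config/octane.php`:)

```php
// config/octane.php (excerpt)
use Laravel\Octane\Events\OperationTerminated;

return [
    // ...

    'listeners' => [
        // ... Octane's default listeners ...

        OperationTerminated::class => [
            // Hypothetical app-specific listener that resets whatever custom
            // permission state the application keeps between requests.
            \App\Listeners\FlushAppPermissionState::class,
        ],
    ],

    // ...
];
```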
Hmmm ... I forgot: we already have the reset for team ID since 6.1.0 |
So ... @erikn69 @parallels999, I'm curious about your opinion on whether we should add an Octane-specific reset for the code at laravel-permission/src/PermissionRegistrar.php, lines 168 to 176 in 3183837.
Noting: laravel-permission/src/PermissionRegistrar.php, lines 137 to 157 in 3183837:
```php
public function clearPermissionsCollection(): void
{
    $this->permissions = null;
    $this->wildcardPermissionsIndex = [];
}
```
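(For reference, a sketch of the Octane-specific reset being floated here; this is illustrative only, not the package's current `registerOctaneListener()` code, and it assumes `forgetWildcardPermissionIndex()` is callable on the registrar, as the report below suggests:)

```php
use Illuminate\Support\Facades\Event;
use Laravel\Octane\Events\OperationTerminated;
use Spatie\Permission\PermissionRegistrar;

Event::listen(OperationTerminated::class, function (OperationTerminated $event) {
    $registrar = $event->sandbox->make(PermissionRegistrar::class);

    // What the shipped listener already does: reset the in-memory collection.
    $registrar->clearPermissionsCollection();

    // The additional reset under discussion: drop the wildcard permission index.
    $registrar->forgetWildcardPermissionIndex();
});
```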
Describe the bug
Running on Octane with multiple Kubernetes pods, the permissions seem to be cached on each pod. So a simple `can` check for `viewAny` will result in either a 403 or a successful result, depending on which pod the request hits. We are using wildcard permissions.
Versions
PHP version: 8.2
Database version: mysql:8.2.0
To Reproduce
Here is my example code and/or tests showing the problem in my app:
Controller:
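(The actual controller isn't included above; a hypothetical minimal one matching the described `viewAny` check might be:)

```php
namespace App\Http\Controllers;

use App\Models\Document; // hypothetical model, stands in for the real one

class DocumentController extends Controller
{
    public function index()
    {
        // Delegates to DocumentPolicy::viewAny() for the authenticated user.
        $this->authorize('viewAny', Document::class);

        return Document::all();
    }
}
```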
Policy:
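(Likewise hypothetical; with wildcard permissions enabled, a user holding e.g. `documents.*` would pass this check:)

```php
namespace App\Policies;

use App\Models\User;

class DocumentPolicy
{
    public function viewAny(User $user): bool
    {
        // Wildcard permissions such as "documents.*" satisfy this check
        // when enable_wildcard_permission is set to true.
        return $user->can('documents.viewAny');
    }
}
```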
Even though `register_octane_reset_listener` is set to `true`, the permissions seem to be cached somehow. After checking the code, I saw that you call `$event->sandbox->make(PermissionRegistrar::class)->clearPermissionsCollection();` if the flag is set to `true`. I made my own event listener for the `OperationTerminated` event, which looks like this:
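(The listener code isn't reproduced here; judging from the calls quoted below, it presumably resembles this sketch, with a hypothetical class name:)

```php
namespace App\Listeners;

use Laravel\Octane\Events\OperationTerminated;
use Spatie\Permission\PermissionRegistrar;

class ClearPermissionCache
{
    public function handle(OperationTerminated $event): void
    {
        // Resolve the registrar from the request sandbox and forget the
        // cached permissions, which also clears the wildcard index.
        $event->sandbox->make(PermissionRegistrar::class)->forgetCachedPermissions();
    }
}
```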
I noticed that `$event->sandbox->make(PermissionRegistrar::class)->forgetCachedPermissions();` also seems to clear the wildcard permissions, which we are using. Calling `forgetCachedPermissions()` seems to do the trick, as I am no longer able to reproduce the behaviour.

If `enable_wildcard_permission` is set to `true`, shouldn't `forgetWildcardPermissionIndex()` also be called within `registerOctaneListener()`?

I think the comment in config/permission.php ("NOTE: This should not be needed in most cases, but an Octane/Vapor combination benefited from it.") might be misleading. I think every multi-instance setup running on Octane might be affected.

Expected behavior
The permission cache should be cleared after each request, including wildcard permissions.