[4.x]: Internal server error – Could not acquire a mutex lock for the queue #13052
Comments
The mutex error means that Craft thinks another request is currently processing the queue, and is avoiding having two conflicting processes. It’s possible that whatever request initially acquired the queue mutex lock encountered a fatal error that prevented the lock from being released. You should see something mentioned in your logs. By default, mutex locks live in `storage/runtime/mutex`.

As for the huge amount of queue jobs: “Generating image transform” jobs will be added each time an image transform (either on the front end or within the control panel for asset thumbnails) is requested and Craft doesn’t have a record of the transform existing yet. The job itself will verify that the transform really doesn’t exist (e.g. if it was just a missing index record, or if the transform has already been explicitly requested by the browser).

You can disable all this functionality entirely and have Craft just generate transforms immediately when it knows it needs them, rather than deferring to a queue job.
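For context, a minimal sketch of what disabling the deferred generation could look like, assuming the setting being referred to is `generateTransformsBeforePageLoad` (the comment above doesn’t name it explicitly):

```php
<?php
// config/general.php – sketch only; generateTransformsBeforePageLoad is assumed
// to be the setting meant by "generate transforms immediately".
return [
    // Generate image transforms during the request that needs them,
    // instead of deferring to "Generating image transform" queue jobs.
    'generateTransformsBeforePageLoad' => true,
];
```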
@brandonkelly Thanks for the explanation. After several restarts, clearing caches and the mutex folder, and manually truncating the `queue` table, the error seems to be gone.
I thought so too, but the Control Panel was still showing the error even after clearing the mutex folder. So something (maybe the queue runner) was repeatedly acquiring mutex locks and blocking the Control Panel. Why does the CP dashboard need a mutex lock on the queue table?
Yeah, we're used to seeing thousands of queue jobs because of this; that's perfectly normal. We generate tons of image variants for responsive images.

However, I think there's a bug somewhere that can cause these jobs to be created over and over.

I can't reproduce it now, so I'll close this issue, but I'm still not sure if all those queue jobs I was seeing were normal. If I observe the same issue again, I'll report back!
The queue tries to acquire a mutex lock when it’s making a change to it, to avoid race conditions with other requests that may also be doing something to the table. That said, in this case the error is occurring during a routine cleanup operation, which isn’t critically important to the main task (getting info about the active queue job). So I’ve just gone through and relaxed the mutex handling a bit, to skip the cleanup operation if a mutex lock can’t be immediately acquired.
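Not the actual change, but the pattern described – try a non-blocking lock and skip the non-critical cleanup when it can’t be acquired – might look roughly like this (the lock name and surrounding code are hypothetical):

```php
$mutex = \Craft::$app->getMutex();
$lockName = 'queue-cleanup'; // hypothetical lock name, for illustration only

// acquire() with the default timeout of 0 returns immediately instead of waiting.
if ($mutex->acquire($lockName)) {
    try {
        // ... routine cleanup of the queue table would happen here ...
    } finally {
        $mutex->release($lockName);
    }
} else {
    // Another request holds the lock; the cleanup isn't critical, so skip it
    // rather than failing with "Could not acquire a mutex lock for the queue".
}
```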
Good to hear!
If you clear your asset indexing data (Utilities → Caches → Asset indexing data), then Craft will need to rebuild those over time. Maybe that’s what happened?
@brandonkelly Sounds great, thanks!
Ooooh … that could be it. We're running a full cache clear (including the asset indexing data) on every deployment. I wasn't aware this was happening; I thought image transformations would just be delegated to cron jobs if the file doesn't exist on disk.

Maybe writing the asset index should happen on the fly if the file already exists on disk? Not sure about the implications of that, though.

Of course we could just skip the asset indexes during deployments, but I really want to clear all caches to avoid potential errors. Shouldn't the asset transform index be more "permanent" than stuff like template caches?
Queue job, not cron job, but yes.
I could have sworn we were already doing that, but it turns out we were only considering the opposite scenario – if a transform index already existed with a record of the generated transform, we were double-checking that it really did exist (see `cms/src/imagetransforms/ImageTransformer.php`, lines 89–93 at `5d1db27`).
Just updated that logic for the next release, to verify that the transform file doesn’t already exist on disk before a new generation job gets queued up.

(Note this feature is only available for local filesystems, where checking whether the file exists is quick.)
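A rough sketch of that check, assuming the transform filesystem is inspected with a plain file-existence test (the variable names and accessors below are placeholders, not the actual `ImageTransformer` internals):

```php
use craft\fs\Local;

$fs = $volume->getTransformFs();   // filesystem the volume stores transforms on
$path = $transformPath;            // placeholder for the generated transform's path

if ($fs instanceof Local && $fs->fileExists($path)) {
    // The transform file is already on disk: just record the transform index,
    // no "Generating image transform" queue job needed.
} else {
    // Missing file, or a remote filesystem where the existence check is slow:
    // defer generation to a queue job as before.
}
```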
*queue job, right 👀
That looks perfect, thanks! I was worried we'd have to adjust our workflows for this, so this change is a great improvement. Only creating queue jobs when actual image transforms need to happen is a good solution.
@brandonkelly Will a setup with the original files on a remote filesystem also benefit from this change?
@thupsi As I mentioned, the change only applies to Local filesystems. Verifying files exist on remote filesystems is slower, so it’s not something we’d want to do outside of an indexing/transform generation operation.
@brandonkelly Right, I overlooked the fact that it's the original file that gets cached locally, not the transformed image. So, to get any benefit from this improvement, if I have my files on a remote volume, I would need to define a local volume for my transforms.
@thupsi Correct.
What happened?
Description
We've started seeing an issue on one site where the Control Panel is not accessible at all; it just shows an internal server error. The logs show an error related to the queue (full stack trace below).

This came out of nowhere, and I have no idea where to start. It happens only in the live environment, a VPS controlled by Laravel Forge. There's only one queue runner, which is a simple Laravel Daemon (which is just a wrapper around Supervisor). Even stopping the queue runner completely does not remove this error. We're using the default File Cache.

I have tried to manually release all queue jobs (`php craft queue/release all`), but this returns the same error as mentioned above. I also tried clearing all caches, manually clearing the `storage/runtime/mutex` folder, and even restarting the server, but the error still occurs. I have no idea what caused this or how to reproduce it – we're using the same setup for multiple sites without issues. The site had been working fine until yesterday.

The only thing I can think of: there were tons of queue jobs for pending image transforms, probably around 50,000 or so. Should I manually truncate the queue jobs table in the database? But even that might only fix the issue temporarily.

Edit: I've tried truncating the `queue` table. This works, but I've noticed that it fills right back up again. For some reason, something is creating hundreds of queue jobs per second; after a couple of seconds there are already thousands of entries, all trying to create image transforms. This happens even while the system is offline, so no requests can be hitting the frontend, and the queue runner is deactivated. Something is stuck in a loop and is causing those queue jobs to be generated? Any ideas what this might be?
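For anyone debugging something similar, a hypothetical diagnostic snippet (run from any script or console command where Craft is bootstrapped) to see which job types are flooding the default `queue` table:

```php
use craft\db\Query;

// Count pending jobs grouped by description (e.g. "Generating image transform")
// against Craft's default queue table.
$counts = (new Query())
    ->select(['description', 'total' => 'COUNT(*)'])
    ->from('{{%queue}}')
    ->groupBy(['description'])
    ->orderBy(['total' => SORT_DESC])
    ->all();

foreach ($counts as $row) {
    echo "{$row['description']}: {$row['total']}\n";
}
```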
Steps to reproduce
Not sure.
Stacktrace
Craft CMS version
4.4.5
PHP version
8.2
Operating system and version
Ubuntu 20.04.4 LTS
Database type and version
mysql Ver 8.0.32-0ubuntu0.20.04.2 for Linux on x86_64 ((Ubuntu))
Image driver and version
No response
Installed plugins and versions