ErrorException: Caught ErrorException (HTTP/1.0 500 Internal Server Error): fwrite(): write of X bytes failed with errno=32 Broken pipe #1634
I suspect the 500 Internal Server Error part might be from Lumen; I guess it does that whenever it catches an exception. You could try to wrap the StreamHandler inside a WhatFailureGroupHandler to suppress these errors if stdout/stderr cannot be written to for some reason. I am not sure why it breaks though; a broken pipe could mean the stream isn't open anymore when it tries to write. If Lumen is processing more than one request in one process, you could also try to call $handler->close() on those StreamHandlers between requests to make sure the stream gets reopened every time.
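For reference, a minimal sketch of that first suggestion, assuming a plain Monolog setup writing to php://stderr (the channel name and target are placeholders, not taken from this thread):

```php
<?php

use Monolog\Handler\StreamHandler;
use Monolog\Handler\WhatFailureGroupHandler;
use Monolog\Logger;

// StreamHandler writing to stderr; the default level is used so the snippet
// works with both Monolog 2 constants and Monolog 3 level enums.
$streamHandler = new StreamHandler('php://stderr');

$logger = new Logger('app'); // hypothetical channel name

// WhatFailureGroupHandler swallows any exception thrown by the wrapped
// handler(s), so a failed fwrite() to a broken pipe no longer surfaces as
// an ErrorException / 500 response.
$logger->pushHandler(new WhatFailureGroupHandler([$streamHandler]));

$logger->info('If stderr cannot be written to, this call fails silently.');
```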
@Seldaek
This function can be executed only once. After that it throws exceptions like this:
The stream is the same as in the issue header.
@Atantares do you have any details on the config? What is the StreamHandler configured to log to? Overall, sharing the whole config might help.
I ran into this issue this week upon upgrading Monolog from 2.9 to 3.5 as part of upgrading Lumen from 8 to 10. The issue only showed up in production, not in the local dev environment, nor on staging. In all environments the log channel is configured to go to stderr. What is also most frightening is that the same error @ehginanjar and @Atantares reported was returned in the endpoint's response to end users with a 200 HTTP status, and wasn't logged in the application's log, apparently because the stderr stream pipe got broken and fwrite couldn't write anything.
@MMSs I encountered the same thing with Laravel 10. Did you find any solution?
Unfortunately not yet, @xuandung38. I upgraded from PHP 8.0 to 8.2 along with Laravel from 8 to 10, which in turn upgraded the Monolog library from 2.9 to 3.5. I tried a couple of solutions to fix the issue, but none of them worked:
I've run into this issue as well as of this week. We deployed a PHP 8.1 upgrade for a Laravel 8 project to production and immediately ran into this error. The error did not appear at all in the local or staging environments.
Sorry but until someone gets to the bottom of this, I doubt I can do much. I'm not sure why this seems to be an issue with Laravel and not other projects.
This started happening to us after upgrading to PHP 8.1; we bumped Monolog to 2.9.2 and stayed on Lumen 8. I'm at a loss, though, to figure out what suddenly triggered the issue.
Bumped Monolog to 2.9.2 from which version? If you can try to downgrade it again and that fixes it, at least that would give us a range of changes to look through for a regression, because 2.x to 3.x, like the other reports above, is quite a large range.
We were on 2.9.1. I'll try downgrading and see if I can replicate the issue.
Ok, that might be due to this change, which would have made it swallow errors in some cases before: 2.9.1...2.9.2#diff-a1ddc5c4ead6773b8670f9be5007cbe0239638aedbcf12166acf075a9b8742a2; see also #1815
Seems unlikely to me that this is it, but who knows. Anyway, it would be good to know whether it was 2.9.2 that caused it or your upgrade to PHP 8.1.
I'll downgrade, test it out, and let you know. Does this mean that on 2.9.1 we were potentially missing some logs that were being swallowed?
I'm not entirely sure, but it might be, if that's indeed what caused it. My gut feeling says no, though.
What I found while debugging this is in monolog/src/Monolog/Handler/StreamHandler.php, between line 118 and line 127 (at commit 479c936): even though is_resource($this->stream) is true, fwrite still fails to write to it.
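To illustrate what is being described here (a standalone reproduction sketch of mine, not Monolog code): on Linux/macOS, is_resource() keeps returning true for the write end of a pipe whose reader has gone away, while fwrite() fails with errno=32.

```php
<?php

// Start a child process that exits immediately, so the read end of its
// stdin pipe is closed while we still hold the write end.
$process = proc_open('exit 0', [0 => ['pipe', 'r']], $pipes);
usleep(200_000); // give the child time to exit

var_dump(is_resource($pipes[0])); // bool(true) -- the resource still looks valid

// Without the @, this emits a warning of the same shape as in the issue
// title: "fwrite(): Write of 6 bytes failed with errno=32 Broken pipe".
$written = @fwrite($pipes[0], "hello\n");
var_dump($written); // false (or 0 on older PHP versions)

fclose($pipes[0]);
proc_close($process);
```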
I don't know what causes the pipe to break only in production, or why only with the latest version of illuminate/log, but my suspicion is that in production multiple php-fpm processes are constantly trying to open new handlers, leading the system to overload and break previously opened ones. So the solution we went with for now is replacing the StreamHandler with an ErrorLogHandler:

```php
$app->extend(\Psr\Log\LoggerInterface::class, function ($logger) {
    $logger->popHandler();
    $logger->pushHandler(new \Monolog\Handler\ErrorLogHandler());
    return $logger;
});
```
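A related option (my assumption, not something confirmed in this thread; it needs Monolog 2.1+) is FallbackGroupHandler, which keeps the StreamHandler as the primary target and only diverts a record to ErrorLogHandler when the stream write throws. The php://stderr target below is a placeholder for whatever the original channel pointed at:

```php
use Monolog\Handler\ErrorLogHandler;
use Monolog\Handler\FallbackGroupHandler;
use Monolog\Handler\StreamHandler;

$app->extend(\Psr\Log\LoggerInterface::class, function ($logger) {
    $logger->popHandler();
    // Try stderr first; if that write throws (e.g. broken pipe), the record
    // is handed to ErrorLogHandler instead of being lost.
    $logger->pushHandler(new FallbackGroupHandler([
        new StreamHandler('php://stderr'),
        new ErrorLogHandler(),
    ]));
    return $logger;
});
```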
That is likely what can happen. To prove it either way you can try playing around with sysctl and the pipe/open-files limits, e.g. sysctl -w fs.file-max=xxx. If the library does not recycle the stream handlers, at large scale the process can easily run out of the open-files limit locally. The OS can close the pipe at any time really (even if it's local) due to kernel limits.
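If handler recycling is indeed the culprit, one possible mitigation (a sketch; handleNextJob() and the surrounding loop are hypothetical) is to close the logger's handlers after each unit of work in any long-running process, so the StreamHandler reopens its stream lazily on the next write instead of reusing a stale pipe:

```php
// Hypothetical long-running worker loop.
while (true) {
    handleNextJob($logger);

    // Monolog's Logger::close() closes every pushed handler; a StreamHandler
    // then reopens its stream the next time a record is written.
    $logger->close();
}
```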
We downgraded the package to 2.9.1, which is what it was at before we did the PHP 8.1 upgrade, and we're still seeing this error for some reason. Could it be tied to something that changed in 8.1?
Could be, but the only way to be sure is for you to revert that, I guess; not sure how easy that would be for you. I don't see anything that seems relevant in the 8.1 changelog, but who knows, small things are sometimes not mentioned.
We did roll back our PHP 8.1 upgrade and went back to 8.0 along with Monolog 2.9.1, and the issue stopped occurring.
Damn, ok, this just got way more interesting. Without a clear repro case, though, this is going to be hard to report to the PHP project.
What stream URL do you use, by the way? php://stdout or a file? What is it writing into?
We stream it via stdout and stderr, which then goes out to Datadog.
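For reference, a stderr channel in Laravel/Lumen usually looks roughly like this in config/logging.php (a sketch based on the framework's stock config, not the reporter's actual settings):

```php
// config/logging.php
'stderr' => [
    'driver' => 'monolog',
    'handler' => \Monolog\Handler\StreamHandler::class,
    'with' => [
        'stream' => 'php://stderr',
    ],
],
```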
It's weird: locally, using Laragon (domino.test/endpoint), I get that error, but through php artisan serve (http://127.0.0.1:8000/endpoint) the error does not occur.
Same issue and same use case.
Monolog version 2
Got a lot of these errors when sudden burst traffic hit the service. Even though the requests end up succeeding with HTTP 200, unfortunately this adds at least ~2s of latency to every processed request, thus slowing down the service.
Stacktrace:
I use stdout and stderr for the channels. The logs are then collected by Filebeat and sent to Elasticsearch; this is a standard EFK stack in a Kubernetes cluster.
I thought the disk was full, but I was wrong. The disk is still far from full. Then I suspected two things:
I was wondering why it says HTTP 500 Internal Server Error. That comes from the PHP error message, right? 🤔
I just think this kind of Monolog error doesn't affect the application.
Wondering if anyone faces the same issue.
Any help is appreciated.