Description
Intermittently, and only in prod (so far), I'm getting pods crashing with "fatal error: concurrent map writes". I can't be 100% sure, but pop always appears right beside the fatal error in the logs, so it feels like there's a connection. Yeah, I know, "feels like" isn't the best way to describe the problem, but I'm not sure what else could be going on here. I've searched my code for global maps and there just aren't any. This service is very straightforward: an API CRUD service that makes a bunch of DB calls, runs some business logic, maybe makes a few more DB calls, and completes the HTTP request.
Update: we resolved (i.e. hacked around) this by wrapping the call that creates a connection (fetch conn details, then pop.NewConnection()) in a mutex.
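For reference, a minimal sketch of that workaround. The getConnectionDetails helper and the NewConn wrapper are hypothetical stand-ins for our code; only pop.NewConnection() / Open() are library calls, and the mutex simply serializes connection creation:

```go
package db

import (
	"sync"

	"github.com/gobuffalo/pop/v5"
)

// connMu serializes connection creation; without it we were seeing
// "fatal error: concurrent map writes" under load.
var connMu sync.Mutex

// getConnectionDetails is a hypothetical helper that builds
// *pop.ConnectionDetails from env vars / database.yml values.
func getConnectionDetails() *pop.ConnectionDetails {
	return &pop.ConnectionDetails{
		Dialect:  "postgres",
		Database: "app_production",
		// host, user, password, pool size, etc. come from config
	}
}

// NewConn wraps pop.NewConnection behind a mutex so only one
// goroutine creates a connection at a time.
func NewConn() (*pop.Connection, error) {
	connMu.Lock()
	defer connMu.Unlock()

	conn, err := pop.NewConnection(getConnectionDetails())
	if err != nil {
		return nil, err
	}
	if err := conn.Open(); err != nil {
		return nil, err
	}
	return conn, nil
}
```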
Steps to Reproduce the Problem
Attached is the series of log lines - no other log lines from my app are sent to stdout between these (and they are all from the same k8s pod).
log.txt
Also attached is the database.yml file with obvious secret stuff scrubbed:
database.yml.txt
Expected Behavior
I have been seeing this behavior for a couple of months now - I don't remember exactly when it started, but it was during a significant ramp-up for our services, when we were handling a lot more load than before. I've since upgraded Go (currently on 1.13.7) and pop (on 5.1.3). Every time I look into the logs, it is the same pattern: two transactions performing the exact same SQL query, followed by the pop warning about database.yml, and then the fatal error.
Even when we had some seriously verbose logging that made our devops team's eyes water, we didn't see any other log lines near these. I know the two SQL statements were identical because I turned the log verbosity all the way up, which includes logging the SQL statements, durations, any errors, etc., and that's how I know they are exactly the same.
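For anyone trying to reproduce this, the SQL-level logging we used can be approximated with pop's debug flag (a hedged sketch; our actual logging setup is custom, and "production" is assumed to be the connection name in database.yml):

```go
package main

import (
	"log"

	"github.com/gobuffalo/pop/v5"
)

func main() {
	// pop.Debug makes pop log every generated SQL statement,
	// which is how the two identical queries showed up in our logs.
	pop.Debug = true

	conn, err := pop.Connect("production")
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()
}
```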
Will try to come up with a console app to repro; in the meantime, the snippet below illustrates the general class of error.
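This is not a repro of the pop issue itself, just a standalone illustration of the runtime error we are hitting - two goroutines writing to the same map without synchronization:

```go
package main

import "sync"

func main() {
	m := map[int]int{}
	var wg sync.WaitGroup

	// Two goroutines writing to the same map with no locking.
	// The Go runtime detects this and aborts the process with
	// "fatal error: concurrent map writes" - it is not a panic
	// and cannot be caught with recover().
	for g := 0; g < 2; g++ {
		wg.Add(1)
		go func(g int) {
			defer wg.Done()
			for i := 0; i < 100000; i++ {
				m[i] = g
			}
		}(g)
	}
	wg.Wait()
}
```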
Actual Behavior
The fatal error is, well, fatal: the Go runtime aborts the whole process (it isn't a recoverable panic), the pod crashes, and all in-flight transactions are lost.
Info
prod env: k8s with the latest Debian Docker image and Go 1.13.7
From go.mod, here are my gobuffalo packages:
github.com/gobuffalo/packr/v2 v2.7.1
github.com/gobuffalo/pop/v5 v5.1.3
github.com/gobuffalo/validate v2.0.3+incompatible
and other relevant packages:
github.com/jmoiron/sqlx v1.2.0