[Bug] unhandledException TypeError: segment.transaction.isActive is not a function #281
@ali-essam Thanks for reporting this issue, and for the workaround PR you submitted. Do you happen to know what or how this is being triggered? The fact that `segment.transaction.isActive` isn't a function is super confusing, and shouldn't be possible. Obviously it is possible, but we're curious as to what might be causing it. The fix you provided will prevent this error, but it also means a lot of data might be silently dropped on the floor, and we're not sure that's the right long-term solution. (Although folks running into this issue are free to give your PR a try.) Thinking this through out loud: we basically want to find out what's replacing the segment's transaction.
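The workaround PR itself isn't quoted in this thread; as a rough illustration of the kind of guard being discussed, and of why it can silently drop data, here is a minimal sketch (hypothetical code, not the actual PR):

```js
// Hypothetical guard, not the actual PR: only treat the segment's transaction
// as usable when it still looks like a Transaction object. Anything else
// (for example a stray '[Circular]' string) is skipped rather than crashing,
// which is also why data recorded against that segment gets dropped.
function transactionIsActive(segment) {
  const tx = segment && segment.transaction;
  return Boolean(tx && typeof tx.isActive === 'function' && tx.isActive());
}

// Callers would bail out instead of throwing:
// if (!transactionIsActive(segment)) return;
```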
Hi @astormnewrelic, thanks for the quick reply. I totally understand your concern about losing data silently; this hack was just to prevent the application from crashing. However, I'll try logging in production all the info I can about the segment and the transaction (in addition to the hack to prevent crashing), and will get back to you with the results.
I'm currently running newrelic in prod from this branch: master...ali-essam:tmp/debug-is-active. Now we wait and see if this issue happens again.
So I was trying to log the segment object. Seems like …
Okay, so that's what I got from the logs. However, I also noticed a crash with the following stacktrace. Seems like … I will keep monitoring to see the rate. @astormnewrelic Is there any piece of info I might log to help us debug, since I'm unable to log the whole segment?
@ali-essam Interesting -- like most debugging, it leads to even more questions :) Can you tie the bursting to particular routes (or code paths, if this isn't a web application/service) being hit? If so, are those routes doing anything fancy/clever that your other routes are not? Talking out loud, it looks like somehow the segment's current transaction ends up being assigned as a circular reference. Obviously a bug, but the trick will be figuring out which code paths trigger it so we can reproduce the behavior and figure out which assignment is the problematic one.
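One way to chase down which code path performs the problematic assignment (a hypothetical debugging aid, not part of the agent) is to trap the property with an accessor and log a stack trace on suspicious writes:

```js
// Hypothetical debugging aid, not agent code: replace segment.transaction with
// a getter/setter pair so every assignment can be inspected, and log a stack
// trace whenever the new value no longer looks like a Transaction.
function trapTransactionAssignments(segment) {
  let current = segment.transaction;
  Object.defineProperty(segment, 'transaction', {
    configurable: true,
    enumerable: true,
    get() { return current; },
    set(value) {
      if (!value || typeof value.isActive !== 'function') {
        console.error('segment.transaction replaced with:', value);
        console.error(new Error('assignment site').stack);
      }
      current = value;
    }
  });
}
```

This assumes the property is still configurable on the segment; the point is only to capture the stack of whatever write corrupts it.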
So I added some more logs, changed the JSON stringify library to one that handles throws and circular references a bit better, and also disabled the overridden toJSON implementation (set it to undefined before serialization) to avoid the bug in the serializer: master...ali-essam:tmp/debug-is-active-3

After some debugging, it doesn't actually seem that there is a real circular reference; `safe-json-stringify` reports `[Circular]` whenever an object's `toJSON` returns the object itself:

```
> const safeJsonStringify = require('safe-json-stringify')
undefined
> var x = {}
undefined
> x.toJSON = () => x
[Function]
> safeJsonStringify(x)
'"[Circular]"'
```

What I'm probably going to try now is to remove all the custom toJSON from the prototypes themselves so I can better debug the output.
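Stripping the custom toJSON before serializing could look something like this (a hypothetical helper sketched for illustration, not the actual debug branch):

```js
// Hypothetical helper: walk the object's prototype chain, temporarily remove
// any custom toJSON so the raw shape gets serialized, then restore them.
function stringifyWithoutToJSON(obj) {
  const stashed = [];
  for (let proto = obj; proto; proto = Object.getPrototypeOf(proto)) {
    if (Object.prototype.hasOwnProperty.call(proto, 'toJSON')) {
      stashed.push([proto, proto.toJSON]);
      delete proto.toJSON;
    }
  }
  try {
    // May still throw on genuinely circular structures; the finally block
    // makes sure the toJSON implementations are always put back.
    return JSON.stringify(obj);
  } finally {
    for (const [proto, fn] of stashed) proto.toJSON = fn;
  }
}
```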
I just had the same issue, seemingly in production. Edit: …
/bug
@astormnewrelic It's been some time since I commented out the custom toJSON. This is the part of the tracer I'm looking at (around lines 1433-1435):

```js
var tracer = this.tracer
var prevSegment = tracer.segment
tracer.segment = segment
```

You can see the modified code here: master...ali-essam:tmp/debug-is-active-4. I will try to add more logs to this part and see what I get.
Do you guys use the winston logger package? I have the same problem in my GraphQL service when I send multiple requests to another server. There will be an error:

```
this._segment && this._segment.probe('Segment removed from tracer')
                 ^
TypeError: this._segment.probe is not a function
```

So I started to debug this. Because we send requests to an Elasticsearch server, and the Elasticsearch client prints so much log output that it was hard to see anything else, I removed its logger:

```js
const client = new elasticsearch.Client({
  host: ELASTICSEARCH_HOST,
  // log: LoggerForES
})
```

After I removed it, the error never showed up again.
@wszgxa Yes, we do use winston, mainly for HTTP request and error logging. But are you totally sure this is the cause and not just a correlation? If it is, do you have any thoughts on how it could lead to this behavior?
If you want to know exactly what is happening in your system, you will have to open up the Node inspector and see what is going on (assuming you are able to reproduce the error reliably). I've attempted to reproduce with pino and winston just now, but am unable to reproduce what I was seeing before.

My best guess is that fast-safe-stringify mutates our New Relic objects (https://github.com/davidmarkclements/fast-safe-stringify/blob/master/index.js#L23). The New Relic objects have some setters (I would need more time to look into this) that cause some sort of side effect when this happens. Normally fast-safe-stringify will 'reset' the object after it is done stringifying it, but if these objects have custom setters (and a lot of them do, and they are all different from one another), it could be causing fast-safe-stringify to not fully reset these objects and leave the '[Circular]' string behind.

I have not looked into winston too deeply yet. Do you guys use winston with some sort of pretty-print functionality?
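To make the suspected failure mode concrete, here is a minimal sketch of the "replace circular references, stringify, then restore" pattern and of how the restore step can be skipped, leaving '[Circular]' on the live object. This is illustrative code only, not fast-safe-stringify's or the agent's actual source, and the segment/transaction objects are hypothetical stand-ins:

```js
// Simplified 'decirculate, stringify, restore' serializer. The key hazard:
// it mutates the real object, and if anything throws before the restore loop
// runs, the '[Circular]' placeholder stays behind.
function naiveSafeStringify(obj) {
  const replaced = [];
  (function decirc(val, seen) {
    if (val === null || typeof val !== 'object') return;
    seen.push(val);
    for (const key of Object.keys(val)) {
      if (seen.includes(val[key])) {
        replaced.push([val, key, val[key]]);
        val[key] = '[Circular]'; // temporary mutation of the live object
      } else {
        decirc(val[key], seen);
      }
    }
    seen.pop();
  })(obj, []);

  const out = JSON.stringify(obj); // if this throws, the restore below never runs
  for (const [parent, key, original] of replaced) {
    parent[key] = original;
  }
  return out;
}

// Hypothetical stand-ins for the agent's objects.
const transaction = { isActive() { return true; } };
const segment = { transaction, bad: 10n }; // the BigInt makes JSON.stringify throw
transaction.segment = segment;             // circular reference back to the root

try {
  naiveSafeStringify(transaction);
} catch (e) {
  // JSON.stringify threw before the restore loop ran
}

console.log(segment.transaction); // '[Circular]' -- and .isActive() is now gone
```

After the failed serialization, `segment.transaction.isActive()` throws exactly the TypeError from this issue's title.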
Hey all, I've pushed up a best guess at a fix for this issue on this branch. It seems like this issue is somewhat problematic to track down and reproduce, so we can at least try testing the potential fix against the environments that reproduce it most consistently (i.e. your environments). If/when you find some time to do some more testing, try this change out and give me a +1/-1 on whether it makes a difference.
This should in theory work because it prevents the property from being enumerated in an object-key iteration (such as the one fast-safe-stringify performs). I will see if I can reproduce this and report back when I get the chance.
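The fix branch itself isn't shown in the thread, but assuming it marks the property non-enumerable via Object.defineProperty, the idea can be sketched like this (hypothetical illustration only):

```js
// Non-enumerable properties are skipped by Object.keys, for...in, and
// JSON.stringify, so stringify-based loggers never touch (or temporarily
// overwrite) the transaction reference, while direct access still works.
const segment = {};
Object.defineProperty(segment, 'transaction', {
  value: { isActive() { return true; } },
  writable: true,
  enumerable: false,
  configurable: true
});

console.log(JSON.stringify(segment));        // '{}' -- transaction not serialized
console.log(Object.keys(segment));           // []
console.log(segment.transaction.isActive()); // true -- still reachable directly
```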
Hi, I have the same problem and I can confirm that 'fast-safe-stringify' is causing this.
With version …, has anyone upgraded, and are they still having issues?
We haven't heard any more cases of this recently, so I'm going to close this one out. Feel free to reach out if that is not the case. Thank you, Michael
Hi, I'm still having this issue in version 6.2.0. I started getting it after changing from Bunyan to Pino for logging. Is there any help I can get or provide?
@daviddutch thanks for letting us know. I've reopened the issue. So far, in all the cases of this problem we've found, there has been a library deleting or modifying properties on objects we pass around to keep track of state for our instrumentation. Where we can, we are attempting to limit or prevent that. Are you logging out request information, or other similar framework objects? We can do some …
Closing this out due to inactivity. If anyone's still running into this issue and can provide details (or better yet, a reproduction), please don't hesitate to reopen and/or contact support directly.
newrelic causes unhandledException and crashes the process