
"Unrecognized version NaN of Express detected; not instrumenting" #44

Closed
aseemk opened this issue Sep 23, 2013 · 22 comments

Comments

aseemk commented Sep 23, 2013

We upgraded node-newrelic from v0.9.20 to v0.10.3 and saw this. Just upgraded again to v0.11.0 now and the error remains.

Beautified:

{
    "name": "newrelic",
    "hostname": "876c9ef8-f789-4e00-b28d-eebd7768a7ec",
    "pid": 10,
    "component": "express",
    "level": 40,
    "msg": "Unrecognized version NaN of Express detected; not instrumenting",
    "time": "2013-09-23T17:33:49.294Z",
    "v": 0
}

We're still running Express 2.5.11, unfortunately. We'd obviously like to upgrade to Express 3, but it's a major upgrade for us and we can't prioritize it just yet.

The interesting thing is that data still seems to be getting sent to rpm.newrelic.com. So is this really a non-issue?

othiym23 (Contributor) commented:

It's a real issue! We were relying on there being a version number set on the Express module's exports for Express < 3.0, and apparently that's not a safe assumption! There will be a fix for this in the next beta. Thanks for the report!

othiym23 (Contributor) commented:

Can you compress newrelic_agent.log and send it to me at [email protected]?

This may or may not be an issue in practice -- are you seeing the new Express-based route names that the README talks about for version 0.11? I'm looking at the relevant bits of Express 2.5.11, and the error you're seeing shouldn't be happening, as 2.5.11 has an exports.version that the New Relic module can use. Are you using some kind of wrapper around Express in your application?

aseemk commented Sep 24, 2013

Can you compress newrelic_agent.log and send it to me at [email protected]?

We're running on Heroku, where you can't access the filesystem of running dynos, so we have NEW_RELIC_LOG set to stdout (along with NEW_RELIC_LOG_LEVEL set to warn). How can I get you what you need?

Are you seeing the new Express-based route names that the README talks about for version 0.11?

Yes, I think so. We're load testing a route that's registered as:

app.get '/:id/:rel', resources.listConnections

And our second-most common web transaction on rpm.newrelic.com is /GET#/:id/:rel. (Is the # on purpose? Any reason you guys don't just use a space?)

That said, we do see /* as our most common web transaction — by a factor of ~4x — which suggests something is wrong. Our only route with * in it is our very last catch-all/fallback route:

app.all '*', (req, res, _) ->
    # return 404 with a proper JSON error body

Screenshot: [screen shot 2013-09-23 at 9 19 46 pm]

Are you using some kind of wrapper around Express in your application?

No, but our setup does use vhosting with two instances of Express, if that matters. (Related: issue #32 for that leading to double-counting requests.) Our folder structure looks like this:

api/
+ app.coffee
+ node_modules/
  + express/

www/
+ app.coffee
+ cluster.coffee
+ node_modules/
  + express/

With www/cluster.coffee (which is what we run on Heroku) having code like this:

api = require '../api/app'
express = require 'express'
www = require './app'

app = express.createServer()
app.use express.vhost settings.API_HOSTNAME, api
app.use express.vhost '*', www
app.listen settings.PORT

Does that help?

aseemk commented Sep 24, 2013

When I load test a route registered as:

app.get '/:id', resources.get

The transaction name detection is worse FYI:

[screenshot: screen shot 2013-09-23 at 9 34 18 pm]

othiym23 (Contributor) commented:

We're running on Heroku, where you can't access the filesystem of running dynos, so we have NEW_RELIC_LOG set to stdout (along with NEW_RELIC_LOG_LEVEL set to warn). How can I get you what you need?

Up the log level to trace and restart your application, capture about a minute of output (I just want to see how the instrumentation is being bootstrapped), dump it in a file, and send me the gzipped results.

(Is the # on purpose? Any reason you guys don't just use a space?)

You're not supposed to see that. ;) Figuring out how to present that to customers from the New Relic side is one of our outstanding to-dos before general release. I chose # because I wanted something that was unlikely to actually be in a route.

That said, we do see /* as our most common web transaction — by a factor of ~4x — which suggests something is wrong. Our only route with * in it is our very last catch-all/fallback route:

app.all '*', (req, res, _) ->
    # return 404 with a proper JSON error body

You wouldn't see /* for that, you'd see /GET#*. Are you using the static middleware? Unless you have no static resources, those requests are probably all getting rolled up to /* by our naming logic.

No, but our setup does use vhosting with two instances of Express, if that matters. (Related: issue #32 for that leading to double-counting requests.)

Thanks for reminding me of this; I'll take another look at it now that the request-naming logic has been added.

Are api and www going to end up running different versions of express? I'm curious as to why you have express installed for both apps if you're going to be using vhost.

aseemk commented Sep 24, 2013

Log file sent. Lemme know if you didn't get it.

Are you using the static middleware?

We're load testing our REST API, which does not use a static middleware, no.

Are api and www going to end up running different versions of express? I'm curious as to why you have express installed for both apps if you're going to be using vhost.

We have them running the same version of Express right now, and in practice, we'd keep it that way. The reason for having both installed is partly (a) idealism and partly (b) pragmatism.

Idealism: they're written to be independent npm modules, and npm is all about local, isolated dependencies, so shouldn't they have their own instances of Express?

We only vhost to work around Heroku's limitation that there can only be one "web" process (i.e. has access to an external port) within a Heroku app/repo.

In practice, Express (2.x at least; haven't investigated this area of 3.x) modifies native http prototypes (ClientRequest, etc.), so we do make sure we're running the same version of Express across both.

But that's also why we split them: in case Express creates any state within a require()'d instance (but not within an app instance), we don't want to have conflicts between www and api.

That was a long-winded explanation, but hopefully that explains why we do this. =)

aseemk commented Sep 24, 2013

All that said, though, if you do feel that vhosting across Express instances is an unsupported use case, I'd be happy to reconsider and maybe take a shot at consolidating/sharing the Express dependency between the two. I'm just not sure I can prioritize that for a week or two, so we'd unfortunately have to live without New Relic for the time being.

aseemk commented Sep 24, 2013

Another data point for the transaction name issue: just load tested a POST /users route, registered explicitly as app.post '/users', ..., and I only see it as /*.

othiym23 (Contributor) commented:

I suggest this with the utmost humility, but all three of Nodetime, StrongOps, and New Relic do weird things to your Node process in order to extract metrics. If you're running more than one of us at a time, you may want to reconsider that. 😁 All three modules should get along, but I think it's pretty safe to say that running more than one at the same time is unsupported.

There's a lot going on in the logs you sent:

  1. There are four Node processes pulling in New Relic in the log you sent me, all of which are applying their own instrumentation and maintaining their own connection to New Relic. This isn't necessarily a problem – it's how New Relic works in apps that are using cluster – but it does make reading the logs a little tricky. Are you using cluster? It's not included in the snippet that you included above.
  2. The use of express.vhost does indeed explain why your requests are being counted twice. Your requests are being handled by two HTTP listeners, both of which are instrumented: one for the part running vhost, one for either api or www. There are probably ways to get around the double-counting, but this isn't a situation we've run into elsewhere, so I guess what I'm saying is that this is an unsupported configuration at the moment.
  3. The error message about Express that led to you opening this ticket is indeed a red herring -- something inside your modules is calling require('express') on something that isn't actually the main Express module.

As for the metric naming issues, I'll have to set up a test app using vhost and see what's going on. The function I'm wrapping to get the route names should be pretty robust, but the internals of Express and Connect are pretty complicated, and it'll take some time to parse out what's happening. Given that New Relic for Node is still officially unsupported, I'll have to poke at this in my own time, but I am curious as to what's going on here.

othiym23 (Contributor) commented:

I can tell you that my preliminary results suggest that if we can solve the double-counting issue, we can solve the metric naming problems. 🎉

othiym23 (Contributor) commented:

I think I have a solution for both the double-counting and the weird metric naming issues, but it will require careful testing before I can put it out in a new build, because it makes a fundamental change to how the module turns web requests into New Relic transactions. It still doesn't address the original issue on this ticket, which I haven't been able to reproduce in my dev environment.

aseemk commented Sep 24, 2013

I suggest this with the utmost humility … If you're running more than one of us at a time, you may want to reconsider that. 😁

Good to know, but unfortunate! Nodetime and New Relic both have their strengths, and while Nodetime is further ahead right now in some ways, we like to follow along with New Relic progress as we think it's promising. =)

We also just added Nodefly to try it out.

I'd be curious to know how many other people out there use both Nodetime and New Relic...

Are you using cluster? It's not included in the snippet that you included above.

Yep. The snippet I included above is just wrapped in the else case of an if cluster.isMaster check. (The if case just spawns workers.)

If the issue is just readability of logs, I'm happy to turn that down to 1 process next time you need logs. =)

something inside your modules is calling require('express') on something that isn't actually the main Express module.

Interesting! Can you clarify what you mean? All three files — cluster, www, and api — are calling require('express'), because all three of them fire up Express servers. The cluster and www files share the same Express.

Great to hear of the fix you're working on. Thanks for the help!

othiym23 (Contributor) commented:

I'd be curious to know how many other people out there use both Nodetime and New Relic...

I don't know but could find out. Not that many, is the quick answer.

If the issue is just readability of logs, I'm happy to turn that down to 1 process next time you need logs. =)

No, it's easy enough to sort through them all (big ups to Bunyan for providing such a great interface for filtering logs), was just curious as to why I was seeing multiple processes in the logs. I'd check with @zeke and see if having cluster is going to buy you much on Heroku; I don't know if dynos have access to multiple CPUs.

I've published v0.11.1 to npm; at the very least this closes #32. Let me know if you're still seeing the issue with Express after this, and I'll see if I can dig in further. I'm not sure the double-counting and the log message are connected. I am curious to see what this does to your transaction volume and naming, though!

aseemk commented Sep 25, 2013

I've published v0.11.1 to npm

Thanks! Unfortunately, I just tried it, and it throws this error on startup:

(function(){return function static(req, res, next){return agent.tracer.callbac 
                   ^^^^^^^^^^^^^^^ 
SyntaxError: Use of future reserved word in strict mode 
    at wrapHandle (/app/node_modules/newrelic/lib/instrumentation/connect.js:72:70) 
    ...

This is certainly coming from the eval you added in bf14652 encountering Connect's static middleware. I got a kick out of that commit's code comments, btw. =)

On that note, looking at the release notes for that change:

Changed the Connect instrumentation to preserve the names of middleware functions after wrapping them. If you need this change, you should probably change your code so you don't need it anymore.

Just to clarify, was this affecting us? Or was this unrelated? If it was us, what are we doing bad that necessitated this? (Using multiple instrumentation libs?)

Let me know if you're still seeing the issue with Express after this

I am:

{"name":"newrelic","hostname":"5defab5f-4f70-4ede-8bb1-7b11be611a43","pid":12,"component":"express","level":40,"msg":"Unrecognized version NaN of Express detected; not instrumenting","time":"2013-09-25T18:50:23.147Z","v":0} 

And that line comes before the static error above.

I can get you detailed trace logs again if that helps. Lemme know if you'd like 'em.

I'd check with @zeke and see if having cluster is going to buy you much on Heroku; I don't know if dynos have access to multiple CPUs.

Good suggestion to sanity check this, but we do see 4 cores on Heroku FWIW, otherwise our cluster module wouldn't be spinning up workers. Even if that's wrong, we're on 2X dynos, which may make a difference?

https://discussion.heroku.com/t/recommended-use-of-nodes-cluster-module/96/3

I've shot off an email to Heroku support (which should reach @zeke) to confirm. Thanks for the suggestion!

othiym23 (Contributor) commented:

Thanks! Unfortunately, I just tried it, and it throws this error on startup

See #46, for which we'll be landing a fix (which we will then publish on npm). If you wanted to file a bug on Connect and / or Express pointing out that static is a really bad name choice for that middleware, I wouldn't complain.

Just to clarify, was this affecting us? Or was this unrelated?

It's not unrelated, but this is on New Relic to fix. That note was targeted at people who are crawling their Express application's middleware stack looking for specific middleware by name. The number of people trying to do crazy metaprogramming shenanigans in Node should be kept to a minimum.

And that line comes before the static error above.

I can get you detailed trace logs again if that helps. Lemme know if you'd like 'em.

Thanks for the confirmation. I'll keep poking at this. I'm pretty sure that you can ignore the error for now, but I'm going to confirm that nothing is evading the instrumentation hooks before I close the issue.

I'll leave a note here when 0.11.2 goes out. Thanks for testing 0.11.1, and sorry about the bug!

othiym23 (Contributor) commented:

The startup crash is addressed by 8a317f5. I'll keep looking at the startup errors.

aseemk commented Sep 26, 2013

Great, thanks. Upgraded to 0.11.2, and it works again for us.

We still see this NaN version error, FYI. I can't tell you yet if the transaction naming or double-counting are improved/fixed, because we were hammering on development pretty hard just before this, so I'll get back to you on that tomorrow or Monday.

Noteworthy though, I see an interesting new error that I've never seen before:

{"name":"newrelic","hostname":"ba38f632-9d3d-4733-9e03-dc52b917fe4e","pid":8,"component":"express","level":50,"msg":"No Express route to use for naming.","time":"2013-09-26T23:42:23.841Z","v":0} 

This matches the time around which we returned our only 500 response, in a round of testing I just did.

I can try to repro this later and send you trace logs again if you'd like. Thanks again for working on all of this!

othiym23 commented Oct 2, 2013

I think this is probably (probably!) something to do with the cluster module and can be safely ignored forever, but I still have on my todo list giving this a more thorough investigation and seeing if I can eliminate the logged error if it's spurious.

othiym23 (Contributor) commented:

Hey, Aseem!

This ticket is about a billion years old at this point, but I was wondering whether this continued to be an issue for you. We haven't yet gotten around to tackling vhost support in Express (mostly because you're the only person who has run into this issue), but we did nerf those chatty error messages, so I thought it was worth checking in with you.

Let us know! Thanks!

zeke commented Jan 18, 2014

I still get notified about this thread. Hi guys. Happy new year.

othiym23 commented Feb 1, 2014

Hi, @zeke! This issue is going to take a nap in the warm, soft dirt now. Aseem, feel free to open a new issue if you run into this again. We will get to vhost support one of these days.

othiym23 closed this as completed Feb 1, 2014
aseemk commented Aug 8, 2014

Sorry for the delay @othiym23! This got lost in the busy-ness of day-to-day work.

We are planning to upgrade our node-newrelic soon and will let you know!

cmcadams-newrelic pushed a commit to cmcadams-newrelic/node-newrelic that referenced this issue Jan 29, 2024
Add Prisma and Sequelize sample apps
jsumners-nr pushed a commit to jsumners-nr/node-newrelic that referenced this issue Apr 11, 2024
Updates readme to standard template with some content modifications.
bizob2828 referenced this issue in bizob2828/node-newrelic Jul 26, 2024
fix(Next.js): Provide tech writing review