
util: add computeTime option to inspect #19994

Closed
wants to merge 1 commit

Conversation

BridgeAR
Member

@BridgeAR BridgeAR commented Apr 13, 2018

This adds a limit to the maximum computation time while using
util.inspect(). That makes sure the event loop is not blocked for
too long while still producing good output in pretty much all
cases.

Fixes: #19405

Checklist
  • make -j4 test (UNIX), or vcbuild test (Windows) passes
  • tests and/or benchmarks are included
  • documentation is changed or added
  • commit message follows commit guidelines

@BridgeAR BridgeAR added the semver-major PRs that contain breaking changes and should be released in the next major version. label Apr 13, 2018
@BridgeAR BridgeAR requested a review from a team April 13, 2018 01:27
@nodejs-github-bot nodejs-github-bot added the util Issues and PRs related to the built-in util module. label Apr 13, 2018
@BridgeAR
Member Author

BridgeAR commented Apr 13, 2018

@ex1st are you OK with the test? Otherwise I can modify it, it was just handy :-)
Update: I use a different test case now.

@BridgeAR
Member Author

doc/api/util.md Outdated
* `computeTime` {number} Specifies the maximum time in milliseconds that
`util.inspect()` may take to compute the output. If it exceeds that time, a
`INSPECTION_TIMEOUT` warning is emitted. If the computation takes more than
100 ms above that time, it will limit all objects to the absolut minimum
Contributor

s/absolut/absolute/

@mscdex
Contributor

mscdex commented Apr 13, 2018

If we really want to go this computeTime route, I wonder if it would be better and/or faster to use process.hrtime() instead?

@BridgeAR
Member Author

Comment addressed. @mscdex I thought about that as well but the implementation becomes more difficult in that case. Right now I cannot think of an implementation with a single variable when using process.hrtime and I want to avoid passing through another context property.

New CI https://ci.nodejs.org/job/node-test-pull-request/14237/

@jasnell
Member

jasnell commented Apr 13, 2018

hrtime is definitely the better choice

@Trott
Member

Trott commented Apr 13, 2018

/ping @nodejs/util

@mcollina
Member

Comment addressed. @mscdex I thought about that as well but the implementation becomes more difficult in that case. Right now I cannot think of an implementation with a single variable when using process.hrtime and I want to avoid passing through another context property.

Here is a snippet to convert hrtime to millis.
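
A minimal sketch of such a conversion (the helper name hrtimeToMs is illustrative and not necessarily the snippet referred to):

// Convert a process.hrtime() tuple ([seconds, nanoseconds]) to milliseconds.
function hrtimeToMs([seconds, nanoseconds]) {
  return seconds * 1e3 + nanoseconds / 1e6;
}
// Example: hrtimeToMs(process.hrtime()) gives milliseconds since an arbitrary
// point in the past; mainly useful for computing differences between two calls.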

@addaleax
Member

Is making util.inspect() unpredictable really a good idea? That should be a very conscious decision that we’re making @nodejs/tsc

@devsnek
Member

devsnek commented Apr 13, 2018

If we're so worried about the time this takes that we're adding timeouts to it, perhaps we should start looking at what takes up all this time. IMO if something needs a time limit like this it shouldn't exist in its current form.

@mcollina
Member

IMHO the best solution is to apply an algorithm that removes circular dependencies, such as https://github.com/davidmarkclements/fast-safe-stringify/blob/master/index.js#L18-L43.
It's a better approach. Circular dependencies are the problem.

doc/api/util.md Outdated
@@ -404,6 +407,11 @@ changes:
This is useful to minimize the inspection output for large complicated
objects. To make it recurse indefinitely pass `null` or `Infinity`.
**Default:** `Infinity`.
* `computeTime` {number} Specifies the maximum time in milliseconds that
`util.inspect()` may take to compute the output. If it exceeds that time, a
Contributor

Nit: a `INSPECTION_TIMEOUT` -> an `INSPECTION_TIMEOUT`

lib/util.js Outdated
process.emitWarning('util.inspect takes very long.', {
code: 'INSPECTION_TIMEOUT',
detail: 'util.inspect() received an object that takes very long to ' +
'compute the output for. If it takes more than 100 more ' +
Contributor

Nit: more than 100 more -> more than 100?

Member Author

That has a different meaning though.

Contributor

Maybe something like "If it takes 100 milliseconds more"? I am not sure, but more than 100 more sounds a bit heavy.

Member Author

I think that is still too ambiguous. I see your point about improving the wording though.

@BridgeAR
Member Author

About the performance bottleneck: it is not about circular references. It is Object.keys. To make sure we always have all the necessary information it is important to always call it, and in all big objects that I profiled so far it accounts for 80-90% of all the time that util.inspect takes. Switching to for in was not a viable alternative in my tests.

@addaleax

Is making util.inspect() unpredictable really a good idea?

I totally understand that this is not ideal but I doubt that blocking the event loop is any better. There were multiple cases before where people complained about a blocked event loop. That was also the reason for the maxArrayLength option. But relying on arbitrary limits like depth, maxArrayLength or things like a max string length / max [any object structure type] will only stop some objects from taking too long to compute while others will be limited in their output for no reason.
That is why I decided to go for the computation time, especially since we warn against relying on the output programmatically. However, it is possible for the user to get the old behavior by setting that option to Infinity.

@BridgeAR
Member Author

I switched to process.hrtime, addressed the doc nit and increased the default timeout to 300 + 100 ms.

New CI https://ci.nodejs.org/job/node-test-pull-request/14247/

@bnoordhuis
Member

About the performance bottleneck: it is not about circular references. It is Object.keys.

Then the logical thing to do is to concoct an Object.keys() that lets you limit the number of returned keys, isn't it?

Strawman: start with n=1e3 and keep subtracting the number of keys until n <= 0, then print dots to indicate there's more but we bailed out.
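
A rough sketch of that strawman (illustrative only; inspectWithKeyBudget is a made-up name, not part of this PR):

// Share one key budget across the whole traversal; once it is spent,
// stop descending and print dots to indicate there is more.
function inspectWithKeyBudget(value, budget = { n: 1e3 }) {
  if (value === null || typeof value !== 'object')
    return String(value);
  if (budget.n <= 0)
    return '...';                      // budget exhausted: bail out
  const keys = Object.keys(value);     // a real fix would also cap this call
  budget.n -= keys.length;             // keep subtracting the number of keys
  const parts = keys.map(
    (key) => `${key}: ${inspectWithKeyBudget(value[key], budget)}`);
  return `{ ${parts.join(', ')} }`;
}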

Switching to for in was not a viable alternative in my tests.

Why is that? Performance or compatibility?

lib/util.js Outdated
@@ -394,6 +404,23 @@ function formatValue(ctx, value, recurseTimes, ln) {
return ctx.stylize('null', 'null');
}

const now = getNow();
Contributor

@mscdex mscdex Apr 13, 2018

Can't we just pass the previous process.hrtime() value to process.hrtime() to get a time difference and then compare that value instead?
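
A rough sketch of that idea (checkElapsed, ctx.startTime and ctx.computeTime are illustrative names, not the PR's actual code):

// Store the start tuple on the context once, then pass it back to
// process.hrtime() to obtain the elapsed [seconds, nanoseconds] difference
// instead of taking a second absolute reading.
function checkElapsed(ctx) {
  if (ctx.startTime === undefined) {
    ctx.startTime = process.hrtime();
    return false;
  }
  const [sec, nsec] = process.hrtime(ctx.startTime);
  return sec * 1e3 + nsec / 1e6 > ctx.computeTime;  // elapsed ms vs. the option
}
// A caller could then bail out of further formatting when checkElapsed(ctx) is true.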

Member Author

Yes, but that requires a second property on the ctx object and I would like to prevent that.

@BridgeAR
Member Author

@bnoordhuis

Then the logical thing to do is to concoct an Object.keys() that lets you limit the number of returned keys, isn't it?

Partially yes, partially no. To improve the performance of util.inspect significantly we need that plus, more importantly, a function to know the number of own properties on an object. Nevertheless, there will always be cases where the object is just too big to compute it all at once.

I also already spoke with @bmeurer about the problem with Object.keys because of util.inspect and hope that it will be improved further at some point.

Why is that? Performance or compatibility?

Both. The performance highly depends on what object it is. For arrays, e.g., it is worse to use for in. And without using Object.keys we do not know the real number of properties and that would break the current output in multiple ways.

@bnoordhuis
Member

more importantly: a function to know the number of own properties on an object.

We could add something like v8::Object::GetPropertyCount().

It's not exactly trivial and I'd have to check if properties of dictionary mode objects can be counted in better than O(n) time but I'm reasonably sure that for everything else it can be done in amortized O(1).

@BridgeAR
Member Author

@bnoordhuis

We could add something like v8::Object::GetPropertyCount().

That is exactly what I asked @bmeurer for.

@BridgeAR
Member Author

Just a heads up about for in: I just tried to optimize a few cases where we only had to know if there are any keys or none, and the performance is about the same. That is also what the profile output shows in those cases. I guess for in does about the same work as Object.keys before returning the first key.

 [Summary]:
   ticks  total  nonlib   name
    695    8.6%    8.6%  JavaScript
   7345   90.9%   91.2%  C++
    526    6.5%    6.5%  GC
     30    0.4%          Shared libraries
     14    0.2%          Unaccounted

 [C++ entry points]:
   ticks    cpp   total   name
   6412   94.2%   79.3%  v8::internal::Runtime_ObjectKeys(int, v8::internal::Object**, v8::internal::Isolate*)
    119    1.7%    1.5%  v8::internal::Builtin_ObjectGetOwnPropertySymbols(int, v8::internal::Object**, v8::internal::Isolate*)

 [Summary]:
   ticks  total  nonlib   name
    721    8.4%    8.4%  JavaScript
   7853   91.0%   91.3%  C++
    550    6.4%    6.4%  GC
     36    0.4%          Shared libraries
     24    0.3%          Unaccounted

 [C++ entry points]:
   ticks    cpp   total   name
   6854   94.0%   79.4%  v8::internal::Runtime_ForInEnumerate(int, v8::internal::Object**, v8::internal::Isolate*)
    150    2.1%    1.7%  v8::internal::Builtin_ObjectGetOwnPropertySymbols(int, v8::internal::Object**, v8::internal::Isolate*)

@BridgeAR
Member Author

Seems like there was a bug in util.inspect after all... Object.keys was called for each circular entry even though that was not necessary at all. A fix is in #20007.

@jasnell
Member

jasnell commented Apr 13, 2018

To give a better idea here, this definitely is not about circular references. The issue that triggered this is a failure in npm update after @BridgeAR changed the default depth to infinity. npm uses massive in-memory objects that cause inspect to choke. At a depth of 8, one npmlog call generates a single string that is 6.6 M and over 81k lines long. These are deep objects that do not contain a lot of keys, but there are many, many of them. They also happen to contain the full readme.md content of every dependency in a module graph. The current inspect implementation is depth-first and concatenates strings recursively upwards using a variety of string concats, template strings, and array joins. For this one object that results in an absolutely massive amount of memory thrashing.

@BridgeAR
Member Author

Regardless of the two perf optimizations that I opened: I still believe that this is probably a good idea. The reason is that this way there is a guarantee that the event loop is not blocked for too long. It should probably only be a "last resort" thing, so I updated the warning time to 1000 ms and the bail-out to 1250 ms.

@BridgeAR
Member Author

BridgeAR commented May 9, 2018

I updated the code to use process.hrtime() the way it should be used and renamed the option to maxClockTime. Whether we use the clock time or the CPU time is something I personally do not care much about. Using a watchdog would also be more complicated. I personally think the implementation is sufficient for our use case. I discussed another option with @addaleax where we could check for a maximum number of entries instead of the time, but that is still not completely reliable as the computation time is going to differ depending on the system used etc. It has the advantage that the output would be deterministic in all cases.

CI https://ci.nodejs.org/job/node-test-pull-request/14745/

Member

@bnoordhuis bnoordhuis left a comment

Some comments:

  1. There is no opt-in or even an opt-out, it's an always-on feature. The best you can do is set it to a really high value. At least make it accept Infinity as an opt-out, but I think it should be opt-in.

  2. Calling process.hrtime() so frequently is going to result in performance regressions.

This PR essentially assumes that process.hrtime() is free when in fact it can take 100+ microseconds on some systems. It's going to dominate with some workloads.

doc/api/util.md Outdated
@@ -423,6 +426,11 @@ changes:
`object`. This is useful to minimize the inspection output for large
complicated objects. To make it recurse indefinitely pass `null` or
`Infinity`. **Default:** `Infinity`.
* `maxClockTime` {number} Specifies the maximum time in milliseconds that
Member

Just maxTime? I'm sure everyone will understand that to be wall clock time, i.e., human time.

maxClockTime inevitably raises the question "what clock?"

lib/util.js Outdated
@@ -86,7 +86,8 @@ const inspectDefaultOptions = Object.seal({
showProxy: false,
maxArrayLength: 100,
breakLength: 60,
compact: true
compact: true,
clockTime: 1000
Member

I think you misspelled this... you're using inspectDefaultOptions.maxClockTime further down at any rate.

lib/util.js Outdated
compact: inspectDefaultOptions.compact
compact: inspectDefaultOptions.compact,
maxClockTime: inspectDefaultOptions.maxClockTime,
startTime: process.hrtime()
Member

If you add a comma, the diff will be less noisy next time.

lib/util.js Outdated
@@ -433,6 +443,22 @@ function formatValue(ctx, value, recurseTimes, ln) {
return ctx.stylize('null', 'null');
}

if (getClockTime(ctx.startTime) > ctx.maxClockTime) {
Member

Cache the result of getClockTime(). process.hrtime() is not necessarily cheap.

@BridgeAR
Member Author

I completely rewrote the implementation to now rely on a "complexity" value first. It is accumulated by counting the inspected properties and by normalizing these in a few places (error inspection and complex keys take quite some time).

After reaching a complexity of 1e5 it will check every further 1e5 units whether a second has passed. If that is the case, it will immediately limit all further inspection to the bare minimum and trigger a warning.

Using this complexity should a) address the concerns about the overhead of measuring the time and b) also address the concerns about a non-deterministic return value. In my tests, where my CPU was randomly on fire from various other sources, it returned the same entry in about 70% of the runs and in all other cases only up to two other results. So the range in which the value might differ is now also limited to an absolute minimum and in most cases it should always return the same value.

All of this does not apply to small objects as they would not reach the complexity of 1e5 or take more than one second to compute. The current numbers turned out to be reliable in a bunch of tests that I wrote specifically for this but that do not seem useful to add as general tests (I checked, e.g., very different objects and the time it took to compute those, computation with high heap memory usage, and other things).
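
Roughly, the mechanism described above works like the following sketch (checkBudget and the ctx fields are illustrative names, not the actual implementation):

// Illustrative only: accumulate "complexity" units and, once 1e5 units have
// been consumed, check the wall time; after one second, clamp all further
// inspection to the bare minimum and emit a warning.
function checkBudget(ctx, units) {
  if (ctx.stop)
    return;                              // already bailed out
  ctx.complexity += units;
  if (ctx.complexity < 1e5)
    return;                              // small objects never pay for hrtime
  ctx.complexity -= 1e5;                 // re-check after another 1e5 units
  if (ctx.startTime === undefined) {
    ctx.startTime = process.hrtime();    // start measuring lazily
    return;
  }
  const [sec, nsec] = process.hrtime(ctx.startTime);
  if (sec * 1e3 + nsec / 1e6 > 1e3) {    // more than one second has passed
    ctx.stop = true;                     // limit all further inspection
    process.emitWarning('util.inspect took longer than expected', {
      code: 'INSPECTION_ABORTED'
    });
  }
}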

If requested, I can also open a new PR since the implementation differs significantly from the original approach.

CI https://ci.nodejs.org/job/node-test-pull-request/14914/
Benchmarks https://ci.nodejs.org/view/Node.js%20benchmark/job/benchmark-node-micro-benchmarks/195/

@BridgeAR BridgeAR added the notable-change PRs with changes that should be highlighted in changelogs. label May 16, 2018
@BridgeAR
Member Author

This can also land as a semver-minor in case the default is first set to Infinity. That way everything would behave exactly as before.

As semver-major it could then be set to 1e5.

@bnoordhuis
Member

Looks like related CI failures on Windows and Linux.

@BridgeAR
Member Author

I fixed the CI issues by skipping the test on 32-bit systems. The maximum string size can otherwise easily be reached within one second.

CI https://ci.nodejs.org/job/node-test-pull-request/14931/

doc/api/util.md Outdated
@@ -360,6 +360,12 @@ stream.write('With ES6');
<!-- YAML
added: v0.3.0
changes:
- version: REPLACEME
pr-url: https://github.com/nodejs/node/pull/REPLACEME
description: The `maxClockTime` option is supported now.
Contributor

Is this obsolete? I can't find this option in the doc.

doc/api/util.md Outdated
description: The `maxClockTime` option is supported now.
- version: REPLACEME
pr-url: https://github.com/nodejs/node/pull/17907
description: The `depth` default changed to `Infinity`.
Contributor

Is this reverted? The doc still states 2.

@BridgeAR
Member Author

It would be nice to get some reviews here :)

@BridgeAR
Member Author

Rebased due to conflicts.

CI https://ci.nodejs.org/job/node-test-pull-request/15078/

This solves a big issue with util.inspect and it would be great if it could get some more attention. Shall I close this PR and reopen it? @nodejs/tsc PTAL

Member

@mcollina mcollina left a comment

LGTM

lib/util.js Outdated
@@ -302,24 +303,29 @@ function inspect(value, opts) {
maxArrayLength: inspectDefaultOptions.maxArrayLength,
breakLength: inspectDefaultOptions.breakLength,
indentationLvl: 0,
compact: inspectDefaultOptions.compact
compact: inspectDefaultOptions.compact,
// If minComplexity is a negative value it will just immediately triggers
Member

typo: immediately trigger

doc/api/util.md Outdated
complexity while inspecting, the inspection is continued up to one more
second. If the inspection does not complete in that second, it will limit
all further inspection to the absolute minimum and an `INSPECTION_ABORTED`
warning is emitted. **Default:** `1e5`.
Member

I’d prefer it if we could split this into one option that sets a complexity maximum and one that sets an overall time limit

And ideally both with value Infinity so that we can split the semver-majorness out into a separate PR

Member Author

I am fine with setting this to Infinity first and then changing the default.

However, I would really prefer to have a single option for this. We already have quite a few options and IMO making this super fine-grained does not really help. I guess we could already call this maxComplexity instead of min while keeping the functionality as is and just rewording it a bit.

We could also bind the extra time to the requested complexity, as in: timeout after Math.floor(opts.complexity / 100) ms.

Member

I think the rename to maxComplexity is a good idea.

I do think we should always provide some way to limit the complexity while still getting absolutely deterministic results (i.e. no timer). This PR optimizes for the common case of Node.js usage, but for some applications determinism is a paramount property and we should not disable those use cases.

I agree that we already have a larger number of options here than would be ideal, but I think making these two options would be the best approach atm.

Member Author

I thought again about the necessity for determinism: if an application requires that, it would very likely require the whole object to be inspected. Therefore it would require setting the value to Infinity. The reason is that the application asked for the whole inspection, nothing else. Having a limited output due to a complexity limitation on Node.js' side would therefore always be an issue.

So I still believe we should keep the functionality as I implemented it right now. I also thought about overloading the input type to accept numbers as a budget and strings with ms at the end as a time budget, but I withdrew that after looking at it again from the perspective of the described application.

@addaleax
Member

This solves a big issue with util.inspect and it would be great if it could get some more attention.

I think it’s fine to ping @nodejs/collaborators for a change like this :)

lib/util.js Outdated
@@ -76,7 +76,8 @@ const inspectDefaultOptions = Object.seal({
showProxy: false,
maxArrayLength: 100,
breakLength: 60,
compact: true
compact: true,
minComplexity: 1e5
Member

Can you add a trailing comma? Ditto on line 309.

Member Author

I would like to keep this as it currently is. I agree that it would be better to change this but it does not seem like others agree. So I would only want to change this if eslint/eslint#10350 gets implemented and we use that rule.

Member

Okay, fair enough.

I've been surreptitiously sneaking in commas though. Don't tell.

}
if (ctx.time === undefined) {
ctx.time = process.hrtime();
} else if (getClockTime(ctx.time) > 1e3) {
Member

Is this necessary? Leaving out wall time tracking results in a simpler algorithm.

Member Author

The point is: without this it is hard to tell how much time this will actually need on different CPUs. Using the wall time is future-proof by always delivering the best result with a best-effort mechanism.

Member

What I mean is, is tracking any kind of time necessary?

What you call complexity, I would call budget. You start with a fixed budget and spend it until it's gone.

Since you have that kind of logic in place already, and since it's deterministic and clock agnostic, why also track time?

Member Author

@BridgeAR BridgeAR May 28, 2018

Budget is indeed a good description. I am just not sure if this would be understood by looking at an option named budget.

Relying on the budget completely without also checking the wall time means I can measure the time it takes on my machine and estimate an arbitrary time limit that should not be exceeded in the default case (e.g. 1000-1250 ms as currently done). However, we also have Raspberry Pis and similar and their CPUs are much, much slower, so the same budget would be spent more slowly and therefore block the event loop longer. Our CPUs in general get faster over time with new inventions though, so the current default case for a fast machine would soon drop and the budget would be spent faster, meaning the maximum time the event loop is allowed to be blocked would drop to e.g. 250 ms instead of the current limit.

Using both together makes sure that it will always be around the 1000-1250 ms range. That is the reason why I would like to have the combination of both, even though it is not fully deterministic after the complete budget is spent.

Member

That's a fair point. Have you considered splitting it out into two separate options, one for budget/complexity and one for wall time? Or are you saying that you feel a single option is best?

Member Author

@BridgeAR BridgeAR May 28, 2018

I feel a single option is best for two reasons:

  • We already have too many options in util.inspect and the more we add, the more complicated it gets for the user. I believe we should just have very solid defaults while allowing the user to also change some things that are definitely necessary in some cases.
  • I do not see a necessity for fine-grained control over this. The option is meant to prevent the event loop from being blocked for too long while being able to also opt out of that or to tighten the rule further (by setting the budget to <= 0, even though the minimum time is still set to 1 second). The question is: is it necessary for the user to allow util.inspect to bail out before one second has passed? I personally doubt that.

'INSPECTION_ABORTED'
);

util.inspect(obj, { depth: Infinity });
Member

Can we add tests that test the limits here?

I think we need a test for the complexity parameter to make sure it works

Member Author

It is tested implicitly here. Without the option being set the test would crash.

Member

@BridgeAR I realize, but for example if we break the code that parses the option then tests wouldn't catch it - so if it's not a lot of trouble I think adding a second test that sets the configuration explicitly is a good idea.

Definitely not blocking on this though.

To limit the maximum computation time this adds a `minComplexity`
option. Up to that complexity any object will be fully inspected.
As soon as that limit is reached the time is going to be measured
for the further inspection and the inspection is limited to the
absolute minimum after one second has passed.

That makes sure the event loop is not blocked for too long while
still producing a good output in almost all cases.

To make sure the output is almost deterministic even though a timeout
is used, it will only measure the time every 1e5 complexity units.
This also limits the performance overhead to the minimum.
@BridgeAR
Member Author

Closing in favor of #22756

@BridgeAR BridgeAR closed this Sep 12, 2018
@BridgeAR BridgeAR deleted the add-util-compute-time branch January 20, 2020 11:39
Labels
notable-change PRs with changes that should be highlighted in changelogs.
semver-major PRs that contain breaking changes and should be released in the next major version.
util Issues and PRs related to the built-in util module.
Development

Successfully merging this pull request may close these issues.

npm: last nightly / v8-canary make npm update unusable