
Support for the --audit-level flag passing to npm audit #4

Closed
archfz opened this issue May 10, 2020 · 6 comments
Labels: enhancement (New feature or request), help wanted (Extra attention is needed)

archfz commented May 10, 2020

It would be nice to only consider vulnerabilities at or above a given severity level.

jeemok (Owner) commented May 10, 2020

Hmm, it would be hard to implement with the current method, which reads the vulnerability IDs from the generated report using a simple regex:

// Match the advisory URLs that appear in the plain-text report.
const SPLIT_REGEX = /(https:\/\/(nodesecurity.io|npmjs.com)\/advisories\/)/;
...
// Split on those URLs and read the numeric ID that follows each one.
const rawIds = data.split(SPLIT_REGEX).map(str => str.substring(0, 4).trim());

One way I can think of doing this is to collect all the severity levels (low, high, critical, etc.) along with all the vulnerability IDs and map them together, but that might not be 100% accurate. I can add the --audit-level flag and pass it into the command, but it seems the report will still be generated with all of the vulnerabilities.

I'll try to think of something for this; meanwhile, any ideas / PRs are welcome :)
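
A rough sketch of that severity-mapping idea over the plain-text report (the PAIR_REGEX and mapSeverityToIds names are hypothetical, not project code, and this is exactly where the accuracy concern bites: a severity word and an advisory URL are only paired by proximity):

// Hypothetical pass over the plain-text report: pair each advisory ID with
// the nearest preceding severity word. Non-greedy matching can still pair
// across advisory tables, which is why this is not 100% accurate.
const PAIR_REGEX = /(low|moderate|high|critical)[\s\S]*?https:\/\/(?:nodesecurity\.io|npmjs\.com)\/advisories\/(\d+)/g;

function mapSeverityToIds(reportText) {
  const pairs = [];
  let match;
  while ((match = PAIR_REGEX.exec(reportText)) !== null) {
    pairs.push({ severity: match[1], id: Number(match[2]) });
  }
  return pairs;
}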

jeemok added the enhancement and help wanted labels on May 10, 2020
IPWright83 (Contributor) commented:

I've put a pull request together that adds support for this, as well as accommodating a production flag that I've raised another issue for.

The simplest approach was to use the JSON output for the processing.
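
A minimal sketch of that approach, assuming the helper shells out to npm audit --json and filters the advisories map of npm 6's JSON report by severity (getVulnerabilityIds and the threshold handling are illustrative, not the actual PR):

const { exec } = require('child_process');

// Severities in ascending order, mirroring npm's --audit-level values.
const SEVERITY_LEVELS = ['info', 'low', 'moderate', 'high', 'critical'];

// Illustrative helper: keep only advisory IDs at or above the given level.
function getVulnerabilityIds(auditJson, auditLevel = 'low') {
  const threshold = SEVERITY_LEVELS.indexOf(auditLevel);
  return Object.values(auditJson.advisories || {})
    .filter(advisory => SEVERITY_LEVELS.indexOf(advisory.severity) >= threshold)
    .map(advisory => advisory.id);
}

exec('npm audit --json', (error, stdout) => {
  // npm audit exits non-zero when vulnerabilities exist,
  // so `error` alone does not mean the command itself failed.
  const report = JSON.parse(stdout);
  console.log(getVulnerabilityIds(report, 'high'));
});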

jeemok self-assigned this on Jul 9, 2020
jeemok (Owner) commented Jul 9, 2020

The JSON approach looks great! Thank you for contributing, @IPWright83.

jeemok (Owner) commented Jul 9, 2020

I tested with a few repositories. Some of my old repositories have many vulnerability warnings, which I think resulted in the buffer exceeding its limit and the child process being terminated:

Here is the incomplete JSON buffer I get:

        {
          "id": 1490,
          "path": "jest>jest-cli>@jest/core>jest-haste-map>sane>micromatch>extglob>snapdragon>base>define-property>is-descriptor>is-accessor-descriptor>kind-of",
          "dev": true,
          "optional": false,
          "bundled": false
        },
        {
          "id": 1490,
          "path": "babel-jest>@jest/transform>micromatch>extg
undefined:22005
          "path": "babel-jest>@jest/transform>micromatch>extg
                                                             

SyntaxError: Unexpected end of JSON input
    at JSON.parse (<anonymous>)

So I added some logging around the buffer and got this:

  • Each data chunk is 8,192 bytes (8 KB)
  • The total JSON buffer size is 22,316,959 bytes (about 22 MB)

8,192 bytes is the maximum size of each chunk, and I don't think we can increase it (we don't actually need to deal with the chunk size anyway). We only have to raise the maxBuffer size above the default. The current approach works great; I'll add the code to increase the max size.

Thanks again!
Mok

jeemok closed this as completed on Jul 9, 2020
IPWright83 (Contributor) commented Jul 9, 2020

@jeemok just to check, after re-reading this:

The buffer size shouldn't be a problem with the JSON approach, since it just keeps appending to the jsonBuffer string via stdout. I think that will keep streaming regardless of buffer size, or have I misunderstood? That's specifically why I ended up listening for the end of the stream before parsing the JSON: it was invalid unless you'd obtained the whole lot.
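
A sketch of that accumulate-then-parse pattern, assuming child_process.spawn, which streams without a maxBuffer cap (exec, by contrast, enforces maxBuffer internally even when stdout is consumed through 'data' events, which would explain the truncation above):

const { spawn } = require('child_process');

const child = spawn('npm', ['audit', '--json']);
child.stdout.setEncoding('utf8'); // avoid splitting multi-byte characters across chunks

let jsonBuffer = '';

// Chunks arrive in roughly 8 KB pieces, so the JSON is only
// valid once the whole stream has been received.
child.stdout.on('data', chunk => {
  jsonBuffer += chunk;
});

child.on('close', () => {
  // Parse only after the stream has ended.
  const report = JSON.parse(jsonBuffer);
  console.log(Object.keys(report.advisories || {}).length, 'advisories');
});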

jeemok (Owner) commented Jul 9, 2020

@IPWright83 I was testing with an old repository of mine that has lots of vulnerabilities, and the JSON was invalid because the child process closed before all the chunks from the stream had been received. The default maximum maxBuffer size allowed is 1,048,576 bytes (1024 * 1024), as stated in the documentation:

maxBuffer <number> Largest amount of data in bytes allowed on stdout or stderr. If exceeded, the child process is terminated and any output is truncated. See caveat at maxBuffer and Unicode. Default size: 1024 * 1024.

So what I did was increase it to 50 MB, which should work for most cases. Actually, maybe we should add handling there: if it ever exceeds 50 MB, we should throw a warning or something.
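
A sketch of that fix, assuming the command is run through child_process.exec; the 50 MB figure comes from the comment above, and the warning branch checks the ERR_CHILD_PROCESS_STDIO_MAXBUFFER code that recent Node versions attach to the truncation error:

const { exec } = require('child_process');

// Raise maxBuffer from the 1 MB default (1024 * 1024) to 50 MB.
const MAX_BUFFER_SIZE = 1024 * 1024 * 50;

exec('npm audit --json', { maxBuffer: MAX_BUFFER_SIZE }, (error, stdout) => {
  if (error && error.code === 'ERR_CHILD_PROCESS_STDIO_MAXBUFFER') {
    // Even 50 MB was not enough: surface a warning instead of failing silently.
    console.warn('npm audit output exceeded 50 MB; the report was truncated.');
    return;
  }
  // npm audit exits non-zero when vulnerabilities are found,
  // so `error` alone does not mean the command itself failed.
  const report = JSON.parse(stdout);
  console.log(Object.keys(report.advisories || {}).length, 'advisories found');
});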
