Exploring security policies in Node core #327
This sounds like a great idea to me, and it's good to get the discussion into an issue where everybody can contribute.
We might want to involve folks from the Module Team here. I have to check if I can free some time to spend on this issue. But that's super exciting.
I also chimed in with concerns about exceptions to policies and resource-integrity checks. In particular I want to evaluate the granularity of permissions and sharing. Node's core is not incredibly robust and can easily be mutated in ways that make it less robust; leaking permissions seems realistic, so I think these measures are better suited to catching accidental misuse. Real security is going to keep relying on auditing of code, which we also cannot currently enforce on code being evaluated within Node.
What do I need to do to get into this conversation? I've been working on how to reserve privileges to some modules but not others. That seems to require some notion of module identity. Being able to open a sink like […]. Finally, gating access to commonly misused sources of authority like […].
@mikesamuel: I'm trying to get the conversation into the issue tracker as much as possible so that it's accessible and the history is easy to review as we move forward exploring these topics. So consider yourself part of the conversation! On Slack, we've started to talk about maybe meeting up at Node Summit. Will you be around by any chance? (By the way, to join our Slack, go to https://nodejs-security-wg.herokuapp.com/, but I'd prefer to move as much of the conversation as possible to this issue tracker.) Thanks for linking to your module. Do you think that's a good model to aim for for the initial policies in core (I think this is one of the first things we need to decide)? I'll take a closer look at it in the coming days.
The worst thing we could do would be to ship a mechanism that gives users a false sense of security. I believe a fine-grained per-module permission system would require at least long stack traces everywhere, which is costly even with […]. I'll be staying in SF a few days after Node.js Summit and would be available for a physical + hangout work session on that topic.
@vdeturckheim: if I'm understanding you correctly, you're saying that we should only consider policies resilient against malicious attackers? I think that's worth exploring (and in fact is what we do at Intrinsic, though the other details are quite different: we have very fine-grained policies and many isolation contexts), but that implies a lot of other complications. For example, for the PoC you described, would you disallow all native modules and child processes (otherwise a malicious attacker can just reimplement that functionality themselves)? We'd also need to make changes to […].

In my opinion, per-module policies don't make sense combined with a malicious-attacker model (without massive semantic changes): modules need to interact too much with each other, and it will be very difficult for users to reason about the effect of the policies.

(btw, I'm out of town starting July 27, and unavailable on the 25th, so I'd prefer to meet on the 24th or 26th if possible)
An in-person meeting would be very useful. I'll also be speaking about binding bugs at Node Summit, FWIW. My 2c on the above discussion(s): I think scoping the attacker model and the kinds of policies (at a high level) we want is the most important thing to try to tackle first. I'm worried about introducing mechanisms before we figure these things out.
@drifkin said
Yep. I'm talking at 11:35 on day 1 about "Improving Security by Improving the Framework." </plug shameless>
Yes. I think that running user code in the same realm as […]
Without access to the prior discussion I'm not sure I can answer that question. In my experience, in-realm language based enforcement mechanisms and boundary mechanisms like Intrinsic's filtering membranes or syscall filters are often complementary. +1 to what @deian said. Expanding on the different models and exploring how they fit together would be a good use of time at the summit.
This is a good point. I think we need to clearly communicate what we provide w.r.t. confidentiality. I think we can provide plenty of integrity improvements though.
I presented a demo of some of this in my recent jsconf.eu talk.
https://github.com/mikesamuel/jsconf-eu-2018
<side_note>@mikesamuel I think we have to grab a drink at Node Summit to discuss your last presentation and compare how Sqreen works with that.</side_note> I will be in SF from Sunday July 22nd to Saturday July 28 (midday), and I am mostly needed at the conference on day 2.
@vdeturckheim
I'm around on the 26th. I'm pretty busy the three days of the conference.
My first 2 cents is that we should include thinking about what "hooks" in Node core would allow additional controls to be added, as opposed to everything being part of core itself (in keeping with the small-core philosophy). Might not be possible due to overhead, but worth including as part of the discussion/thinking.
@mhdawson definitely. I don't want us to end up with a domain-like feature with impacts everywhere in the codebase. However, that might be an optimistic wish.
From this thread, it sounds like the 26th (the day after Node Summit) would work the best to meet up to discuss policies. /cc @mhdawson @mikesamuel @vdeturckheim Who else is interested? We'd be happy to host at the Intrinsic offices (we're in the financial district in SF). @bmeck are you still interested in joining remotely?
LGTM. I have a meeting at 10 in SoMa, but I should be able to reschedule it if needed.
Yes, I would like to attend. That is the last day of TC39, and I'm not sure, but there might be a fly on the wall or 2 that want to listen in from their end.
@drifkin +1
It was an awesome experience attending the security working group meeting at Intrinsic; thank you very much to the organizers! After listening to the discussion, I have these questions / observations.

Is the purview of the working group limited to malicious code injection and vulnerabilities thereon? Doesn't it cover application and platform security in general in Node's context? Or was this sitting focusing only on the malicious code injection topic?

[Context from another platform, Java] Java treats the SDK's own Java APIs as trusted, and everything else as untrusted. A security manager is defined that is programmable and tunable to define policies on Subjects, Principals and Users, with the granularity of policy going as far down as property-access restrictions on objects. A JVM-wide master object anchors all the security operations in the application. The common target of attackers is to […]

Within Node's context of protection from malicious code, I believe it is important to define the scope and set the premise before we examine the implementation details, for example: […]
I guess a consensus from the meeting was to treat built-in APIs and the application as trusted, and everything else (modules, dynamic code) as untrusted?
My recollection was that no consensus was reached on that question, but that a consensus was reached that resource integrity is within security-wg's purview. I think there was a consensus that built-in modules are, and will continue to be, confusable -- meaning that built-in modules do not maintain invariants in the face of things like malicious prototype monkeypatching and stack-alignment attacks. No one seemed skeptical when it was claimed that changing that would require a large & ongoing effort by maintainers. So builtin modules do currently trust application code.

My argument throughout is that trusted/untrusted need not be a binary distinction. I think we can make progress on many fronts by, during production, […], and that we ought independently to pursue efforts to limit ecosystem-level threats due to abuse of developer commit privileges.
No consensus. I answer no, and believe there are use cases for […]
My understanding is that the Node.js model has always been to assume trusted code. It does not have an equivalent to a security manager, so any code can use any of the available APIs. Unlike the browser, you actively install code locally as opposed to executing code that is dynamically pulled from external sources. A change to this assumption would be a fundamental change, and during the meeting we circled a few times, coming back (at least in my understanding) to a consensus that it is not really feasible to change it in Node core (which matches up with what @mikesamuel said above). As stated above, though, it does not mean that we can't still do things that will improve the security posture when running Node.js.
Thanks @mikesamuel - that clarifies many things. However, I should admit that (not being a security expert) I do not follow a few of the terms.
Yes, I think it is deeply rooted in the language semantics itself: JS being dynamically typed implies objects are dynamic, and regulating object access and transformation in the pretext of security does not seem to be optimal or maintainable.
Can you please provide an example for this? (confusable inputs)
This is where I see a practical difficulty: assume I have to use a module […]
So from a consumer point of view, defining access control may be painful. It may be great if this is achieved through a trusted software authority
Agree, makes perfect sense! Also, how about defining security-adherence policies and best practices (and certifying thereon) for modules? @mhdawson - thanks, agree: defining and applying security policies in Node core is not i) feasible, ii) bulletproof, iii) maintainable. So leaving the core as an efficient JavaScript execution platform, with security policies defined, scoped, and implemented at the module level (source, load, production), looks like the path forward.
@erights did explain how Frozen Realms can address prototype poisoning. I was stating my sense of the room, though Mark and I might be more optimistic than that. Putting builtin code in a separate realm is not a trivial change though.
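As a rough sketch of the poisoning problem and the freeze-the-intrinsics idea (this is not Frozen Realms itself, just `Object.freeze` applied to a couple of intrinsics up front):

```javascript
// Freeze some intrinsics before any third-party code runs. A real
// frozen-realms setup would freeze all primordials, not just these two.
Object.freeze(Object.prototype);
Object.freeze(Array.prototype);

// Later-loaded malicious code attempts prototype poisoning:
try {
  Array.prototype.map = function () { return ['pwned']; };
} catch (e) {
  // Strict mode throws on assignment to a frozen property;
  // sloppy mode fails silently. Either way the patch does not stick.
}

console.log([1, 2].map((x) => x * 2)); // [ 2, 4 ] -- the builtin survived
```

Freezing primordials protects the integrity of the builtins, but as noted above it does not help with polymorphic inputs or C++ binding bugs, and it can break libraries that legitimately patch prototypes.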
Sorry. I didn't mean to talk about "confusable inputs."
I don't think this is the case. I've worked on a team that has managed these kinds of access controls for a much larger application group. If a module doesn't explicitly require a dependency, then we can assume, absent evidence to the contrary, that it doesn't require it. We have pretty reliable ways to find contrary evidence -- run the tests and see what the module does. We can recommend ways for library developers who get reports that access was denied in production -- add more tests.
Does `m` in your scenario directly […]?
Is the best practices badge project aiming towards some of these goals?
I think @mhdawson was talking specifically about whether we treat module code as malicious. That's a separate issue from whether we define and apply security policies in core. For example, resource integrity -- making sure that only code that should load actually loads -- could be done in core. I think it is feasible to get bulletproof, maintainable resource-integrity checks. And without resource integrity, there's no clear relationship between the code loaded by core and the module code that we're debating whether we trust or not. Features that enable many application-specific security stories are, IMO, good candidates for support in core where feasible & maintainable.
@deian did point out in his talk that polymorphic values are an oft-overlooked problem in both builtin module code and in C++ binding code. Frozen realms would not address that. For example:

```js
// Returns a valid identifier
function f(string) {
  if (!/^\d+$/i.test(string)) { throw new Error('...'); }
  return 'foo_' + string;
}

// The regexp coerces its argument once and the concatenation coerces it
// again, so a stateful toString() can pass validation and still inject:
f({ i: 0, toString() { return this.i++ ? ', evil()' : '123'; } });
```
Hey folks, just a heads up that @addaleax has made an implementation of access control policies: nodejs/node#22112
In terms of:
Yes, I mean introducing controls with the aim of getting to the point where we can treat module code as malicious. Controls may still be useful for other reasons.
Moving this here from nodejs/node#24908
This seems in line with the goals of Constraining APIs. Concerns were raised in the original issue about adding more details to […]. It was also mentioned in the context of […].
@robbiespeed Are you familiar with the sensitive-modules hooks previously discussed on this thread? The attack-review-testbed's package.json defines

```json
"sensitiveModules": {
  "child_process": {
    "advice": "Use safe/child_process.js instead.",
    "ids": [
      "main.js",
      "lib/safe/child_process.js"
    ]
  },
```

which wires into sensitive-module-hook.js, which vetoes unapproved loads of sensitive modules. (The attack-review-testbed would have prevented exfiltration by flatmap-stream because […])
A good high level summary of the issue and what's needed to address it is POLA Would Have Prevented the Event-Stream Incident by @katelynsills Several of us are now involved in designing such a module system for SES, for providing libraries --- including many legacy libraries --- least authority across JS hosting environments (Node, browsers, IoT, blockchain). We keep coming back to this incident as a revealing test case. |
@mikesamuel Just looking at that now; my understanding could be wrong, but does it require that the user explicitly define the whitelist for each of the dependencies it uses? This seems like an issue to me, as it would require a lot of manual work. Would packaged modules be able to define which core modules they require?

@erights Great article; it's basically what I was trying to achieve with my proposal. Is the example syntax from the article the current direction being explored?
I like the idea that in JS you are directly passing the dependencies upon use. However, it wouldn't play nice with import syntax. I guess in an ideal world all dependencies would have no access to core modules, and that would force library authors to write their APIs to be used like: […]
That combined with access control policies would probably cover the bases pretty well.
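The "pass the authority in" style sketched above might look like this (all names are invented for illustration):

```javascript
// Sketch: the library never requires core modules itself; the app hands
// it exactly the capabilities it needs, no more.
function makeConfigStore({ readFile, writeFile }) {
  return {
    load(path) { return JSON.parse(readFile(path, 'utf8')); },
    save(path, data) { writeFile(path, JSON.stringify(data)); },
  };
}

// The app decides how much authority to grant -- here, an in-memory stub
// instead of real fs access:
const files = new Map([['/app/config.json', '{"debug":true}']]);
const store = makeConfigStore({
  readFile: (p) => files.get(p),
  writeFile: (p, s) => files.set(p, s),
});

console.log(store.load('/app/config.json').debug); // true
```

The library's maximum authority is exactly what the caller hands it, which is the least-authority property the linked article argues for.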
Not literally. It is meant to be suggestive of the elements that need to somehow be present in any solution.

The hard problem we are currently wrestling with is the conflict between aspects of current widespread coding patterns: module-to-module imports, and package-to-package dependencies, come in graphs, not trees. A module can be imported by many other modules, and a package can be depended upon by many other packages. This raises the issue of where policy --- of what authority should be granted to the module or package --- should be expressed.

The example code from the paper suggests that the authority be provided at the importing site. However, in order for multiple importers to share the instance they are jointly importing, these separate grants would somehow need to be merged. Or, each import site that expresses such a grant could get its own instance. Neither of these works well for JS.

Or, the enclosing container --- the app as a whole --- could express what authority is granted to each of the packages in the app. This requires the app author to have global knowledge of all the packages being linked together to form the app.

Or, we can introduce more structure into the expression of inter-package dependencies, so that the locality of policy expression can follow the natural locality of knowledge as programmers separately develop packages that get linked together.

We expect to have something readable soon on our design.
The dev team has to whitelist modules that may use sensitive modules. Uses of non-sensitive modules need not be recorded anywhere. For minters, the dev team can use a combination of whitelists and self-nominate-and-second: […]
In teaching people about Node we always run into the issue of explaining why […]. What if Node.js could take advantage of this? What if […]?
And if a submodule has […]. Note: I am not suggesting that the whole fs is installed with that package; just a marker file that specifies that fs is supposed to work.
@martinheidegger I feel like this would solve the per-process limitation use case more than the per-dependency one. Actually, if fs is authorized, someone could dynamically add new files in the node_modules directories to get the authorizations, right?
@vdeturckheim I believe with a little tinkering a per-package solution (not module) could work with this, which I generally think is slightly more practical than per-module. Yes, the fs authorization gives a module super access, just by the fact that it could theoretically rewrite the nodejs binary and replace it with a hacked one. Node could put limitations in place though. |
@martinheidegger The target app's sensitive-modules config restricts access to fs. @vdeturckheim The target app's resource-integrity checks prevent loading of modified source files. An attacker would need to be able to abuse write access and generate a SHA-256 hash collision before the app would consider loading their modified source file. Since the target app locks down dynamic code loaders like […]
@mikesamuel the target app uses the […]
@martinheidegger Where would configuration related to granting privileges to modules go ideally? You're right that the setup is tricky at present. I hope to bundle a lot of the setup so that a blue-teamer can integrate it by choosing a la carte, but that level of ease of use is not there yet. |
@mikesamuel In one-on-one conversation I heard before from Node.js maintainers that the package.json is something they don't want to rely on as it is owned by NPM and not Node. Also, it is possible to write Node projects entirely without setting a package.json. Security might be relevant in those cases as well.
I would distinguish between two kinds of users: the "package developers" (devs) and the "package users" (users). Preferably they would share the effort: the devs need a good way to specify that their package requires a certain permission, while the users need to grant the permissions to the packages on/after install. To me, asking the devs to do a […]
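The devs' half of that split could be a declaration shipped with the package. The `permissions` field and its vocabulary below are invented for illustration, and (per the package.json-ownership concern above) it could live in a separate marker file just as well:

```json
{
  "name": "some-image-resizer",
  "version": "1.0.0",
  "permissions": {
    "fs": ["read"],
    "net": false,
    "child_process": false
  }
}
```

The users' half would then be an install-time prompt or a lockfile entry recording which of these requested permissions were actually granted.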
So … “Node.js maintainers” are not a homogenous group, and we have a lot of different opinions. If we’re talking about adding real per-package or per-module config, I wouldn’t discard package.json as an option; the biggest difficulty might be the fact that Node.js modules and npm packages don’t map 1:1. |
Oh totally. I just stated the reason why I thought about a solution outside the package.json; I don't remember the person's name, just the context. It's a long while ago, and that person may have changed their opinion by now. I personally also tended to go directly to the […]
@martinheidegger Thanks for explaining. I'll keep an ear out for arguments about which configuration is best placed where. I am looking for red-teamers to help stress test that. If the biggest problem with that code is that blue-team maintained configuration is in a sub-ideal place, I'll be very happy indeed :) |
any follow-up? |
there's a discussion going on in the Node.js WG Slack on #experimental-policies that you might want to jump on |
Closing the issue since there is ongoing permission-model work here: #791
In the security-wg Slack, we've been discussing what policies in Node core might look like: https://nodejs-security-wg.slack.com/archives/C9KTR110F/p1529928028000444
This discussion largely started due to Ryan Dahl's talk at JSConf EU 2018, where he gives an example that "your linter shouldn't get complete access to your computer and network".
In the Slack discussion, we talked about different kinds of policies, and different attacker models. @brycebaril from NodeSource talked about some of the policies they offer, and mentioned that he'd be interested in these coarse-grained policies being implemented in core.
I also chimed in with some thoughts about policies and attacker models, since this is mostly what we do at Intrinsic.
This is all very speculative, but moving forward, there's been interest from other members of the group in further exploring this concept.
I think it's reasonable to start the discussion with very coarse-grained policies (e.g., does this Node process get to use the network or not?). We'll need to decide the list of policies we'd like to support. We'll need to decide if we're defending against well-meaning, yet buggy code, or actively malicious code. And depending on that answer, we'll have lots of details to work through (e.g., if you turn off networking, are you still allowed to spawn child processes that might use the network?). And finally, once we know what we'd like to build, we can figure out if it's feasible.