modules/aws/vpc - Better security for master nodes #2147
Conversation
Can one of the admins verify this patch?
@mrwacky42 IMO this is actually not a good idea. Connections via an ELB are, understandably, load-balanced across all the pool nodes, so SSHing through a load balancer typically results in broken SSH connections after a few minutes. You should always SSH either:
We intend to add support for the second option very soon to address your concerns. Closing for now.
@squat - I think you misunderstand the purpose of this PR, or I misunderstand your response. Currently, any IP on the entire internet can connect to the masters' public IP addresses. We see lots of SSH brute-forcing attempts, and I assume others are attempting to attack the API servers on ports 80/443. As far as I can tell, nothing external to the cluster talks directly to ports 80/443 on the master nodes. This change limits the security group to only the VPC CIDR network. This grants access to:

We have configured our test cluster this way to reduce our attack surface.
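The change described here can be sketched in Terraform. This is a minimal, hypothetical example of scoping a master ingress rule to the VPC CIDR rather than the whole internet; the resource and variable names are illustrative, not the installer's actual ones:

```hcl
# Hypothetical sketch: allow API ingress only from within the VPC,
# replacing a 0.0.0.0/0 rule. Names here are illustrative.
resource "aws_security_group_rule" "master_ingress_https" {
  type              = "ingress"
  protocol          = "tcp"
  from_port         = 443
  to_port           = 443
  cidr_blocks       = ["${var.vpc_cidr_block}"] # e.g. "10.0.0.0/16"
  security_group_id = "${aws_security_group.master.id}"
}
```

External clients would then reach the API only through the ELB, which forwards into the VPC.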
Ah, sorry, rereading this now I see that I misunderstood this PR as limiting SSH rather than HTTP/HTTPS. Sure, this could definitely make sense. However, the ELB forwards all requests to the API servers, so how much better is this really?
Can one of the admins verify this patch?
It's better because we limit access to the API with security groups attached to the ELB!
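A sketch of that idea, assuming a hypothetical dedicated security group for the API ELB (names and variables are illustrative, not the installer's actual configuration):

```hcl
# Hypothetical sketch: control API access at the load balancer with
# its own security group, while the masters only accept VPC traffic.
resource "aws_security_group" "api_elb" {
  vpc_id = "${var.vpc_id}"

  ingress {
    protocol    = "tcp"
    from_port   = 443
    to_port     = 443
    cidr_blocks = "${var.allowed_api_cidrs}" # e.g. your office ranges
  }
}

resource "aws_elb" "api" {
  name            = "api"
  subnets         = ["${var.public_subnet_ids}"]
  security_groups = ["${aws_security_group.api_elb.id}"]

  listener {
    instance_port     = 443
    instance_protocol = "tcp"
    lb_port           = 443
    lb_protocol       = "tcp"
  }
}
```

With this shape, tightening or widening API exposure is a change to the ELB's security group, not to the master nodes'.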
We made some changes (#2082) to the testing process. Please rebase onto current master, so that the
For a VPC with master nodes in a public subnet, they are reachable from the entire internet. Usually, one will communicate with them via an ELB, and not directly. This change limits access to 80/443 TCP and ICMP to only nodes within the VPC and cluster.
force-pushed af99afc to e7bd29a
rebased

retest this please
And in the UPI CloudFormation templates too.

We've allowed ICMP ingress for OpenStack since 6f76298 (OpenStack prototype, 2017-02-16, coreos/tectonic-installer#1), which did not motivate the ICMP ingress. Allowing ICMP ingress for AWS dates back to b620c16 (modules/aws: tighten security groups, 2017-04-19, coreos/tectonic-installer#264). The master rule was restricted to the VPC in e7bd29a (modules/aws/vpc - Better security for master nodes, 2017-10-16, coreos/tectonic-installer#2147). And the worker rules were restricted to the VPC in e131a74 (aws: fix ICMP ACL, 2019-04-08, openshift#1550), before which a typo had blocked all ICMP ingress.

There are reasons to allow in-cluster ICMP, including Path MTU Discovery (PMTUD) [1,2,3]. Folks also use ping to troubleshoot connectivity [4].

Restricting this to in-cluster security groups will avoid exposing ICMP ports to siblings living in shared VPCs, as we move towards allowing the installer to launch clusters in a pre-existing VPC. It might also block ICMP ingress from our load balancers, where we probably want PMTUD and possibly other ICMP calls. I'm not sure if there's a convenient way to allow access from the load balancers while excluding sibling clusters that share the same VPC, but this commit is my attempt to get that.

[1]: http://shouldiblockicmp.com/
[2]: https://tools.ietf.org/html/rfc1191
[3]: https://tools.ietf.org/html/rfc8201
[4]: https://bugzilla.redhat.com/show_bug.cgi?id=1689857#c2
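The security-group-scoped approach this commit describes (allowing ICMP from in-cluster groups rather than a whole CIDR) can be sketched like this; the `master` and `worker` security group names are hypothetical:

```hcl
# Hypothetical sketch: allow ICMP to masters only from in-cluster
# security groups, not from the whole VPC CIDR, so sibling clusters
# in a shared VPC cannot reach these ports. Names are illustrative.
resource "aws_security_group_rule" "master_ingress_icmp_from_worker" {
  type                     = "ingress"
  protocol                 = "icmp"
  from_port                = -1 # all ICMP types
  to_port                  = -1 # all ICMP codes
  source_security_group_id = "${aws_security_group.worker.id}"
  security_group_id        = "${aws_security_group.master.id}"
}
```

As the commit notes, a rule keyed to a source security group would not match ICMP originating from load balancers outside those groups, which is the unresolved trade-off.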