Moving all security group rules to separate resources #63

Merged 2 commits on Aug 2, 2021
14 changes: 11 additions & 3 deletions README.md
@@ -65,6 +65,7 @@ Why not use a Kubernetes or other current cluster? For this, I can name a few re

- After Flatcar Container Linux [release 2905.2.0](https://kinvolk.io/flatcar-container-linux/releases/#release-2905.2.0) the Vault cluster stops working because [rkt](https://www.openshift.com/learn/topics/rkt) was deprecated, so all Vault module tags up to `v0.1.8` stopped working; see [#48](https://github.com/binlab/terraform-aws-vault-ha-raft/issues/48). Please update the module to the latest version and check the latest changes for compatibility with your configuration.
- From **August 1, 2021** `Flatcar Container Linux (Stable)` by owner `075585003325` was removed from the public AMI images and replaced by an **AMI Marketplace** image from owner `679593333241`. This means all previous tags and Terraform code stopped working; you need to update the module or configure the **AMI** manually via [ami_image](https://github.com/binlab/terraform-aws-vault-ha-raft#input_ami_image). More about this issue: [#60](https://github.com/binlab/terraform-aws-vault-ha-raft/issues/60), [#61](https://github.com/binlab/terraform-aws-vault-ha-raft/pull/61)
- Updating the **Vault** module to version `0.2.x` requires some manual work, as there are breaking changes; read more in [#61](https://github.com/binlab/terraform-aws-vault-ha-raft/pull/61), [#63](https://github.com/binlab/terraform-aws-vault-ha-raft/pull/63)
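For clusters affected by the AMI owner change, the image can be pinned explicitly instead of relying on the removed public-owner lookup. A minimal sketch — the registry source string is an assumption based on this repository's name, and the AMI ID is just the one used in this repo's sandbox example, so it is region-specific and must be verified:

```hcl
module "vault" {
  # Assumed registry source for this repository; verify before use
  source  = "binlab/vault-ha-raft/aws"
  version = "~> 0.2"

  # Pin the Flatcar Container Linux AMI explicitly to bypass the removed
  # owner `075585003325` lookup. This ID is only the sandbox example's
  # value and is region-specific.
  ami_image = "ami-0ad034613130b6344"
}
```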


## AWS Permissions
Expand Down Expand Up @@ -267,8 +268,15 @@ No modules.
| [aws_route_table_association.public](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/route_table_association) | resource |
| [aws_security_group.alb](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/security_group) | resource |
| [aws_security_group.node](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/security_group) | resource |
| [aws_security_group.public](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/security_group) | resource |
| [aws_security_group.vpc](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/security_group) | resource |
| [aws_security_group_rule.alb_egress_allow_nodes](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/security_group_rule) | resource |
| [aws_security_group_rule.alb_ingress_allow_clients](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/security_group_rule) | resource |
| [aws_security_group_rule.alb_ingress_allow_nodes](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/security_group_rule) | resource |
| [aws_security_group_rule.node_egress_allow_all](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/security_group_rule) | resource |
| [aws_security_group_rule.node_ingress_allow_alb](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/security_group_rule) | resource |
| [aws_security_group_rule.node_ingress_allow_peer](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/security_group_rule) | resource |
| [aws_security_group_rule.node_ingress_allow_public_http](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/security_group_rule) | resource |
| [aws_security_group_rule.node_ingress_allow_public_ssh](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/security_group_rule) | resource |
| [aws_security_group_rule.node_ingress_allow_ssh](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/security_group_rule) | resource |
| [aws_subnet.private](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/subnet) | resource |
| [aws_subnet.public](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/subnet) | resource |
| [aws_volume_attachment.node](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/volume_attachment) | resource |
@@ -381,10 +389,10 @@ No modules.
| <a name="output_cluster_url"></a> [cluster\_url](#output\_cluster\_url) | Cluster public URL with schema, domain, and port.<br>All parameters depend on input values and are calculated automatically<br>for convenience. Can be created separately outside the module |
| <a name="output_igw_public_ips"></a> [igw\_public\_ips](#output\_igw\_public\_ips) | List of Internet public IPs. If cluster nodes are determined to be<br>in the public subnet (Internet Gateway used), all external network<br>requests go via the public IPs assigned to the nodes. This list<br>can be used for configuring security groups of related services or<br>for connecting to the nodes via SSH when debugging |
| <a name="output_nat_public_ips"></a> [nat\_public\_ips](#output\_nat\_public\_ips) | NAT public IPs assigned as the external IPs for requests from<br>each of the nodes. Convenient for restricting applications,<br>audit logs, some security groups, or other IP-based security<br>policies. Note: if "node\_allow\_public" is set, each node gets<br>its own public IP, which is used for external requests.<br>If `var.nat_enabled` is set to `false`, returns an empty list. |
| <a name="output_node_security_group"></a> [node\_security\_group](#output\_node\_security\_group) | Node Security Group ID which allows connecting to "cluster\_port",<br>"node\_port" and "ssh\_port". Useful for debugging when a Bastion host<br>is connected to the same VPC |
| <a name="output_private_subnets"></a> [private\_subnets](#output\_private\_subnets) | List of Private Subnet IDs created in the module and associated with it.<br>Under the hood, a "NAT Gateway" is used for external connections on the<br>"Route 0.0.0.0/0". When the variable "node\_allow\_public" = false, this<br>network is assigned to the instances. In other cases, it is useful for<br>assigning another resource in this VPC, for example a database, which can<br>work behind a NAT (or without NAT and external connections at all, for<br>security reasons) and does not need to be exposed publicly with its own IP. |
| <a name="output_public_subnets"></a> [public\_subnets](#output\_public\_subnets) | List of Public Subnet IDs created in the module and associated with it.<br>Under the hood, an "Internet Gateway" is used for external connections<br>on the "Route 0.0.0.0/0". When the variable "node\_allow\_public" = true,<br>this network is assigned to the instances. In other cases, it is useful<br>for assigning another resource in this VPC, for example a Bastion host,<br>which needs to be exposed publicly with its own IP and not sit behind a NAT. |
| <a name="output_route_table"></a> [route\_table](#output\_route\_table) | Route Table ID assigned to the current Vault HA cluster subnet.<br>Depends on which subnetwork (Private or Public) is assigned to the instances. |
| <a name="output_ssh_private_key"></a> [ssh\_private\_key](#output\_ssh\_private\_key) | SSH private key generated by the module; its public key<br>part is assigned to each of the nodes. This is not recommended, as<br>the private key is kept in the open and stored in the state file.<br>Set the variable "ssh\_authorized\_keys" instead. Please note:<br>if "ssh\_authorized\_keys" is set, "ssh\_private\_key" returns an empty output |
| <a name="output_vpc_id"></a> [vpc\_id](#output\_vpc\_id) | VPC ID created in the module and associated with it. Needs to be exposed<br>for assigning other resources to the same VPC or for configuring<br>peering connections. If `vpc_id_external` is configured, it is returned |
| <a name="output_vpc_security_group"></a> [vpc\_security\_group](#output\_vpc\_security\_group) | VPC Security Group ID which allows connecting to "cluster\_port",<br>"node\_port" and "ssh\_port". Useful for debugging when a Bastion host<br>is connected to the same VPC |
<!-- END OF PRE-COMMIT-TERRAFORM DOCS HOOK -->
5 changes: 1 addition & 4 deletions alb.tf
@@ -2,10 +2,7 @@ resource "aws_lb" "cluster" {
name = format(local.name_tmpl, "alb")
internal = false
load_balancer_type = "application"
security_groups = [
aws_security_group.vpc.id,
aws_security_group.alb.id
]
security_groups = [aws_security_group.alb.id]

dynamic "subnet_mapping" {
for_each = [for value in aws_subnet.public : value.id]
6 changes: 1 addition & 5 deletions ec2.tf
@@ -21,11 +21,7 @@ resource "aws_instance" "node" {
: element([for value in aws_subnet.private : value.id], count.index)
)

vpc_security_group_ids = compact([
aws_security_group.vpc.id,
aws_security_group.node.id,
var.node_allow_public ? aws_security_group.public[0].id : "",
])
vpc_security_group_ids = [aws_security_group.node.id]

tags = merge(local.tags, {
Name = format(local.name_tmpl, format("node%d", count.index))
6 changes: 4 additions & 2 deletions examples/development-debugging-sandbox/main.tf
@@ -13,11 +13,13 @@ module "bastion" {
stack = "vault-debug"
vpc_id = module.vault.vpc_id
vpc_subnet_id = module.vault.public_subnets[0]
security_groups = [module.vault.vpc_security_group]
security_groups = [module.vault.node_security_group]
ec2_ssh_cidr = ["0.0.0.0/0"]
bastion_ssh_cidr = ["0.0.0.0/0"]
ec2_ssh_auth_keys = [data.local_file.ssh_public_key.content]
bastion_ssh_auth_keys = [data.local_file.ssh_public_key.content]

ami_image = "ami-0ad034613130b6344"
}


@@ -48,5 +50,5 @@ module "vault" {
debug_path = format("%s/.debug", path.module)
docker_tag = "1.8.0"

# ami_image = "ami-0bb5fc1412bbbb988"
# ami_image = "ami-0ad034613130b6344"
}
6 changes: 3 additions & 3 deletions outputs.tf
@@ -32,13 +32,13 @@ output "private_subnets" {
value = [for value in aws_subnet.private : value.id]
}

output "vpc_security_group" {
output "node_security_group" {
description = <<-EOT
VPC Security Group ID which allow connecting to "cluster_port",
Node Security Group ID which allow connecting to "cluster_port",
"node_port" and "ssh_port". Useful for debugging when Bastion host
connected to the same VPC
EOT
value = aws_security_group.vpc.id
value = aws_security_group.node.id
}

output "route_table" {
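With the `vpc_security_group` output renamed to `node_security_group`, downstream code that grants another service access to the cluster should reference the new output. A hedged sketch — the `bastion` security group resource and the hard-coded SSH port `22` are hypothetical, not part of this PR:

```hcl
# Grant a hypothetical bastion SSH access to the Vault nodes via the
# renamed output. Resource names and port here are illustrative only.
resource "aws_security_group_rule" "bastion_ssh_to_vault_nodes" {
  type                     = "ingress"
  from_port                = 22
  to_port                  = 22
  protocol                 = "tcp"
  source_security_group_id = aws_security_group.bastion.id
  security_group_id        = module.vault.node_security_group
}
```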
198 changes: 101 additions & 97 deletions security.tf
@@ -1,121 +1,125 @@
resource "aws_security_group" "vpc" {
name = format(local.name_tmpl, "vpc")
description = "Internal VPC Traffic"
vpc_id = local.vpc_id

tags = merge(local.tags, {
Description = "Internal VPC Traffic"
Name = format(local.name_tmpl, "vpc")
})
}
########################################################################
# ALB Security Group and Rules #
########################################################################

resource "aws_security_group" "alb" {
name = format(local.name_tmpl, "alb")
description = "Allow Public Inbound Traffic to ALB"
description = "Vault HA Cluster ALB"
vpc_id = local.vpc_id

ingress {
description = "Allow Public Clients Connection to Vault"
from_port = var.cluster_port
to_port = var.cluster_port
protocol = "tcp"
cidr_blocks = var.cluster_allowed_subnets
}

ingress {
description = "Allow Inbound Traffic from Nodes"
from_port = var.cluster_port
to_port = var.cluster_port
protocol = "tcp"
cidr_blocks = []
security_groups = [aws_security_group.vpc.id]
}

egress {
from_port = var.node_port
to_port = var.node_port
protocol = "tcp"
cidr_blocks = []
security_groups = [aws_security_group.vpc.id]
}

tags = merge(local.tags, {
Description = "Allow Public Inbound Traffic to ALB"
Description = "Vault HA Cluster ALB"
Name = format(local.name_tmpl, "alb")
})
}

resource "aws_security_group_rule" "alb_ingress_allow_clients" {
description = "Allow Clients Inbound Traffic from Public"
type = "ingress"
from_port = var.cluster_port
to_port = var.cluster_port
protocol = "tcp"
cidr_blocks = var.cluster_allowed_subnets
security_group_id = aws_security_group.alb.id
}

resource "aws_security_group_rule" "alb_ingress_allow_nodes" {
description = "Allow Nodes Inbound Traffic from Cluster"
type = "ingress"
from_port = var.cluster_port
to_port = var.cluster_port
protocol = "tcp"
source_security_group_id = aws_security_group.node.id
security_group_id = aws_security_group.alb.id
}

resource "aws_security_group_rule" "alb_egress_allow_nodes" {
description = "Allow Health Check Outbound Traffic to Nodes"
type = "egress"
from_port = var.node_port
to_port = var.node_port
protocol = "tcp"
source_security_group_id = aws_security_group.node.id
security_group_id = aws_security_group.alb.id
}

########################################################################
# Node Security Group and Rules #
########################################################################

resource "aws_security_group" "node" {
name = format(local.name_tmpl, "node")
description = "Allow ALB Inbound Traffic"
description = "Vault HA Cluster Node"
vpc_id = local.vpc_id

ingress {
description = "Allow Health Check from ALB"
from_port = var.node_port
to_port = var.node_port
protocol = "tcp"
cidr_blocks = []
security_groups = [aws_security_group.alb.id]
}

ingress {
description = "Allow Cluster Inbound Traffic"
from_port = var.node_port
to_port = var.peer_port
protocol = "tcp"
cidr_blocks = []
security_groups = [aws_security_group.vpc.id]
}

ingress {
description = "Allow SSH Connection from self VPC Security Group"
from_port = var.ssh_port
to_port = var.ssh_port
protocol = "tcp"
cidr_blocks = []
security_groups = [aws_security_group.vpc.id]
}

egress {
description = "Allow All Outbound Traffic"
from_port = 0
to_port = 0
protocol = "-1"
cidr_blocks = ["0.0.0.0/0"]
}

tags = merge(local.tags, {
Description = "Allow ALB Inbound Traffic"
Description = "Vault HA Cluster Node"
Name = format(local.name_tmpl, "node")
})
}

resource "aws_security_group" "public" {
resource "aws_security_group_rule" "node_ingress_allow_alb" {
description = "Allow Health Check Inbound Traffic from ALB"
type = "ingress"
from_port = var.node_port
to_port = var.node_port
protocol = "tcp"
source_security_group_id = aws_security_group.alb.id
security_group_id = aws_security_group.node.id
}

resource "aws_security_group_rule" "node_ingress_allow_peer" {
for_each = { for i, value in [var.node_port, var.peer_port] : i => value }

description = "Allow Peer Inbound Traffic from Self SG"
type = "ingress"
from_port = each.value
to_port = each.value
protocol = "tcp"
self = true
security_group_id = aws_security_group.node.id
}

resource "aws_security_group_rule" "node_ingress_allow_ssh" {
description = "Allow SSH Inbound Traffic from Self SG"
type = "ingress"
from_port = var.ssh_port
to_port = var.ssh_port
protocol = "tcp"
self = true
security_group_id = aws_security_group.node.id
}

resource "aws_security_group_rule" "node_egress_allow_all" {
description = "Allow All Outbound Traffic"
type = "egress"
from_port = 0
to_port = 0
protocol = "-1"
cidr_blocks = ["0.0.0.0/0"]
security_group_id = aws_security_group.node.id
}

resource "aws_security_group_rule" "node_ingress_allow_public_http" {
count = var.node_allow_public ? 1 : 0

name = format(local.name_tmpl, "public")
description = "Allow EC2 Instances Public"
vpc_id = local.vpc_id
description = "Allow Public HTTP Inbound Traffic to Nodes"
type = "ingress"
from_port = var.node_port
to_port = var.node_port
protocol = "tcp"
cidr_blocks = var.node_allowed_subnets
security_group_id = aws_security_group.node.id
}

ingress {
description = "Allow Public HTTP Connection to Vault on EC2"
from_port = var.node_port
to_port = var.node_port
protocol = "tcp"
cidr_blocks = var.node_allowed_subnets
}

ingress {
description = "Allow Public SSH Connection to Vault on EC2"
from_port = var.ssh_port
to_port = var.ssh_port
protocol = "tcp"
cidr_blocks = var.ssh_allowed_subnets
}
resource "aws_security_group_rule" "node_ingress_allow_public_ssh" {
count = var.node_allow_public ? 1 : 0

tags = merge(local.tags, {
Description = "Allow EC2 Instances Public"
Name = format(local.name_tmpl, "public")
})
description = "Allow Public SSH Inbound Traffic to Nodes"
type = "ingress"
from_port = var.ssh_port
to_port = var.ssh_port
protocol = "tcp"
cidr_blocks = var.ssh_allowed_subnets
security_group_id = aws_security_group.node.id
}
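Besides tidier code, moving the rules into standalone `aws_security_group_rule` resources avoids the circular reference between the ALB and node security groups: each rule is created after both groups exist, so they can point at each other. With inline `ingress`/`egress` blocks the same cross-reference would be a dependency cycle. A non-working sketch of the anti-pattern this avoids (port `8200` and the names are illustrative, not from this PR):

```hcl
# NOT valid: with inline rule blocks, each security group would need the
# other's ID at creation time, so Terraform reports a dependency cycle.
resource "aws_security_group" "alb" {
  name   = "alb"
  vpc_id = var.vpc_id

  egress {
    from_port       = 8200
    to_port         = 8200
    protocol        = "tcp"
    security_groups = [aws_security_group.node.id] # alb -> node
  }
}

resource "aws_security_group" "node" {
  name   = "node"
  vpc_id = var.vpc_id

  ingress {
    from_port       = 8200
    to_port         = 8200
    protocol        = "tcp"
    security_groups = [aws_security_group.alb.id] # node -> alb
  }
}
```

This is also why `node_ingress_allow_peer` uses `for_each` over a map keyed by list index: standalone rules can be generated per port (`var.node_port`, `var.peer_port`) while still targeting the same group with `self = true`.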