# Feature/iac scaffolding: Foundational Network Module (#2310)
# OWASP Nest - AWS Infrastructure Operational Guide

This document contains the complete operational guide for deploying and managing the OWASP Nest application infrastructure on AWS using Terraform. The project is designed to be modular, reusable, and secure, following industry best practices for managing infrastructure in a collaborative, open-source environment.

## Project Overview

This Terraform setup provisions a multi-environment (dev, staging, prod) infrastructure for the OWASP Nest application. It leverages a modular design to manage networking, compute, data, and storage resources independently.

- **Environments:** Code is organized under `environments/` to provide strong isolation between `dev`, `staging`, and `production`.
- **Modules:** Reusable components are defined in `modules/` for consistency and maintainability.
- **State Management:** Terraform state is stored remotely in an S3 bucket. State locking is managed by DynamoDB to prevent conflicts and ensure safe, concurrent operations by multiple contributors (see the backend sketch after this list).
- **Security:** All sensitive data is managed via AWS Secrets Manager. Network access is restricted using a least-privilege security group model.
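For reference, the remote state described above is wired up through a `backend "s3"` block in each environment. A minimal sketch, assuming the bucket and lock table created in the Initial Setup section below (the `key` path is illustrative):

```hcl
# Sketch only: the bucket, key, and table names must match your actual setup.
terraform {
  backend "s3" {
    bucket         = "owasp-nest-tfstate-<your-account-id>"
    key            = "environments/dev/terraform.tfstate"
    region         = "us-east-1"
    dynamodb_table = "owasp-nest-tf-locks"
    encrypt        = true
  }
}
```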
## Phased Rollout Plan

The infrastructure will be built and deployed in a series of focused Pull Requests to ensure each foundational layer is stable and well-reviewed before building on top of it.

1. **Phase 1: Foundational Networking (`modules/network`)**
2. **Phase 2: Data & Storage Tiers (`modules/database`, `modules/storage`, `modules/cache`)**
3. **Phase 3: Compute & IAM (`modules/compute`, `modules/iam`)**

This document will be updated as each phase is completed.

## Prerequisites

Before you begin, ensure you have the following tools installed and configured:

1. **Terraform:** [Install Terraform](https://learn.hashicorp.com/tutorials/terraform/install-cli) (version 1.3.0 or newer recommended).
2. **AWS CLI:** [Install and configure the AWS CLI](https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-configure.html). You must have an active AWS profile configured with credentials that have sufficient permissions to create the resources.
3. **pre-commit:** [Install pre-commit](https://pre-commit.com/#installation) (`pip install pre-commit`).
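To confirm the tools are available and your AWS credentials resolve correctly, you can run:

```bash
terraform version            # expect 1.3.0 or newer
aws sts get-caller-identity  # prints the account ID and ARN of the active profile
pre-commit --version
```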
## Initial Setup (One-Time)

This infrastructure requires an S3 bucket and a DynamoDB table for managing Terraform's remote state. These must be created manually before you can run `terraform init`.

**Note:** The following commands are examples. Please ensure the chosen S3 bucket name is globally unique.
1. **Define Environment Variables (Recommended):**
   ```bash
   # Run these in your terminal to make the next steps easier
   export AWS_REGION="us-east-1" # Or your preferred AWS region
   export TF_STATE_BUCKET="owasp-nest-tfstate-$(aws sts get-caller-identity --query Account --output text)" # Creates a unique bucket name
   export TF_STATE_LOCK_TABLE="owasp-nest-tf-locks"
   ```
2. **Create the S3 Bucket for Terraform State:**
   *This bucket will store the `.tfstate` file, which is Terraform's map of your infrastructure.*
   ```bash
   # Note: if AWS_REGION is us-east-1, omit the --create-bucket-configuration
   # flag entirely; S3 rejects a LocationConstraint of us-east-1.
   aws s3api create-bucket \
     --bucket ${TF_STATE_BUCKET} \
     --region ${AWS_REGION} \
     --create-bucket-configuration LocationConstraint=${AWS_REGION}

   aws s3api put-bucket-versioning \
     --bucket ${TF_STATE_BUCKET} \
     --versioning-configuration Status=Enabled
   ```
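   Optionally, you may also want to turn on default encryption and block all public access on the state bucket. A sketch, using the same variables:

   ```bash
   aws s3api put-bucket-encryption \
     --bucket ${TF_STATE_BUCKET} \
     --server-side-encryption-configuration '{"Rules":[{"ApplyServerSideEncryptionByDefault":{"SSEAlgorithm":"AES256"}}]}'

   aws s3api put-public-access-block \
     --bucket ${TF_STATE_BUCKET} \
     --public-access-block-configuration BlockPublicAcls=true,IgnorePublicAcls=true,BlockPublicPolicy=true,RestrictPublicBuckets=true
   ```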
3. **Create the DynamoDB Table for State Locking:**
   *This table prevents multiple people from running `terraform apply` at the same time, which could corrupt the state file.*
   ```bash
   aws dynamodb create-table \
     --table-name ${TF_STATE_LOCK_TABLE} \
     --attribute-definitions AttributeName=LockID,AttributeType=S \
     --key-schema AttributeName=LockID,KeyType=HASH \
     --billing-mode PAY_PER_REQUEST \
     --region ${AWS_REGION}
   ```
   *(Note: Switched to `PAY_PER_REQUEST` billing mode, which is more cost-effective for the infrequent use of a lock table.)*
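   As an optional check, you can wait for the table to become active before running `terraform init`:

   ```bash
   aws dynamodb wait table-exists --table-name ${TF_STATE_LOCK_TABLE} --region ${AWS_REGION}
   aws dynamodb describe-table --table-name ${TF_STATE_LOCK_TABLE} \
     --region ${AWS_REGION} --query 'Table.TableStatus' --output text  # expect: ACTIVE
   ```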
4. **Install Pre-commit Hooks:**
   *From the root of the repository, run this once to set up the automated code quality checks.*
   ```bash
   pre-commit install
   ```
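   You can then run all hooks once against the existing files to verify the setup:

   ```bash
   pre-commit run --all-files
   ```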
## Secret Population

This project provisions placeholders for secrets in AWS Secrets Manager but does **not** populate them with values. You must do this manually for each environment.

1. Navigate to the **AWS Secrets Manager** console in the correct region.
2. Find the secrets created by Terraform (e.g., `owasp-nest/dev/AppSecrets`, `owasp-nest/dev/DbCredentials`).
3. Click on a secret and choose **"Retrieve secret value"**.
4. Click **"Edit"** and populate the secret values.
- **For `DbCredentials`:** Use the "Plaintext" tab and create a JSON structure. Terraform will automatically generate a strong password, but you can override it here if needed.
  ```json
  {
    "username": "nestadmin",
    "password": "a-very-strong-and-long-password"
  }
  ```
- **For `AppSecrets`:** Use the "Plaintext" tab and create a key/value JSON structure for all required application secrets.
  ```json
  {
    "DJANGO_SECRET_KEY": "generate-a-strong-random-key-here",
    "DJANGO_ALGOLIA_WRITE_API_KEY": "your-algolia-key",
    "NEXT_PUBLIC_SENTRY_DSN": "your-sentry-dsn",
    "GITHUB_TOKEN": "your-github-token"
  }
  ```

5. Save the secret. Repeat for all required secrets and all environments.
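If you prefer the CLI, the same values can be written non-interactively. For example, assuming the JSON above has been saved locally as `app-secrets.json` (a hypothetical filename):

```bash
aws secretsmanager put-secret-value \
  --secret-id owasp-nest/dev/AppSecrets \
  --secret-string file://app-secrets.json
```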
## Deployment Workflow

To deploy an environment, navigate to its directory and run the standard Terraform workflow.

1. **Navigate to the Environment Directory:**
   ```bash
   cd terraform/environments/dev
   ```
2. **Create a `terraform.tfvars` file:**
   *Copy the example file. This file is where you will customize variables for the environment.*
   ```bash
   cp terraform.tfvars.example terraform.tfvars
   # Now edit terraform.tfvars with your specific values (e.g., your AWS account ID, desired region).
   ```
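   The exact variable set is defined by `terraform.tfvars.example`. As a sketch based on the network module's inputs, a filled-in file might look roughly like this (all values are illustrative):

   ```hcl
   project_prefix       = "owasp-nest"
   environment          = "dev"
   vpc_cidr             = "10.0.0.0/16"
   availability_zones   = ["us-east-1a", "us-east-1b"]
   public_subnet_cidrs  = ["10.0.1.0/24", "10.0.2.0/24"]
   private_subnet_cidrs = ["10.0.11.0/24", "10.0.12.0/24"]
   acm_certificate_arn  = "arn:aws:acm:us-east-1:123456789012:certificate/example"
   tags = {
     Project = "owasp-nest"
   }
   ```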
3. **Initialize Terraform:**
   *This downloads the necessary providers and configures the S3 backend.*
   ```bash
   terraform init
   ```
4. **Plan the Deployment:**
   *This creates an execution plan and shows you what changes will be made. Always review this carefully.*
   ```bash
   terraform plan
   ```
5. **Apply the Changes:**
   *This provisions the infrastructure on AWS. You will be prompted to confirm.*
   ```bash
   terraform apply
   ```
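   A common variation is to save the reviewed plan, apply exactly that plan, and then inspect the resulting outputs:

   ```bash
   terraform plan -out=tfplan
   terraform apply tfplan
   terraform output
   ```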
## Module Overview

- **`modules/network`**: Creates the foundational networking layer, including the VPC, subnets, NAT Gateway, and Application Load Balancer (a usage sketch follows this list).
- **`modules/database`**: Provisions the AWS RDS for PostgreSQL instance, including its subnet group and security group.
- **`modules/cache`**: Provisions the AWS ElastiCache for Redis cluster.
- **`modules/storage`**: Creates the S3 buckets for public static assets and private media uploads, configured with secure defaults.
- **`modules/compute`**: Provisions all compute resources: the ECS Fargate service for the frontend, the EC2 instance for cron jobs, and the necessary IAM roles and security groups for all services. It also configures the ALB routing rules.
- **`modules/iam`**: (Future) A dedicated module for creating the various IAM roles.
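For reference, an environment consumes the network module roughly like this; the source path and variable wiring are illustrative, and the authoritative input list lives in the module's own variable definitions:

```hcl
module "network" {
  source = "../../modules/network"

  project_prefix       = var.project_prefix
  environment          = var.environment
  vpc_cidr             = var.vpc_cidr
  availability_zones   = var.availability_zones
  public_subnet_cidrs  = var.public_subnet_cidrs
  private_subnet_cidrs = var.private_subnet_cidrs
  acm_certificate_arn  = var.acm_certificate_arn
  tags                 = var.tags
}
```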
The second file in this PR is the Terraform configuration for the foundational network module (`modules/network`), shown below.
```hcl
# VPC and Core Networking

resource "aws_vpc" "main" {
  cidr_block           = var.vpc_cidr
  enable_dns_support   = true
  enable_dns_hostnames = true

  tags = merge(
    var.tags,
    {
      Name = "${var.project_prefix}-${var.environment}-vpc"
    }
  )
}

resource "aws_internet_gateway" "main" {
  vpc_id = aws_vpc.main.id

  tags = merge(
    var.tags,
    {
      Name = "${var.project_prefix}-${var.environment}-igw"
    }
  )
}

# Subnets

# Deploys a public and private subnet into each specified Availability Zone.

resource "aws_subnet" "public" {
  count                   = length(var.public_subnet_cidrs)
  vpc_id                  = aws_vpc.main.id
  cidr_block              = var.public_subnet_cidrs[count.index]
  availability_zone       = var.availability_zones[count.index]
  map_public_ip_on_launch = true

  tags = merge(
    var.tags,
    {
      Name = "${var.project_prefix}-${var.environment}-public-subnet-${var.availability_zones[count.index]}"
    }
  )
}

resource "aws_subnet" "private" {
  count             = length(var.private_subnet_cidrs)
  vpc_id            = aws_vpc.main.id
  cidr_block        = var.private_subnet_cidrs[count.index]
  availability_zone = var.availability_zones[count.index]

  tags = merge(
    var.tags,
    {
      Name = "${var.project_prefix}-${var.environment}-private-subnet-${var.availability_zones[count.index]}"
    }
  )
}
```
```hcl
# Routing and NAT Gateway for Private Subnets

# We create a SINGLE NAT Gateway and a SINGLE private route table. This is more
# cost-effective and simpler to manage than a per-AZ NAT Gateway, at the cost of
# making that gateway's Availability Zone a single point of failure for
# private-subnet egress. (A per-AZ sketch follows this block.)

resource "aws_eip" "nat" {
  # Only one EIP is needed for the single NAT Gateway.
  tags = merge(
    var.tags,
    {
      Name = "${var.project_prefix}-${var.environment}-nat-eip"
    }
  )
}

resource "aws_nat_gateway" "main" {
  # Only one NAT Gateway, placed in the first public subnet for simplicity.
  # AWS keeps the gateway highly available within its AZ, but it does not fail
  # over to another AZ.
  allocation_id = aws_eip.nat.id
  subnet_id     = aws_subnet.public[0].id

  tags = merge(
    var.tags,
    {
      Name = "${var.project_prefix}-${var.environment}-nat-gw"
    }
  )

  depends_on = [aws_internet_gateway.main]
}

# A single route table for all public subnets.
resource "aws_route_table" "public" {
  vpc_id = aws_vpc.main.id

  route {
    cidr_block = "0.0.0.0/0"
    gateway_id = aws_internet_gateway.main.id
  }

  tags = merge(
    var.tags,
    {
      Name = "${var.project_prefix}-${var.environment}-public-rt"
    }
  )
}

# Associate the single public route table with all public subnets.
resource "aws_route_table_association" "public" {
  count          = length(aws_subnet.public)
  subnet_id      = aws_subnet.public[count.index].id
  route_table_id = aws_route_table.public.id
}

# A single route table for ALL private subnets, pointing to the single NAT Gateway.
resource "aws_route_table" "private" {
  vpc_id = aws_vpc.main.id

  route {
    cidr_block     = "0.0.0.0/0"
    nat_gateway_id = aws_nat_gateway.main.id
  }

  tags = merge(
    var.tags,
    {
      Name = "${var.project_prefix}-${var.environment}-private-rt"
    }
  )
}

# Associate the single private route table with all private subnets.
resource "aws_route_table_association" "private" {
  count          = length(aws_subnet.private)
  subnet_id      = aws_subnet.private[count.index].id
  route_table_id = aws_route_table.private.id
}
```
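For comparison (and not part of this PR), a per-AZ variant would create one NAT Gateway and one private route table per Availability Zone, trading higher cost for AZ-level fault isolation. A sketch, assuming one private subnet per AZ as the module already does:

```hcl
# Hypothetical per-AZ alternative: one EIP, NAT Gateway, and private route
# table per Availability Zone. Assumes the private subnet count equals the
# AZ count, matching the module's existing subnet layout.
resource "aws_eip" "nat_per_az" {
  count = length(var.availability_zones)
}

resource "aws_nat_gateway" "per_az" {
  count         = length(var.availability_zones)
  allocation_id = aws_eip.nat_per_az[count.index].id
  subnet_id     = aws_subnet.public[count.index].id
  depends_on    = [aws_internet_gateway.main]
}

resource "aws_route_table" "private_per_az" {
  count  = length(var.availability_zones)
  vpc_id = aws_vpc.main.id

  route {
    cidr_block     = "0.0.0.0/0"
    nat_gateway_id = aws_nat_gateway.per_az[count.index].id
  }
}

resource "aws_route_table_association" "private_per_az" {
  count          = length(var.private_subnet_cidrs)
  subnet_id      = aws_subnet.private[count.index].id
  route_table_id = aws_route_table.private_per_az[count.index].id
}
```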
```hcl
# Application Load Balancer

resource "aws_security_group" "alb" {
  name        = "${var.project_prefix}-${var.environment}-alb-sg"
  description = "Controls access to the ALB"
  vpc_id      = aws_vpc.main.id

  ingress {
    protocol    = "tcp"
    from_port   = 80
    to_port     = 80
    cidr_blocks = ["0.0.0.0/0"]
    description = "Allow HTTP traffic from anywhere for HTTPS redirection"
  }

  ingress {
    protocol    = "tcp"
    from_port   = 443
    to_port     = 443
    cidr_blocks = ["0.0.0.0/0"]
    description = "Allow HTTPS traffic from anywhere"
  }

  egress {
    protocol    = "-1"
    from_port   = 0
    to_port     = 0
    cidr_blocks = ["0.0.0.0/0"]
    description = "Allow all outbound traffic"
  }

  tags = merge(
    var.tags,
    {
      Name = "${var.project_prefix}-${var.environment}-alb-sg"
    }
  )
}

resource "aws_lb" "main" {
  name               = "${var.project_prefix}-${var.environment}-alb"
  internal           = false
  load_balancer_type = "application"
  security_groups    = [aws_security_group.alb.id]
  subnets            = aws_subnet.public[*].id

  # Deletion protection is enabled for the prod environment; consider exposing
  # this via a dedicated variable.
  enable_deletion_protection = var.environment == "prod" ? true : false

  tags = merge(
    var.tags,
    {
      Name = "${var.project_prefix}-${var.environment}-alb"
    }
  )
}

resource "aws_lb_listener" "http" {
  load_balancer_arn = aws_lb.main.arn
  port              = 80
  protocol          = "HTTP"

  default_action {
    type = "redirect"
    redirect {
      port        = "443"
      protocol    = "HTTPS"
      status_code = "HTTP_301"
    }
  }
}

resource "aws_lb_listener" "https" {
  load_balancer_arn = aws_lb.main.arn
  port              = 443
  protocol          = "HTTPS"
  ssl_policy        = "ELBSecurityPolicy-2016-08"
  certificate_arn   = var.acm_certificate_arn

  default_action {
    type = "fixed-response"
    fixed_response {
      content_type = "text/plain"
      message_body = "404: Not Found. No listener rule configured for this path."
      status_code  = "404"
    }
  }
}
```
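Downstream modules (e.g., `modules/compute`, per the operational guide above) are expected to attach target groups and listener rules to this HTTPS listener, which is why its default action is a fixed 404 response. A minimal, hypothetical sketch of such a rule — names, ports, and paths are illustrative only:

```hcl
# Hypothetical example of a rule a downstream module could attach to the
# HTTPS listener; not part of this module.
resource "aws_lb_target_group" "frontend" {
  name        = "${var.project_prefix}-${var.environment}-frontend-tg"
  port        = 3000
  protocol    = "HTTP"
  vpc_id      = aws_vpc.main.id
  target_type = "ip"

  health_check {
    path = "/"
  }
}

resource "aws_lb_listener_rule" "frontend" {
  listener_arn = aws_lb_listener.https.arn
  priority     = 100

  action {
    type             = "forward"
    target_group_arn = aws_lb_target_group.frontend.arn
  }

  condition {
    path_pattern {
      values = ["/*"]
    }
  }
}
```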