12 changes: 12 additions & 0 deletions .pre-commit-config.yaml
@@ -86,3 +86,15 @@ repos:
rev: v2.6.0
hooks:
- id: pyproject-fmt

- repo: https://github.com/antonbabenko/pre-commit-terraform
rev: v1.92.0
hooks:
- id: terraform_fmt
files: \.tf$
- id: terraform_validate
files: \.tf$
- id: terraform_tflint
files: \.tf$
- id: terraform_trivy
files: \.tf$
146 changes: 146 additions & 0 deletions Terraform/Operational-Guide.md
@@ -0,0 +1,146 @@
# OWASP Nest - AWS Infrastructure Operational Guide

This document contains the complete operational guide for deploying and managing the OWASP Nest application infrastructure on AWS using Terraform. The project is designed to be modular, reusable, and secure, following industry best practices for managing infrastructure in a collaborative, open-source environment.

## Project Overview

This Terraform setup provisions a multi-environment (dev, staging, prod) infrastructure for the OWASP Nest application. It leverages a modular design to manage networking, compute, data, and storage resources independently.

- **Environments:** Code is organized under `environments/` to provide strong isolation between `dev`, `staging`, and `production`.
- **Modules:** Reusable components are defined in `modules/` for consistency and maintainability.
- **State Management:** Terraform state is stored remotely in an S3 bucket. State locking is managed by DynamoDB to prevent conflicts and ensure safe, concurrent operations by multiple contributors.
- **Security:** All sensitive data is managed via AWS Secrets Manager. Network access is restricted using a least-privilege security group model.

## Phased Rollout Plan

The infrastructure will be built and deployed in a series of focused Pull Requests to ensure each foundational layer is stable and well-reviewed before building on top of it.

1. **Phase 1: Foundational Networking (`modules/network`)**
2. **Phase 2: Data & Storage Tiers (`modules/database`, `modules/storage`, `modules/cache`)**
3. **Phase 3: Compute & IAM (`modules/compute`, `modules/iam`)**

This document will be updated as each phase is completed.

## Prerequisites

Before you begin, ensure you have the following tools installed and configured:

1. **Terraform:** [Install Terraform](https://learn.hashicorp.com/tutorials/terraform/install-cli) (Version 1.3.0 or newer recommended).
2. **AWS CLI:** [Install and configure the AWS CLI](https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-configure.html). You must have an active AWS profile configured with credentials that have sufficient permissions to create the resources.
3. **pre-commit:** [Install pre-commit](https://pre-commit.com/#installation) (`pip install pre-commit`).
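
A quick sanity check before continuing (the commands below are standard; exact output varies by machine):

```bash
# Confirm tool versions before proceeding.
terraform version       # Should report 1.3.0 or newer
aws --version
pre-commit --version

# Verifies that your AWS profile is configured and can authenticate.
aws sts get-caller-identity
```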

## Initial Setup (One-Time)

This infrastructure requires an S3 bucket and a DynamoDB table for managing Terraform's remote state. These must be created manually before you can run `terraform init`.
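
For reference, these two resources are consumed by each environment's S3 backend configuration. Below is a minimal sketch of such a block, written here as a shell heredoc; the bucket, key, and table names are placeholders, so match them to the values you create in the steps that follow:

```bash
# Illustrative backend block only; each environment defines its own.
# Replace the placeholder names with the real bucket and table below.
cat > backend.tf <<'EOF'
terraform {
  backend "s3" {
    bucket         = "owasp-nest-tfstate-<account-id>"
    key            = "dev/terraform.tfstate"
    region         = "us-east-1"
    dynamodb_table = "owasp-nest-tf-locks"
    encrypt        = true
  }
}
EOF
```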

**Note:** The following commands are examples. Please ensure the chosen S3 bucket name is globally unique.

1. **Define Environment Variables (Recommended):**
```bash
# Run these in your terminal to make the next steps easier
export AWS_REGION="us-east-1" # Or your preferred AWS region
export TF_STATE_BUCKET="owasp-nest-tfstate-$(aws sts get-caller-identity --query Account --output text)" # Creates a unique bucket name
export TF_STATE_LOCK_TABLE="owasp-nest-tf-locks"
```

2. **Create the S3 Bucket for Terraform State:**
*This bucket will store the `.tfstate` file, which is Terraform's map of your infrastructure.*
```bash
# For us-east-1, omit the --create-bucket-configuration flag entirely;
# S3 rejects an explicit LocationConstraint for that region.
aws s3api create-bucket \
--bucket ${TF_STATE_BUCKET} \
--region ${AWS_REGION} \
--create-bucket-configuration LocationConstraint=${AWS_REGION}

aws s3api put-bucket-versioning \
--bucket ${TF_STATE_BUCKET} \
--versioning-configuration Status=Enabled
```

3. **Create the DynamoDB Table for State Locking:**
*This table prevents multiple people from running `terraform apply` at the same time, which could corrupt the state file.*
```bash
aws dynamodb create-table \
--table-name ${TF_STATE_LOCK_TABLE} \
--attribute-definitions AttributeName=LockID,AttributeType=S \
--key-schema AttributeName=LockID,KeyType=HASH \
--billing-mode PAY_PER_REQUEST \
--region ${AWS_REGION}
```
*(Note: `PAY_PER_REQUEST` billing mode is more cost-effective than provisioned capacity for the infrequent reads and writes of a lock table.)*

4. **Install Pre-commit Hooks:**
*From the root of the repository, run this once to set up the automated code quality checks.*
```bash
pre-commit install
```
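
Two optional follow-ups to steps 2 and 3 above, assuming the environment variables from step 1 are still set. These are standard AWS CLI calls rather than required steps: they lock the state bucket down and wait for the lock table to become usable.

```bash
# Block all public access and enable default encryption on the state bucket.
aws s3api put-public-access-block \
--bucket ${TF_STATE_BUCKET} \
--public-access-block-configuration \
BlockPublicAcls=true,IgnorePublicAcls=true,BlockPublicPolicy=true,RestrictPublicBuckets=true

aws s3api put-bucket-encryption \
--bucket ${TF_STATE_BUCKET} \
--server-side-encryption-configuration \
'{"Rules":[{"ApplyServerSideEncryptionByDefault":{"SSEAlgorithm":"AES256"}}]}'

# Block until the DynamoDB lock table is ready to serve requests.
aws dynamodb wait table-exists \
--table-name ${TF_STATE_LOCK_TABLE} \
--region ${AWS_REGION}
```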

## Secret Population

This project provisions placeholders for secrets in AWS Secrets Manager but does **not** populate them with values. You must do this manually for each environment.

1. Navigate to the **AWS Secrets Manager** console in the correct region.
2. Find the secrets created by Terraform (e.g., `owasp-nest/dev/AppSecrets`, `owasp-nest/dev/DbCredentials`).
3. Click on a secret and choose **"Retrieve secret value"**.
4. Click **"Edit"** and populate the secret values.

- **For `DbCredentials`:** Use the "Plaintext" tab and create a JSON structure. Terraform will automatically generate a strong password, but you can override it here if needed.
```json
{
"username": "nestadmin",
"password": "a-very-strong-and-long-password"
}
```

- **For `AppSecrets`:** Use the "Plaintext" tab and create a key/value JSON structure for all required application secrets.
```json
{
"DJANGO_SECRET_KEY": "generate-a-strong-random-key-here",
"DJANGO_ALGOLIA_WRITE_API_KEY": "your-algolia-key",
"NEXT_PUBLIC_SENTRY_DSN": "your-sentry-dsn",
"GITHUB_TOKEN": "your-github-token"
}
```
5. Save the secret. Repeat for all required secrets and all environments.
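
If you prefer the CLI to the console, the same values can be pushed with `aws secretsmanager put-secret-value`. A sketch assuming the `dev` secret names above and a local `app-secrets.json` file (both the file name and the key-generation one-liner are illustrative):

```bash
# One option for generating a strong Django secret key.
python -c "import secrets; print(secrets.token_urlsafe(50))"

# Push a local JSON file as the new secret value; the secret name
# matches the dev example above, so adjust it per environment.
aws secretsmanager put-secret-value \
--secret-id owasp-nest/dev/AppSecrets \
--secret-string file://app-secrets.json

# Remove the plaintext copy once it has been uploaded.
rm app-secrets.json
```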

## Deployment Workflow

To deploy an environment, navigate to its directory and run the standard Terraform workflow.

1. **Navigate to the Environment Directory:**
```bash
cd Terraform/environments/dev
```


⚠️ Potential issue

🧩 Analysis chain

Verify environment directory path/casing.

Doc uses terraform/environments/dev, while repo paths appear under Terraform/. Ensure the path exists and matches case-sensitive filesystems.


🏁 Script executed:

```bash
#!/bin/bash
# Discover environment and module paths (case-insensitive) in repo
fd -i -t d '^environments$' -d 4
fd -i -t d '01-Network' -d 6
```

Length of output: 53


🏁 Script executed:

```bash
#!/bin/bash
set -euo pipefail
echo "== Listing Terraform/environments =="
ls -la Terraform/environments || true

echo
echo "== Searching for 'dev' directories under Terraform =="
fd -t d -i dev Terraform -d 4 || true

echo
echo "== Showing Operational-Guide.md around lines 100-120 =="
sed -n '100,120p' Terraform/Operational-Guide.md || true
```

Length of output: 991


Fix path casing and missing dev environment in docs

Terraform/Operational-Guide.md (lines 111–113) shows cd terraform/environments/dev, but the repo has Terraform/environments/ (only .gitkeep — no dev directory). Update the doc to use the correct path/casing (e.g., cd Terraform/environments/<env>) and either document creating the environment directory or add Terraform/environments/dev/ to the repo.

🤖 Prompt for AI Agents
In Terraform/Operational-Guide.md around lines 111 to 113, the documented path
uses the wrong casing and references a non-existent dev directory; update the
path to use the actual repo casing and a placeholder (e.g., cd
Terraform/environments/<env>) and either add a short step that instructs users
to create the environment directory (mkdir -p Terraform/environments/dev &&
touch Terraform/environments/dev/.gitkeep) or add Terraform/environments/dev/ to
the repository so the example works as written; ensure the text matches the
repository layout and casing consistently.

2. **Create a `terraform.tfvars` file:**
*Copy the example file. This file is where you will customize variables for the environment; a sketch of a filled-in file appears after this list.*
```bash
cp terraform.tfvars.example terraform.tfvars
# Now edit terraform.tfvars with your specific values (e.g., your AWS account ID, desired region).
```

3. **Initialize Terraform:**
*This downloads the necessary providers and configures the S3 backend.*
```bash
terraform init
```

4. **Plan the Deployment:**
*This creates an execution plan and shows you what changes will be made. Always review this carefully.*
```bash
terraform plan
```

5. **Apply the Changes:**
*This provisions the infrastructure on AWS. You will be prompted to confirm.*
```bash
terraform apply
```
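
As mentioned in step 2, here is a sketch of what a filled-in `terraform.tfvars` might look like. The variable names mirror those used by the network module; the values are placeholders, and `terraform.tfvars.example` remains the authoritative reference:

```bash
# Placeholder values only; copy terraform.tfvars.example and edit it
# rather than trusting this sketch. Names mirror modules/01-Network.
cat > terraform.tfvars <<'EOF'
project_prefix       = "owasp-nest"
environment          = "dev"
vpc_cidr             = "10.0.0.0/16"
public_subnet_cidrs  = ["10.0.1.0/24", "10.0.2.0/24"]
private_subnet_cidrs = ["10.0.101.0/24", "10.0.102.0/24"]
availability_zones   = ["us-east-1a", "us-east-1b"]
acm_certificate_arn  = "arn:aws:acm:us-east-1:123456789012:certificate/example"
tags = {
  Project = "owasp-nest"
}
EOF
```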
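
For CI pipelines or careful production changes, the plan reviewed in step 4 can be saved to a file and applied verbatim, so the changes that run are exactly the ones that were approved:

```bash
# Save the reviewed plan, then apply that exact plan file.
terraform plan -out=tfplan
terraform apply tfplan
```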

## Module Overview

- **`modules/network`**: Creates the foundational networking layer, including the VPC, subnets, NAT Gateway, and Application Load Balancer.
- **`modules/database`**: Provisions the AWS RDS for PostgreSQL instance, including its subnet group and security group.
- **`modules/cache`**: Provisions the AWS ElastiCache for Redis cluster.
- **`modules/storage`**: Creates the S3 buckets for public static assets and private media uploads, configured with secure defaults.
- **`modules/compute`**: Provisions all compute resources: the ECS Fargate service for the frontend, the EC2 instance for cron jobs, and the necessary IAM roles and security groups for all services. It also configures the ALB routing rules.
- **`modules/iam`**: (Future) A dedicated module for creating the various IAM roles.
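
Each environment under `environments/` is a thin wrapper that composes these modules. Below is a minimal sketch of bootstrapping a `dev` environment that consumes the network module; the module arguments mirror the variables in `modules/01-Network/main.tf`, while the file layout itself is an assumption rather than the repository's actual code:

```bash
# Hypothetical bootstrap for a dev environment; adjust paths to the repo layout.
mkdir -p Terraform/environments/dev
cat > Terraform/environments/dev/main.tf <<'EOF'
module "network" {
  source = "../../modules/01-Network"

  project_prefix       = var.project_prefix
  environment          = "dev"
  vpc_cidr             = var.vpc_cidr
  public_subnet_cidrs  = var.public_subnet_cidrs
  private_subnet_cidrs = var.private_subnet_cidrs
  availability_zones   = var.availability_zones
  acm_certificate_arn  = var.acm_certificate_arn
  tags                 = var.tags
}
EOF
```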
Empty file added Terraform/environments/.gitkeep
Empty file.
Empty file added Terraform/modules/.gitkeep
Empty file.
227 changes: 227 additions & 0 deletions Terraform/modules/01-Network/main.tf
@@ -0,0 +1,227 @@
# VPC and Core Networking

resource "aws_vpc" "main" {
cidr_block = var.vpc_cidr
enable_dns_support = true
enable_dns_hostnames = true

tags = merge(
var.tags,
{
Name = "${var.project_prefix}-${var.environment}-vpc"
}
)
}

resource "aws_internet_gateway" "main" {
vpc_id = aws_vpc.main.id

tags = merge(
var.tags,
{
Name = "${var.project_prefix}-${var.environment}-igw"
}
)
}

# Subnets

# Deploys a public and private subnet into each specified Availability Zone.

resource "aws_subnet" "public" {
count = length(var.public_subnet_cidrs)
vpc_id = aws_vpc.main.id
cidr_block = var.public_subnet_cidrs[count.index]
availability_zone = var.availability_zones[count.index]
map_public_ip_on_launch = true

tags = merge(
var.tags,
{
Name = "${var.project_prefix}-${var.environment}-public-subnet-${var.availability_zones[count.index]}"
}
)
}

resource "aws_subnet" "private" {
count = length(var.private_subnet_cidrs)
vpc_id = aws_vpc.main.id
cidr_block = var.private_subnet_cidrs[count.index]
availability_zone = var.availability_zones[count.index]

tags = merge(
var.tags,
{
Name = "${var.project_prefix}-${var.environment}-private-subnet-${var.availability_zones[count.index]}"
}
)
}

# Routing and NAT Gateway for Private Subnets

# We create a SINGLE NAT Gateway and a SINGLE private route table. This is
# cheaper and simpler to manage than a per-AZ NAT Gateway, at the cost of
# some resilience: an outage in the NAT Gateway's AZ cuts off outbound
# traffic from all private subnets.

resource "aws_eip" "nat" {
# Only one EIP is needed for the single NAT Gateway.
tags = merge(
var.tags,
{
Name = "${var.project_prefix}-${var.environment}-nat-eip"
}
)
}

resource "aws_nat_gateway" "main" {
# Only one NAT Gateway, placed in the first public subnet for simplicity.
# AWS keeps a NAT Gateway healthy within its own AZ, but it does not fail
# over to another AZ automatically.
allocation_id = aws_eip.nat.id
subnet_id = aws_subnet.public[0].id

tags = merge(
var.tags,
{
Name = "${var.project_prefix}-${var.environment}-nat-gw"
}
)

depends_on = [aws_internet_gateway.main]
}

# A single route table for all public subnets.
resource "aws_route_table" "public" {
vpc_id = aws_vpc.main.id

route {
cidr_block = "0.0.0.0/0"
gateway_id = aws_internet_gateway.main.id
}

tags = merge(
var.tags,
{
Name = "${var.project_prefix}-${var.environment}-public-rt"
}
)
}

# Associating the single public route table with all public subnets.
resource "aws_route_table_association" "public" {
count = length(aws_subnet.public)
subnet_id = aws_subnet.public[count.index].id
route_table_id = aws_route_table.public.id
}

# A single route table for ALL private subnets, pointing to the single NAT Gateway.
resource "aws_route_table" "private" {
vpc_id = aws_vpc.main.id

route {
cidr_block = "0.0.0.0/0"
nat_gateway_id = aws_nat_gateway.main.id
}

tags = merge(
var.tags,
{
Name = "${var.project_prefix}-${var.environment}-private-rt"
}
)
}

# Associate the single private route table with all private subnets.
resource "aws_route_table_association" "private" {
count = length(aws_subnet.private)
subnet_id = aws_subnet.private[count.index].id
route_table_id = aws_route_table.private.id
}

# Application Load Balancer

resource "aws_security_group" "alb" {
name = "${var.project_prefix}-${var.environment}-alb-sg"
description = "Controls access to the ALB"
vpc_id = aws_vpc.main.id


ingress {
protocol = "tcp"
from_port = 80
to_port = 80
cidr_blocks = ["0.0.0.0/0"]
description = "Allow HTTP traffic from anywhere for HTTPS redirection"
}

ingress {
protocol = "tcp"
from_port = 443
to_port = 443
cidr_blocks = ["0.0.0.0/0"]
description = "Allow HTTPS traffic from anywhere"
}

egress {
protocol = "-1"
from_port = 0
to_port = 0
cidr_blocks = ["0.0.0.0/0"]
description = "Allow all outbound traffic"
}

tags = merge(
var.tags,
{
Name = "${var.project_prefix}-${var.environment}-alb-sg"
}
)
}

resource "aws_lb" "main" {
name = "${var.project_prefix}-${var.environment}-alb"
internal = false
load_balancer_type = "application"
security_groups = [aws_security_group.alb.id]
subnets = aws_subnet.public[*].id

# Deletion protection is enabled automatically for the prod environment.
enable_deletion_protection = var.environment == "prod"

tags = merge(
var.tags,
{
Name = "${var.project_prefix}-${var.environment}-alb"
}
)
}

resource "aws_lb_listener" "http" {
load_balancer_arn = aws_lb.main.arn
port = 80
protocol = "HTTP"

default_action {
type = "redirect"
redirect {
port = "443"
protocol = "HTTPS"
status_code = "HTTP_301"
}
}
}

resource "aws_lb_listener" "https" {
load_balancer_arn = aws_lb.main.arn
port = 443
protocol = "HTTPS"
ssl_policy = "ELBSecurityPolicy-2016-08"
certificate_arn = var.acm_certificate_arn

default_action {
type = "fixed-response"
fixed_response {
content_type = "text/plain"
message_body = "404: Not Found. No listener rule configured for this path."
status_code = "404"
}
}
}