diff --git a/README.md b/README.md new file mode 100644 index 0000000..f6e7a15 --- /dev/null +++ b/README.md @@ -0,0 +1,147 @@ +## TODOs + +- [x] Remove NixOS AMI generation dependency + - [x] PR to nixpkgs adding all the regions currently available in order to get NixOS + AMIs available on them + - [x] Make sure that the AMIs are available + - [x] Refactor the Terraform config to just use the available AMIs instead of generating + them +- [x] Figure out a way to hide the secrets + - [x] Change personal AWS tokens (due to commit history) + - [x] Confirm that IOHK tokens have never been committed +- [x] Add 2 different cardano-node versions to niv +- [x] Add a `let ... in` replacement for all the servers' `targetHost` + +## Steps to deploy with Terraform + +- There are regions that do not have NixOS AMIs, so one needs to generate one and upload it +to those: + - jp + - sg + - au + - br +- Export AWS Secrets +- ~~For that we need an S3 bucket in each region and a set of specific IAM roles (https://docs.aws.amazon.com/vm-import/latest/userguide/vmimport-image-import.html)~~ +- ~~It is not possible to copy official NixOS AMIs from other regions to the ones we need, + so we need to generate ours and upload them~~ + - ~~https://nixos.wiki/wiki/Install_NixOS_on_Amazon_EC2~~ + - ~~https://github.com/NixOS/nixpkgs/issues/85857~~ + - ~~The links above can help.~~ + - ~~Notes: change home_region, bucket and regions vars and edit lines to make_image_public + if needed.~~ +- ~~After that get the AMIs for each region and add them to the terraform configuration~~ + +_**NOTE:** As of the NixOS 22.05 release, AMIs for all AWS regions are available, so this step +is no longer needed_ + +## How to deploy + +In folder `dev-deployer-terraform`, there's `main.tf` that has the terraform +configuration to deploy NixOS machines on different AWS regions. 
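The `main.tf` in this diff resolves each region's NixOS AMI by converting nixpkgs' `amazon-ec2-amis.nix` into `amis.json` and indexing it per NixOS version, region, architecture, and store type. A minimal sketch of that lookup with `jq` (the JSON below is illustrative sample data and the AMI id is a made-up placeholder, not a real image):

```shell
# Illustrative only: the real amis.json is produced by main.tf via
# `nix eval --json -f amazon-ec2-amis.nix | jq > amis.json`.
cat > amis-sample.json <<'EOF'
{
  "22.05": {
    "eu-west-3": {
      "x86_64-linux": { "hvm-ebs": "ami-0123456789abcdef0" }
    }
  }
}
EOF

# Same lookup main.tf performs with
# jsondecode(...)[var.nixos-version][region]["x86_64-linux"]["hvm-ebs"]
jq -r '."22.05"."eu-west-3"."x86_64-linux"."hvm-ebs"' amis-sample.json
# prints ami-0123456789abcdef0
```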
~~This +Terraform config also runs 2 bash commands: 1 to create a NixOS image, +and a script to upload the image to an AWS bucket and import it as an image, +making it available in all the regions necessary.~~ + +To run the terraform config from a clean AWS configuration do the following: + +- Make sure all the regions you want are enabled; + - At the current time we do not have any instance in Bahrain, for example. + If we did, then we'd also need to enable global permissions, see: + https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_temp_enable-regions.html?icmpid=docs_iam_console +- Make sure the account has the necessary permissions; +- Make sure your credentials are correctly configured: + - `aws configure` +- ~~Make sure you edit `main.tf` and `create-ami.sh` to be in sync (e.g. S3 Bucket name, + home-regions, etc.);~~ +- Do `terraform init`; +- Do `terraform plan` to check you haven't forgotten anything; +- If everything looks good do `terraform apply` and let it run; +- After it has finished you should have: + - ~~An S3 bucket;~~ + - ~~A role and policy called `vmimport`;~~ + - A security group for each machine's region enabling traffic; + - A key pair for each machine's region; + - An EC2 instance. + +~~Due to the way S3 buckets work if something goes wrong during the plan execution, +you might not be able to perform `terraform destroy`. If that is the case you will have to +delete all the stuff by hand if you want to rerun a script from a clean state. Maybe you can +get without deleting everything and only the S3 bucket and then do `terraform destroy`. On +the other hand if everything finishes successfully you will be able to perform `terraform +destroy`, just make sure the bucket is empty before hand. NOTE: That the AMIs and +respective snapshots won't get deleted so you will have to delete those by hand.~~ + +~~I believe we'll only need to run the deployment once and if needed only rerun the script +to make NixOS AMIs available in new regions. 
For deployment we should run something like +`terraform apply -target=resource`.~~ + +_**NOTE:** As of the NixOS 22.05 release, AMIs for all AWS regions are available, so this +information is no longer accurate_ + +_THINGS TO HAVE IN MIND_: + +- ~~The `create-ami.sh` script will cache things in `$PWD/ami/ec2-images` so you might want +to delete that when trying to obtain a clean state;~~ +- ~~You ought to rename the bucket if wanting to rerun the deployment from a previously +deleted bucket since AWS might take some time to recognize that bucket was deleted.~~ + +After having run the terraform configuration, you have to manually get the public IPs +for each machine and add them to the nixops network configuration file (inside folder +`dev-deployer-nixops`). Then you should create a new nixops network with +`nixops create -d my-network network.nix` and then deploy it with `nixops deploy -d my-network`. +You should be able to get the IPs for each region by running the following command: + +`terraform show -json | jq '.values.root_module.child_modules[].resources[].values | "\(.availability_zone) : \(.public_ip)"' | grep -v "null : null"` + +_If one updates the NixOS version of the AMIs, be sure to also update the nixpkgs version in +niv to the same one._ + +nixops will try to ssh into the machines as root, so you might need to run: + +- `eval \`ssh-agent\`` +- `ssh-add ssh-keys/id_rsa_aws` + +Please _NOTE_ that if the machine you're using to deploy (local machine) has a different +or incompatible nixpkgs version from the one on the remote side (remote machine that is +going to get deployed) - you will notice this with strange errors such as +"service.zfs.expandOnBoot does not exist" - you will need to modify your deployment to use +a different nix path. 
So, after creating the deployment, if you get weird errors like the +one described previously: + +- Run `niv show` to get the nixpkgs version and url; +- Copy the nixpkgs url being used; +- Run `nixops modify -I nixpkgs=<nixpkgs-url> -d my-network network.nix`; +- Try again. + +If you want to further configure each individual server you can look into: +https://github.com/input-output-hk/cardano-node/blob/master/nix/nixos/cardano-node-service.nix#L136 +to see all the options available for configuration. + +## Material + +### Flakes + +- https://nixos.wiki/wiki/Flakes +- https://www.tweag.io/blog/2020-07-31-nixos-flakes/ + +### Deploy with nix + +- https://zimbatm.com/notes/deploying-to-aws-with-terraform-and-nix +- https://github.com/tweag/terraform-nixos +- https://github.com/colemickens/nixos-flake-example +- https://github.com/edolstra/flake-compat +- https://github.com/serokell/pegasus-infra +- https://github.com/serokell/deploy-rs + + +### Cardano + +- https://outline.zw3rk.com/share/15015d9b-a6c3-4a71-84fc-6c2ca0cac7cb +- https://github.com/input-output-hk/cardano-node/blob/master/doc/getting-started/building-the-node-using-nix.md + +### AWS + +- https://nixos.wiki/wiki/Install_NixOS_on_Amazon_EC2 +- https://github.com/nh2/nixos-ami-building +- https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/CopyingAMIs.html#ami-copy-steps + diff --git a/dev-deployer-nixops/network.nix b/dev-deployer-nixops/network.nix new file mode 100644 index 0000000..aa068ae --- /dev/null +++ b/dev-deployer-nixops/network.nix @@ -0,0 +1,473 @@ +let + # ifMainnet abstracts over the 2 cardano-node instances we will be running. + # i == 0 means the first instance that's going to run mainnet, other instances + # will run testnet. + ifMainnet = mainnet: testnet: i: if i == 0 then mainnet else testnet; + + # import pinned niv sources + sources = import ../nix/sources.nix; + pkgs = import sources.nixpkgs { }; + + # Double check that they are not pinned to the same version. 
+ # If you want to change the version of a particular branch, for example: + # niv update cardano-node-testnet -b + cardano-node-mainnet = (import sources.cardano-node-mainnet {}); + cardano-node-testnet = (import sources.cardano-node-testnet {}); + + # Machines IP addresses + af-south-1 = ""; + us-west-1 = ""; + sa-east-1 = ""; + us-east-2 = ""; + ap-southeast-1 = ""; + ap-southeast-2 = ""; + ap-northeast-1 = ""; + eu-west-3 = ""; + + mainnet-port = 7776; + testnet-port = 7777; + +in +{ + network.description = "IOHK Networking Team - Network"; + + # Each deployment creates a new profile generation to be able to run nixops + # rollback + network.enableRollback = true; + + # Common configuration shared between all servers + defaults = { config, ... }: { + # import nixos modules: + # - Amazon image configuration (that was used to create the AMI) + # - The cardano-node-service nixos module + imports = [ + "${sources.nixpkgs.outPath}/nixos/modules/virtualisation/amazon-image.nix" + + # Doesn't matter if we use the mainnet or testnet ones since we are going to + # overwrite the cardano-node packages in the cardano-node service if needed. + # + # I am making the assumption that it does not matter (at least for now) which + # service version we import here. + cardano-node-mainnet.nixosModules.cardano-node + ]; + + # Packages to be installed system-wide. We need at least cardano-node + environment = { + systemPackages = with pkgs; [ + vim + yq + jq + ]; + }; + + # Needed according to: + # https://www.mikemcgirr.com/blog/2020-05-01-deploying-a-blog-with-terraform-and-nixos.html + ec2.hvm = true; + + services.cardano-node = { + enable = true; + instances = 2; + useNewTopology = true; + + # If you wish to overwrite the cardano-node package to a different one. + # By default it runs the cardano-node-mainnet one. 
# You ought to put this on a particular server instead of in the default attribute + + # cardanoNodePackages = + # cardano-node-mainnet.legacyPackages.x86_64-linux.cardanoNodePackages; + + + # Note that in the `systemctl status` call we are going to see the instance + # running with a database file called 'db-testnet-0', since the default environment + # is "testnet" and the db file is named after this environment variable. + # 'extraNodeInstanceConfig' does not overwrite the environment variable, + # only the nodeConfig values, so, although misleading, we will be running + # a mainnet node with a 'db-testnet-0' file. + + extraNodeInstanceConfig = + ifMainnet config.services.cardano-node.environments.mainnet.nodeConfig + config.services.cardano-node.environments.testnet.nodeConfig; + + # We cannot programmatically give a particular environment to each + # instance, but luckily we can programmatically give different + # producers/publicProducers (local and public root peers in P2P) + # depending on the instance. + # + # And due to the first fact we have to manually change the iohk + # relays depending on the correct instance environment. + + publicProducers = [ ]; + instancePublicProducers = + ifMainnet [{ + accessPoints = [{ + address = "relays-new.cardano-mainnet.iohk.io"; + port = 3001; + }]; + advertise = false; + }] + [{ + accessPoints = [{ + address = "relays-new.cardano-testnet.iohk.io"; + port = 3001; + }]; + advertise = false; + }]; + }; + }; + + # Server definitions + + server-us-west = { config, pkgs, nodes, ... 
}: { + # Says we are going to deploy to an already existing NixOS machine + deployment.targetHost = us-west-1; + + # cardano-node service configuration + services.cardano-node = { + instanceProducers = + ifMainnet [ { accessPoints = [ + { address = nodes.server-us-east.config.deployment.targetHost; + port = mainnet-port; + } + { address = nodes.server-br.config.deployment.targetHost; + port = mainnet-port; + } + { address = nodes.server-jp.config.deployment.targetHost; + port = mainnet-port; + } + { address = nodes.server-au.config.deployment.targetHost; + port = mainnet-port; + } + ]; + advertise = false; + valency = 4; + } + ] + [ { accessPoints = [ + { address = nodes.server-us-east.config.deployment.targetHost; + port = testnet-port; + } + { address = nodes.server-br.config.deployment.targetHost; + port = testnet-port; + } + { address = nodes.server-jp.config.deployment.targetHost; + port = testnet-port; + } + { address = nodes.server-au.config.deployment.targetHost; + port = testnet-port; + } + { address = "13.52.93.226"; + port = testnet-port; + } + ]; + advertise = false; + valency = 4; + } + ]; + }; + }; + + server-us-east = { config, pkgs, nodes, ... 
}: { + deployment.targetHost = us-east-2; + + # cardano-node service configuration + services.cardano-node = { + instanceProducers = + ifMainnet [ { accessPoints = [ + { address = nodes.server-eu.config.deployment.targetHost; + port = mainnet-port; + } + { address = nodes.server-us-west.config.deployment.targetHost; + port = mainnet-port; + } + { address = nodes.server-br.config.deployment.targetHost; + port = mainnet-port; + } + ]; + advertise = false; + valency = 3; + } + ] + [ { accessPoints = [ + { address = nodes.server-eu.config.deployment.targetHost; + port = testnet-port; + } + { address = nodes.server-us-west.config.deployment.targetHost; + port = testnet-port; + } + { address = nodes.server-br.config.deployment.targetHost; + port = testnet-port; + } + { address = "3.142.182.220"; + port = 3001; + } + ]; + advertise = false; + valency = 4; + } + ]; + }; + }; + + server-jp = { config, pkgs, nodes, ... }: { + deployment.targetHost = ap-northeast-1; + + # cardano-node service configuration + services.cardano-node = { + instanceProducers = + ifMainnet [ { accessPoints = [ + { address = nodes.server-sg.config.deployment.targetHost; + port = mainnet-port; + } + { address = nodes.server-us-west.config.deployment.targetHost; + port = mainnet-port; + } + { address = nodes.server-au.config.deployment.targetHost; + port = mainnet-port; + } + ]; + advertise = false; + valency = 3; + } + ] + [ { accessPoints = [ + { address = nodes.server-sg.config.deployment.targetHost; + port = testnet-port; + } + { address = nodes.server-us-west.config.deployment.targetHost; + port = testnet-port; + } + { address = nodes.server-au.config.deployment.targetHost; + port = testnet-port; + } + { address = "54.238.39.214"; + port = testnet-port; + } + ]; + advertise = false; + valency = 4; + } + ]; + }; + }; + + server-sg = { config, pkgs, nodes, ... 
}: { + deployment.targetHost = ap-southeast-1; + + # cardano-node service configuration + services.cardano-node = { + instanceProducers = + ifMainnet [ { accessPoints = [ + { address = nodes.server-sa.config.deployment.targetHost; + port = mainnet-port; + } + { address = nodes.server-eu.config.deployment.targetHost; + port = mainnet-port; + } + { address = nodes.server-au.config.deployment.targetHost; + port = mainnet-port; + } + { address = nodes.server-jp.config.deployment.targetHost; + port = mainnet-port; + } + ]; + advertise = false; + valency = 4; + } + ] + [ { accessPoints = [ + { address = nodes.server-sa.config.deployment.targetHost; + port = testnet-port; + } + { address = nodes.server-eu.config.deployment.targetHost; + port = testnet-port; + } + { address = nodes.server-au.config.deployment.targetHost; + port = testnet-port; + } + { address = nodes.server-jp.config.deployment.targetHost; + port = testnet-port; + } + { address = "52.74.94.66"; + port = testnet-port; + } + ]; + advertise = false; + valency = 5; + } + ]; + }; + }; + + server-au = { config, pkgs, nodes, ... }: { + deployment.targetHost = ap-southeast-2; + + # cardano-node service configuration + services.cardano-node = { + instanceProducers = + ifMainnet [ { accessPoints = [ + { address = nodes.server-sg.config.deployment.targetHost; + port = mainnet-port; + } + { address = nodes.server-jp.config.deployment.targetHost; + port = mainnet-port; + } + { address = nodes.server-us-west.config.deployment.targetHost; + port = mainnet-port; + } + ]; + advertise = false; + valency = 3; + } + ] + [ { accessPoints = [ + { address = nodes.server-sg.config.deployment.targetHost; + port = testnet-port; + } + { address = nodes.server-jp.config.deployment.targetHost; + port = testnet-port; + } + { address = nodes.server-us-west.config.deployment.targetHost; + port = testnet-port; + } + ]; + advertise = false; + valency = 3; + } + ]; + }; + }; + + server-br = { config, pkgs, nodes, ... 
}: { + deployment.targetHost = sa-east-1; + + # cardano-node service configuration + services.cardano-node = { + instanceProducers = + ifMainnet [ { accessPoints = [ + { address = nodes.server-sa.config.deployment.targetHost; + port = mainnet-port; + } + { address = nodes.server-us-west.config.deployment.targetHost; + port = mainnet-port; + } + { address = nodes.server-us-east.config.deployment.targetHost; + port = mainnet-port; + } + ]; + advertise = false; + valency = 3; + } + ] + [ { accessPoints = [ + { address = nodes.server-sa.config.deployment.targetHost; + port = testnet-port; + } + { address = nodes.server-us-west.config.deployment.targetHost; + port = testnet-port; + } + { address = nodes.server-us-east.config.deployment.targetHost; + port = testnet-port; + } + { address = "18.229.177.239"; + port = testnet-port; + } + ]; + advertise = false; + valency = 4; + } + ]; + }; + }; + + server-sa = { config, pkgs, nodes, ... }: { + deployment.targetHost = af-south-1; + + # cardano-node service configuration + services.cardano-node = { + instanceProducers = + ifMainnet [ { accessPoints = [ + { address = nodes.server-sg.config.deployment.targetHost; + port = mainnet-port; + } + { address = nodes.server-br.config.deployment.targetHost; + port = mainnet-port; + } + { address = nodes.server-eu.config.deployment.targetHost; + port = mainnet-port; + } + ]; + advertise = false; + valency = 3; + } + ] + [ { accessPoints = [ + { address = nodes.server-sg.config.deployment.targetHost; + port = testnet-port; + } + { address = nodes.server-br.config.deployment.targetHost; + port = testnet-port; + } + { address = nodes.server-eu.config.deployment.targetHost; + port = testnet-port; + } + ]; + advertise = false; + valency = 3; + } + ]; + }; + }; + + server-eu = { config, pkgs, nodes, ... 
}: { + deployment.targetHost = eu-west-3; + + # cardano-node service configuration + services.cardano-node = { + instanceProducers = + ifMainnet [ { accessPoints = [ + { address = nodes.server-sa.config.deployment.targetHost; + port = mainnet-port; + } + { address = nodes.server-sg.config.deployment.targetHost; + port = mainnet-port; + } + { address = nodes.server-us-east.config.deployment.targetHost; + port = mainnet-port; + } + { address = "88.99.169.172"; + port = mainnet-port; + } + { address = "95.217.1.58"; + port = mainnet-port; + } + ]; + advertise = false; + valency = 5; + } + ] + [ { accessPoints = [ + { address = nodes.server-sa.config.deployment.targetHost; + port = testnet-port; + } + { address = nodes.server-sg.config.deployment.targetHost; + port = testnet-port; + } + { address = nodes.server-us-east.config.deployment.targetHost; + port = testnet-port; + } + { + address = "88.99.169.172"; + port = testnet-port; + } + { + address = "18.169.36.236"; + port = 3001; + } + ]; + advertise = false; + valency = 5; + } + ]; + }; + }; +} diff --git a/dev-deployer-terraform/main.tf b/dev-deployer-terraform/main.tf new file mode 100644 index 0000000..2f18c03 --- /dev/null +++ b/dev-deployer-terraform/main.tf @@ -0,0 +1,99 @@ +# This will be the provider region where the bucket is going to be created. +# Note that this region should be the same home_region in the create-amis.sh +# file. 
+# +# Source for AMIs: +# https://github.com/NixOS/nixpkgs/blob/master/nixos/modules/virtualisation/amazon-ec2-amis.nix#L445 +provider "aws" { + # Configuration options + region = "eu-west-1" +} + +variable "nixos-version" { default = "22.05" } + +# Assuming the AMIs for older NixOS versions do not change with each new release +resource "null_resource" "get-amis" { + depends_on = [] + provisioner "local-exec" { + command = "wget https://raw.githubusercontent.com/NixOS/nixpkgs/master/nixos/modules/virtualisation/amazon-ec2-amis.nix" + } +} + +resource "null_resource" "create-ami-json" { + depends_on = [ null_resource.get-amis ] + provisioner "local-exec" { + command = "nix eval --json -f amazon-ec2-amis.nix | jq > amis.json" + } +} + +# Load the AMI information file +# +data "local_file" "created-amis" { + depends_on = [ null_resource.create-ami-json ] + filename = "${path.module}/amis.json" +} + +module "us-west" { + source = "./modules/multi-region" + ami = jsondecode(data.local_file.created-amis.content)[var.nixos-version]["${data.aws_region.us-west.name}"]["x86_64-linux"]["hvm-ebs"] + providers = { + aws = aws.us-west + } +} + +module "us-east" { + source = "./modules/multi-region" + ami = jsondecode(data.local_file.created-amis.content)[var.nixos-version]["${data.aws_region.us-east.name}"]["x86_64-linux"]["hvm-ebs"] + providers = { + aws = aws.us-east + } +} + +module "jp" { + source = "./modules/multi-region" + ami = jsondecode(data.local_file.created-amis.content)[var.nixos-version]["${data.aws_region.jp.name}"]["x86_64-linux"]["hvm-ebs"] + providers = { + aws = aws.jp + } +} + +module "sg" { + source = "./modules/multi-region" + ami = jsondecode(data.local_file.created-amis.content)[var.nixos-version]["${data.aws_region.sg.name}"]["x86_64-linux"]["hvm-ebs"] + providers = { + aws = aws.sg + } +} + +module "au" { + source = "./modules/multi-region" + ami = 
jsondecode(data.local_file.created-amis.content)[var.nixos-version]["${data.aws_region.au.name}"]["x86_64-linux"]["hvm-ebs"] + providers = { + aws = aws.au + } +} + +module "br" { + source = "./modules/multi-region" + ami = jsondecode(data.local_file.created-amis.content)[var.nixos-version]["${data.aws_region.br.name}"]["x86_64-linux"]["hvm-ebs"] + providers = { + aws = aws.br + } +} + +module "sa" { + source = "./modules/multi-region" + ami = jsondecode(data.local_file.created-amis.content)[var.nixos-version]["${data.aws_region.sa.name}"]["x86_64-linux"]["hvm-ebs"] + instance_type = "t3.2xlarge" + providers = { + aws = aws.sa + } +} + +module "eu" { + source = "./modules/multi-region" + ami = jsondecode(data.local_file.created-amis.content)[var.nixos-version]["${data.aws_region.eu.name}"]["x86_64-linux"]["hvm-ebs"] + providers = { + aws = aws.eu + } +} diff --git a/dev-deployer-terraform/modules/multi-region/main.tf b/dev-deployer-terraform/modules/multi-region/main.tf new file mode 100644 index 0000000..bc6a779 --- /dev/null +++ b/dev-deployer-terraform/modules/multi-region/main.tf @@ -0,0 +1,85 @@ +variable "ami" {} +variable "instance_type" { default = "t3a.2xlarge" } + +# If universal access is needed from another machine this is the place to add it. 
+# ---v dev-deployer machine +variable "allowed-to-access" { default = [ "3.124.147.122/32" + ] } + +terraform { + required_providers { + aws = { + source = "hashicorp/aws" + } + } +} + +resource "aws_security_group" "allow_deployer_sg" { + name = "allow_deployer" + #vpc_id = aws_default_vpc.default.id + + ingress { + description = "allow all from deployer" + from_port = 0 + to_port = 0 + protocol = "-1" + cidr_blocks = var.allowed-to-access + } +} + +resource "aws_security_group" "allow_all_sg" { + name = "allow_all_sg" + #vpc_id = aws_default_vpc.default.id + + ingress { + description = "allow all hosts to access any port greater than 1023 (IPv4)" + from_port = 1024 + to_port = 65535 + protocol = "tcp" + cidr_blocks = ["0.0.0.0/0"] + } + + ingress { + description = "allow all hosts to access any port greater than 1023 (IPv6)" + from_port = 1024 + to_port = 65535 + protocol = "tcp" + ipv6_cidr_blocks = ["::/0"] + } + + egress { + description = "" + from_port = 0 + to_port = 0 + protocol = "-1" + cidr_blocks = ["0.0.0.0/0"] + } + + egress { + description = "" + from_port = 0 + to_port = 0 + protocol = "-1" + ipv6_cidr_blocks = ["::/0"] + } +} + +resource "aws_key_pair" "admin_kp" { + key_name = "admin_kp" + public_key = "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC7DmrNylD3NQ0Z1I0i9EqNiPp++gXxgDIrno4jJIR8RBE3oFXjhbY0yJUZt9Tn4uDESQfL0INDO0alWb79OURUL7fKaeXWqbcYolUiCqaxM2bPf8s4giTY0JdG7xaBJq5jSlO34+l1p7DV+tyTHTUYN69jgrc+FMLuQcVDKrXeBKnbyt4YD/hXOuX898D0P554CmM/OMzs0x3DAboqgjmBhoMbdpBqeO6Wmc663SP9D2sTHyOuuUBJFFK9mPNstLMMLJGHsPzzQxsGTp8bwl2yOu9Z3gEp9tC6uvLUHW+P3OCh1vFsLCgYi6L4q/RKAqWni6Oc3i/5i9rF+mJqmBRV7E1zfe9CY9clSSgWgN6vLbhKEIzRvXfzHY+1zUpcziL0aniet0s2yGq5yRhlJWRM9BC/LTcEA8lJJUdEI1C0sI3iEYYKGgKefFhbjTGTNsBk0CzbFKQJqKeMH/wQRBu1sZJwk9khRissDoNGHVWSY9CwS+z/IOfzSDT5eAG5C3M= dev@dev-deployer" +} + +resource "aws_instance" "cad-2694-node" { + ami = var.ami + instance_type = var.instance_type + key_name = "admin_kp" + ebs_optimized = true + root_block_device { + volume_size = 90 
+ volume_type = "standard" + } + security_groups = [ + aws_security_group.allow_deployer_sg.name, + aws_security_group.allow_all_sg.name + ] +} diff --git a/dev-deployer-terraform/providers.tf b/dev-deployer-terraform/providers.tf new file mode 100644 index 0000000..a3fc18b --- /dev/null +++ b/dev-deployer-terraform/providers.tf @@ -0,0 +1,87 @@ +terraform { + required_providers { + aws = { + source = "hashicorp/aws" + } + } +} + +provider "aws" { + alias = "us-west" + region = "us-west-1" +} + +# Needed to get the region for this particular provider +data "aws_region" "us-west" { + provider = aws.us-west +} + +provider "aws" { + alias = "us-east" + region = "us-east-2" +} + +# Needed to get the region for this particular provider +data "aws_region" "us-east" { + provider = aws.us-east +} + +provider "aws" { + alias = "jp" + region = "ap-northeast-1" +} + +# Needed to get the region for this particular provider +data "aws_region" "jp" { + provider = aws.jp +} + +provider "aws" { + alias = "sg" + region = "ap-southeast-1" +} + +# Needed to get the region for this particular provider +data "aws_region" "sg" { + provider = aws.sg +} + +provider "aws" { + alias = "au" + region = "ap-southeast-2" +} + +# Needed to get the region for this particular provider +data "aws_region" "au" { + provider = aws.au +} + +provider "aws" { + alias = "br" + region = "sa-east-1" +} + +# Needed to get the region for this particular provider +data "aws_region" "br" { + provider = aws.br +} + +provider "aws" { + alias = "sa" + region = "af-south-1" +} + +# Needed to get the region for this particular provider +data "aws_region" "sa" { + provider = aws.sa +} + +provider "aws" { + alias = "eu" + region = "eu-west-3" +} + +# Needed to get the region for this particular provider +data "aws_region" "eu" { + provider = aws.eu +} diff --git a/nix/sources.json b/nix/sources.json new file mode 100644 index 0000000..25f1d13 --- /dev/null +++ b/nix/sources.json @@ -0,0 +1,38 @@ +{ + 
"cardano-node-mainnet": { + "branch": "master", + "description": "The core component that is used to participate in a Cardano decentralised blockchain.", + "homepage": "https://cardano.org", + "owner": "input-output-hk", + "repo": "cardano-node", + "rev": "ca78153653c19f4704fe7ce9a5a32297bf86a211", + "sha256": "1rah24gvbnyprlk3liq18vswxn1v1ypk61n2a48kn6w4y196h2jv", + "type": "tarball", + "url": "https://github.com/input-output-hk/cardano-node/archive/ca78153653c19f4704fe7ce9a5a32297bf86a211.tar.gz", + "url_template": "https://github.com///archive/.tar.gz" + }, + "cardano-node-testnet": { + "branch": "master", + "description": "The core component that is used to participate in a Cardano decentralised blockchain.", + "homepage": "https://cardano.org", + "owner": "input-output-hk", + "repo": "cardano-node", + "rev": "ca78153653c19f4704fe7ce9a5a32297bf86a211", + "sha256": "1rah24gvbnyprlk3liq18vswxn1v1ypk61n2a48kn6w4y196h2jv", + "type": "tarball", + "url": "https://github.com/input-output-hk/cardano-node/archive/ca78153653c19f4704fe7ce9a5a32297bf86a211.tar.gz", + "url_template": "https://github.com///archive/.tar.gz" + }, + "nixpkgs": { + "branch": "22.05", + "description": "Nix Packages collection", + "homepage": "", + "owner": "NixOS", + "repo": "nixpkgs", + "rev": "ce6aa13369b667ac2542593170993504932eb836", + "sha256": "0d643wp3l77hv2pmg2fi7vyxn4rwy0iyr8djcw1h5x72315ck9ik", + "type": "tarball", + "url": "https://github.com/NixOS/nixpkgs/archive/ce6aa13369b667ac2542593170993504932eb836.tar.gz", + "url_template": "https://github.com///archive/.tar.gz" + } +} diff --git a/nix/sources.nix b/nix/sources.nix new file mode 100644 index 0000000..9a01c8a --- /dev/null +++ b/nix/sources.nix @@ -0,0 +1,194 @@ +# This file has been generated by Niv. + +let + + # + # The fetchers. fetch_ fetches specs of type . 
+ # + + fetch_file = pkgs: name: spec: + let + name' = sanitizeName name + "-src"; + in + if spec.builtin or true then + builtins_fetchurl { inherit (spec) url sha256; name = name'; } + else + pkgs.fetchurl { inherit (spec) url sha256; name = name'; }; + + fetch_tarball = pkgs: name: spec: + let + name' = sanitizeName name + "-src"; + in + if spec.builtin or true then + builtins_fetchTarball { name = name'; inherit (spec) url sha256; } + else + pkgs.fetchzip { name = name'; inherit (spec) url sha256; }; + + fetch_git = name: spec: + let + ref = + if spec ? ref then spec.ref else + if spec ? branch then "refs/heads/${spec.branch}" else + if spec ? tag then "refs/tags/${spec.tag}" else + abort "In git source '${name}': Please specify `ref`, `tag` or `branch`!"; + submodules = if spec ? submodules then spec.submodules else false; + submoduleArg = + let + nixSupportsSubmodules = builtins.compareVersions builtins.nixVersion "2.4" >= 0; + emptyArgWithWarning = + if submodules == true + then + builtins.trace + ( + "The niv input \"${name}\" uses submodules " + + "but your nix's (${builtins.nixVersion}) builtins.fetchGit " + + "does not support them" + ) + {} + else {}; + in + if nixSupportsSubmodules + then { inherit submodules; } + else emptyArgWithWarning; + in + builtins.fetchGit + ({ url = spec.repo; inherit (spec) rev; inherit ref; } // submoduleArg); + + fetch_local = spec: spec.path; + + fetch_builtin-tarball = name: throw + ''[${name}] The niv type "builtin-tarball" is deprecated. You should instead use `builtin = true`. + $ niv modify ${name} -a type=tarball -a builtin=true''; + + fetch_builtin-url = name: throw + ''[${name}] The niv type "builtin-url" will soon be deprecated. You should instead use `builtin = true`. 
$ niv modify ${name} -a type=file -a builtin=true''; + + # + # Various helpers + # + + # https://github.com/NixOS/nixpkgs/pull/83241/files#diff-c6f540a4f3bfa4b0e8b6bafd4cd54e8bR695 + sanitizeName = name: + ( + concatMapStrings (s: if builtins.isList s then "-" else s) + ( + builtins.split "[^[:alnum:]+._?=-]+" + ((x: builtins.elemAt (builtins.match "\\.*(.*)" x) 0) name) + ) + ); + + # The set of packages used when specs are fetched using non-builtins. + mkPkgs = sources: system: + let + sourcesNixpkgs = + import (builtins_fetchTarball { inherit (sources.nixpkgs) url sha256; }) { inherit system; }; + hasNixpkgsPath = builtins.any (x: x.prefix == "nixpkgs") builtins.nixPath; + hasThisAsNixpkgsPath = <nixpkgs> == ./.; + in + if builtins.hasAttr "nixpkgs" sources + then sourcesNixpkgs + else if hasNixpkgsPath && ! hasThisAsNixpkgsPath then + import <nixpkgs> {} + else + abort + '' + Please specify either <nixpkgs> (through -I or NIX_PATH=nixpkgs=...) or + add a package called "nixpkgs" to your sources.json. + ''; + + # The actual fetching function. + fetch = pkgs: name: spec: + + if ! builtins.hasAttr "type" spec then + abort "ERROR: niv spec ${name} does not have a 'type' attribute" + else if spec.type == "file" then fetch_file pkgs name spec + else if spec.type == "tarball" then fetch_tarball pkgs name spec + else if spec.type == "git" then fetch_git name spec + else if spec.type == "local" then fetch_local spec + else if spec.type == "builtin-tarball" then fetch_builtin-tarball name + else if spec.type == "builtin-url" then fetch_builtin-url name + else + abort "ERROR: niv spec ${name} has unknown type ${builtins.toJSON spec.type}"; + + # If the environment variable NIV_OVERRIDE_${name} is set, then use + # the path directly as opposed to the fetched source. 
+ replace = name: drv: + let + saneName = stringAsChars (c: if isNull (builtins.match "[a-zA-Z0-9]" c) then "_" else c) name; + ersatz = builtins.getEnv "NIV_OVERRIDE_${saneName}"; + in + if ersatz == "" then drv else + # this turns the string into an actual Nix path (for both absolute and + # relative paths) + if builtins.substring 0 1 ersatz == "/" then /. + ersatz else /. + builtins.getEnv "PWD" + "/${ersatz}"; + + # Ports of functions for older nix versions + + # a Nix version of mapAttrs if the built-in doesn't exist + mapAttrs = builtins.mapAttrs or ( + f: set: with builtins; + listToAttrs (map (attr: { name = attr; value = f attr set.${attr}; }) (attrNames set)) + ); + + # https://github.com/NixOS/nixpkgs/blob/0258808f5744ca980b9a1f24fe0b1e6f0fecee9c/lib/lists.nix#L295 + range = first: last: if first > last then [] else builtins.genList (n: first + n) (last - first + 1); + + # https://github.com/NixOS/nixpkgs/blob/0258808f5744ca980b9a1f24fe0b1e6f0fecee9c/lib/strings.nix#L257 + stringToCharacters = s: map (p: builtins.substring p 1 s) (range 0 (builtins.stringLength s - 1)); + + # https://github.com/NixOS/nixpkgs/blob/0258808f5744ca980b9a1f24fe0b1e6f0fecee9c/lib/strings.nix#L269 + stringAsChars = f: s: concatStrings (map f (stringToCharacters s)); + concatMapStrings = f: list: concatStrings (map f list); + concatStrings = builtins.concatStringsSep ""; + + # https://github.com/NixOS/nixpkgs/blob/8a9f58a375c401b96da862d969f66429def1d118/lib/attrsets.nix#L331 + optionalAttrs = cond: as: if cond then as else {}; + + # fetchTarball version that is compatible between all the versions of Nix + builtins_fetchTarball = { url, name ? 
null, sha256 }@attrs: + let + inherit (builtins) lessThan nixVersion fetchTarball; + in + if lessThan nixVersion "1.12" then + fetchTarball ({ inherit url; } // (optionalAttrs (!isNull name) { inherit name; })) + else + fetchTarball attrs; + + # fetchurl version that is compatible between all the versions of Nix + builtins_fetchurl = { url, name ? null, sha256 }@attrs: + let + inherit (builtins) lessThan nixVersion fetchurl; + in + if lessThan nixVersion "1.12" then + fetchurl ({ inherit url; } // (optionalAttrs (!isNull name) { inherit name; })) + else + fetchurl attrs; + + # Create the final "sources" from the config + mkSources = config: + mapAttrs ( + name: spec: + if builtins.hasAttr "outPath" spec + then abort + "The values in sources.json should not have an 'outPath' attribute" + else + spec // { outPath = replace name (fetch config.pkgs name spec); } + ) config.sources; + + # The "config" used by the fetchers + mkConfig = + { sourcesFile ? if builtins.pathExists ./sources.json then ./sources.json else null + , sources ? if isNull sourcesFile then {} else builtins.fromJSON (builtins.readFile sourcesFile) + , system ? builtins.currentSystem + , pkgs ? mkPkgs sources system + }: rec { + # The sources, i.e. the attribute set of spec name to spec + inherit sources; + + # The "pkgs" (evaluated nixpkgs) to use for e.g. non-builtin fetchers + inherit pkgs; + }; + +in +mkSources (mkConfig {}) // { __functor = _: settings: mkSources (mkConfig settings); } diff --git a/shell.nix b/shell.nix new file mode 100644 index 0000000..c1c2966 --- /dev/null +++ b/shell.nix @@ -0,0 +1,15 @@ +# import pinned niv sources +let sources = import ./nix/sources.nix; + pkgs = import sources.nixpkgs { }; + +in pkgs.mkShell { + # nativeBuildInputs is usually what you want -- tools you need to run + nativeBuildInputs = [ pkgs.awscli2 + pkgs.terraform + pkgs.ec2-api-tools + pkgs.ec2-ami-tools + pkgs.nixops + pkgs.nix + pkgs.wget + ]; + }