Rion edited this page Aug 20, 2024 · 37 revisions

Huge thanks to: https://github.com/keithvassallomt/parsec-aws-automation

This guide explains how to create a cloud workstation/gaming server on AWS.

The purpose of this guide is to:

  • Explain how to use this script on EC2.
  • Allow you to install any game or app you want, on a powerful system capable of most tasks.
  • Do it as cheaply as possible.
  • Achieve advanced features like DLSS and Ray Tracing.

Pricing Overview

Before you get started, you should understand the pricing. This overview assumes some costs are covered by the AWS free tier (beyond just the 12 months). The first cost is the instance itself. Ten hours on a G4DN.xlarge instance (us-west-2) costs about $2.50. For storage kept as a cold HDD the entire month, you only pay 0.015 * your storage amount in gigabytes. For example, 128 gigabytes totals 0.015 * 128, which equals $1.92. That is cheap compared to snapshot pricing, which is $6.40 for the same storage kept the entire month. The more you store, the better the savings: snapshot pricing for 512 gigabytes is $25.60 versus $7.68 for a cold HDD.
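As a quick sanity check, the arithmetic above can be sketched in Python. The per-GB-month rates here are assumptions based on us-west-2 EBS pricing at the time of writing; check the current EBS pricing page for your region.

```python
# Assumed us-west-2 rates at time of writing -- verify before relying on them.
SC1_RATE = 0.015      # cold HDD (sc1), $/GB-month
SNAPSHOT_RATE = 0.05  # EBS snapshot, $/GB-month

def monthly_storage_cost(gigabytes: float, rate: float) -> float:
    """Cost of keeping `gigabytes` of storage for a full month at `rate`."""
    return round(gigabytes * rate, 2)

for size in (128, 512):
    sc1 = monthly_storage_cost(size, SC1_RATE)
    snap = monthly_storage_cost(size, SNAPSHOT_RATE)
    print(f"{size} GB: sc1 ${sc1:.2f}/month vs snapshot ${snap:.2f}/month")
```

The gap widens linearly with volume size, which is why the cold-HDD conversion in step 4 pays off most for large game libraries.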

Another major cost on AWS is outgoing bandwidth. You may never hit the free limit of 100 GB per month of data leaving AWS (such as connecting and streaming with Sunshine), but if you want to save bandwidth, you can lower the allowed bitrate in Parsec and Moonlight. Data going INTO AWS (downloading games or lots of files onto your machine) is free without limit, so you can download as much or as little as you need. Also, if you didn't know, your instance's internet connection is usually very fast: in most cases you get at least 1 Gbps with very low ping and symmetrical speeds, so you won't sit and wait on game and file downloads forever.
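To estimate your own egress bill, here is a hedged sketch. The $0.09/GB overage rate is an assumption based on typical AWS data-transfer-out pricing for the first pricing tier; it varies by region, so treat the output as a ballpark.

```python
# Assumed figures: 100 GB/month free egress, $0.09/GB beyond that.
FREE_EGRESS_GB = 100
EGRESS_RATE = 0.09  # $/GB beyond the free allowance (region-dependent)

def egress_cost(gb_out: float) -> float:
    """Monthly egress bill in dollars; data INTO AWS is always free."""
    return round(max(0, gb_out - FREE_EGRESS_GB) * EGRESS_RATE, 2)

# Example: streaming at 15 Mbit/s for 40 hours in a month
gb_streamed = 15 / 8 * 3600 * 40 / 1000   # Mbit/s -> MB/s -> MB -> GB
print(f"About {gb_streamed:.0f} GB out -> ${egress_cost(gb_streamed)}")
```

Lowering the stream bitrate in Parsec or Moonlight scales `gb_streamed` down proportionally, which is the lever the paragraph above is referring to.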

Pricing/power compared to other services

The most direct comparison you can make with AWS is against alternative cloud providers like Shadow and Paperspace. Shadow's Boost tier costs $30/month. According to Shadow, it has a GTX 1080 (usually just an equivalent like the P5000), eight vCores, 12 gigabytes of RAM, a 512 gigabyte SSD by default with an optional HDD for expansion, a normal Windows 10 install with key, and a 1 Gbps network connection.

The equivalent to this for AWS is the G4dn.2xlarge. Based in Oregon, AWS has a few benefits.

  • The cost is $0.4436/hour as a spot instance. This is an upgrade from Shadow without being too expensive.

  • The RAM on this instance is 32 gigs, double from Shadow.

  • The GPU is from the 20 series while Shadow's is from the 10 series.

  • The CPU has the same core count.

  • Storage is upgradeable without a separate drive or using an HDD.

This makes Shadow's biggest benefits storage, flat monthly pricing with no time limits or hourly metering, and Windows 10. That matters because some people use their cloud computers for long stretches and don't want to worry about how much time is used, and some want the Microsoft Store.

Paperspace, the next option, is pretty similar to AWS itself. The biggest factor here is cost. You can get 2 TB of storage, but expect to pay $120/month, while on AWS the same amount of storage is technically $30/month (when using this script). If you want an RTX4000 (a really great and cost-effective GPU option on Paperspace), it's about $0.56/hour, which does not include IP addresses.

This is a large contrast with AWS, which can go as low as $0.23/hour and includes a free IP address under the 12 month free tier. Paperspace may offer free bandwidth, but at the cost of paying for a static IP address for as long as the instance exists, even while it's off. You can mitigate this using Tailscale. However, not everyone can use a VPN.

Paperspace is still a good choice because you can get a lot of power from Paperspace, the cards on offer are built for Workstation purposes, plus they offer a student discount. There's even an option for Windows 10 if you bring your own key. It's simplified for most people, so you might have an easier time using Paperspace instead of AWS.

If you're not looking to roll your own server, you can choose a prebuilt cloud gaming service. Options like GeForce Now, Xbox Cloud Gaming, and Amazon Luna are available. Xbox Cloud Gaming requires a subscription to Game Pass Ultimate, priced at $19.99/month, while GeForce Now starts at $9.99/month. These are cheap prices, but both services have limitations, such as restricted game libraries and platform compatibility issues. For example, some games on GeForce Now only support Steam or Epic copies, excluding titles purchased on platforms like GOG. Additionally, indie games, certain mods, custom textures, and PC multiplayer may not be fully supported depending on the title and platform. Amazon Luna, priced at $9.99/month, offers a smooth experience seemingly on the same AWS infrastructure, but its game library is also limited. It may not seem like it after reading about these downsides, but given their cost and ease of access, these services may offer a better experience or a cheaper bill than rolling your own server.

Let's get started. First, create an AWS account. It is free to register and you do not need to select a paid support plan. Once you provide billing info, you'll have an account ready to begin step 1.

Step 1: Request a limit increase

AWS, like many cloud providers, relies on you contacting them and asking for a limit increase. This is something only you can do. To do so, open the "Service Quotas" dashboard by searching for it.

Screenshot showing service quotas in search

Then go to AWS Services, search for EC2, and select "Amazon Elastic Compute Cloud (Amazon EC2)".

Inside, search for "G" and select "All G and VT Spot Instance Requests", then press "Request quota increase" to begin the process. From here, specify that you want four vCPUs. This limit can be larger, but four is the minimum.

Search results

AWS may close your support case, but don't worry; just reopen the case and explain your reasoning for wanting the limit increase. If your account is too new, they may not approve you. To get your account approved, say something like

"Hello, I was interested in starting a small Workstation instance on EC2 so I can improve my workflow and access my work from wherever I am. I am simply requesting a limit increase of (4 or 8) vCPUs in the us-blank-2 region."

Do not copy and paste. State your own reason, state what applications you want to use, and be polite and thankful!

Check your email frequently for a response to your request.

Step 2: Creating a Windows Server

There are two main Workstation instance types you'll be interacting with: G5 and G4DN instances. G4DN instances include a pretty capable but dated GPU, the Tesla T4, based on the Turing architecture. G5 instances are more modern: they include a beefier 2nd gen AMD server CPU and custom NVIDIA A10G Tensor Core GPUs, which AWS claims are roughly equivalent to an RTX 30 series card. For most, the choice comes down to availability and pricing.

The region you want can differ from the one you're closest to. For example, Northern California (us-west-1) has spot instances, but nearby Oregon's spot instances are not only cheaper but also include the more powerful G5 instances. Evaluate the best option for you on the spot instance pricing page on AWS.

Once you've decided on what instance you want to launch, login to your AWS dashboard after receiving a confirmation email and talking to AWS support about your limit increase. Then search for EC2 on the main console page which will bring you to the EC2 dashboard.

Then click the orange "Launch instances" button on the top right of the screen. You'll then be brought to the launch an instance screen.

Under name and tag, click "Add additional tags", type in the name of your instance, and select "Volumes" under "Resource types" so the volume is named along with the instance.

Demonstration of proper tagging

Under image, select Windows. Keep in mind this is not Windows 10 but Windows Server, which does not work with Game Pass games or the Microsoft Store. You can import Windows 10 into EC2, but most people don't need to and it costs extra.

The instance recommended by this script is the G4DN instance type (the Tesla T4).

Image showing GD4N selected

Underneath key pair (login), create a new key pair; this is used to retrieve the "Administrator" password.

Key pair screen

Before you continue, check out the main wiki page which explains some of the key differences between streaming technologies. If you want the way AWS recommends and a simple setup for this section, try NiceDCV.

Once you have your preferred choice, you need to set up your security group (this is like a firewall). The ports you need open depend on what application you plan to use to stream.

The ports for Sunshine are:

TCP: 35043 47984 47989 47995 47996 48010

UDP: 47998 47999 48000 48010

The port for NiceDCV is:

TCP AND UDP: 8443

Parsec does not need any ports open.
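If you'd rather script the security group than click through the console, the Sunshine port list above can be expressed as an ingress payload for boto3's `authorize_security_group_ingress`. This is a sketch: the function only builds the rule data, and the `sg-...` group ID shown in the comment is a placeholder you'd fill in with your own.

```python
# Sunshine's required ports, as listed above
SUNSHINE_TCP = [35043, 47984, 47989, 47995, 47996, 48010]
SUNSHINE_UDP = [47998, 47999, 48000, 48010]

def ingress_rules(tcp_ports, udp_ports, cidr="0.0.0.0/0"):
    """Build an IpPermissions payload for authorize_security_group_ingress."""
    rules = []
    for proto, ports in (("tcp", tcp_ports), ("udp", udp_ports)):
        for port in ports:
            rules.append({
                "IpProtocol": proto,
                "FromPort": port,
                "ToPort": port,
                "IpRanges": [{"CidrIp": cidr}],
            })
    return rules

rules = ingress_rules(SUNSHINE_TCP, SUNSHINE_UDP)
# Then apply with boto3, e.g.:
#   ec2 = boto3.client("ec2")
#   ec2.authorize_security_group_ingress(GroupId="sg-...", IpPermissions=rules)
print(f"{len(rules)} ingress rules generated")
```

Using `0.0.0.0/0` matches what the console steps below do: open to the whole internet. You can pass a narrower CIDR (like your home IP) to tighten it.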

To edit the security group, select the edit button and make your changes.

Edit button

Then select "Add security group rule" and add your ports in. The image below is an example, do this for all ports that need to be opened.

Port example

Doing this opens the server to connections from anything on the public internet, which you may need for certain devices lacking a direct connection method, like old game consoles or restricted/enterprise-enrolled devices.

The next thing that needs configuring is your storage amount. You can choose considerable amounts (like 1 TB), but keep in mind that you still must pay for this storage. The recommended amount is 256 GB; for larger libraries, 512 GB is still affordable. When creating this volume, please make sure to uncheck "Delete on termination" by selecting the blue "Advanced" link in the top right corner. Below is an image with the settings you need to click or configure highlighted. You'll also want to change the volume type to GP3 for performance and value.

Highlighted image of storage settings

Finally, under "Advanced details" (click to expand), select the radio button to turn on spot instances.

Request spot instances checked

Now you're ready to launch your instance. Under the summary on the right-hand side, you should see something similar to the image below along with the button to launch the instance. This is your chance to verify everything is set up correctly.

Summary

When you are ready and have verified everything, click "Launch instance" to start provisioning the server. Keep in mind it could take up to 4 minutes for the instance to fully provision and allow you to copy your password. With your instance selected, click Connect, select the "RDP client" tab, and copy the IP address of the instance. During setup you were prompted to create a key file, which you downloaded to your computer. Upload that file here to get your password. Keep track of this IP address and save the password in your password manager. To connect, we need basic access through RDP. Every Windows computer comes with an RDP client: just click the Windows flag/logo/start menu, then type RDP.

RDP client

Not using Windows? There are options for other OSes, such as Microsoft's Remote Desktop app from the App Store (for Mac) or Remmina (for Linux).

You can install Remmina by opening your terminal and typing: sudo apt install remmina

Finally, you're now in your GPU accelerated instance ready to move on.

Step 3: Using this script

Before beginning, you'll need root keys. Normally you could use another method, but later on these keys will be used locally, which benefits from full account control.

To create keys, visit the [IAM dashboard](https://console.aws.amazon.com/iam/home?/security_credentials#/security_credentials). Scroll down to access keys, then select create access keys, which will provide your new root keys. Save them somewhere safe, as you'll need them on the server to get video drivers working.

Click the start menu inside your server and click the PowerShell tile on the start menu. Then copy-paste this code into the PowerShell command prompt.

[Net.ServicePointManager]::SecurityProtocol = "tls12, tls11, tls"
$DownloadScript = "https://github.com/chocolatemoo53/cloudstreaming/archive/refs/heads/main.zip"
$ArchivePath = "$ENV:UserProfile\Downloads\cloudstreaming"
(New-Object System.Net.WebClient).DownloadFile($DownloadScript, "$ArchivePath.zip")
Expand-Archive "$ArchivePath.zip" -DestinationPath $ArchivePath -Force
Set-Location "$ArchivePath\cloudstreaming-main"; powershell.exe .\welcome.ps1

Some important notes when going through the script on your server:

  • Use the automatic login option with Sunshine and Parsec. Avoid it with NiceDCV.

  • If you're using Sunshine, saying yes to the headless display/monitor is required.

  • If you're a Sunshine user only, turn hardware-accelerated GPU scheduling (HAGS) on in Windows settings, under Graphics Settings in Display.

HAGs setting on

Picture Source

Now you can connect to your instance through whichever method you chose.

If it's through Sunshine, open the Sunshine dashboard at https://localhost:47990 on the server. Then go to Moonlight and connect to your IP address (at least for now). On the Sunshine dashboard, select the PIN tab and input the PIN shown on your Moonlight device. Restart the server, then connect using the Desktop option.

For both Parsec and Sunshine, if you see only a black screen when connecting, or you see your desktop but cannot interact with it, make sure "capture system keys" is enabled in Moonlight's settings, then press Windows Key and P (or Command and P), use the UP arrow once or the DOWN arrow twice, and hit Enter. At some point your display will become usable. If not, something is wrong with your setup.

If you're using NiceDCV, simply copy the IP address, download the NiceDCV client, and paste the IP into the address field. Grab your credentials from earlier and log in to the Administrator account. That's all you need for NiceDCV.

You can now turn off your instance by shutting down within Windows. After a couple of minutes, check in the AWS dashboard that your storage is still there; otherwise, your data will be gone. Do not install your applications or games until you have completed, or decided to skip, the next step.

Step 4: Making storage cheaper

Plan to only use your server temporarily (like a couple days before never using it again)? You may not need these next steps. For everyone else, this saves considerable amounts of money! Please consider setting this up.

Initially, this section contained a Lambda function by TechGuru, which simply took a snapshot of your storage because that was cheaper than keeping a volume active on AWS. The problem is that snapshots are still quite expensive. There is a better option: a "cold hard drive" (sc1), which is extremely cheap, paired with a different Lambda function created just for this guide. The cold HDD is really slow, so you cannot run games or other workloads from it without it feeling miserable, but you can still use it for its cost benefits while the instance is off.

Technically, just using snapshots for this is perfectly fine, as the cost of incremental snapshots after the first should be lower than what you initially paid. However, if you leave a snapshot sitting unused for a couple of months, you'll still be charged a significant amount each month to keep it even though you didn't use it. This process minimizes the cost of storage both when it sits completely unused and when it's used frequently, something plain snapshots can do, but not as cost-effectively.

The idea is this:

First, you install your stuff onto regular GP3 EBS storage: average storage, pretty good for most games and other activities. Afterward, you turn off your instance and the script converts the GP3 storage into a snapshot. Instead of just keeping the snapshot, though, it converts the snapshot into a cold hard drive and then deletes the snapshot. According to AWS, "You incur charges based on the size of your snapshot and the length of time that you keep the snapshot." Therefore, the cost should be relatively low, as we are not keeping these snapshots alive for long.

You might ask why a snapshot is needed at all if we're just converting. The reason a temporary snapshot is created is that a GP3 volume cannot be natively converted to a different volume type. The snapshot takes a point-in-time "picture" of your data, which can then be restored onto a different volume type. The snapshot is just a carrier for the data: once the data is on the new drive, you can delete it, and the data sits on that drive until you need it again.

First, search for Lambda in the AWS console and click the create function button.

Select Python and then put this into the lambda_function.py box. You may have to scroll to see this.

import boto3
import botocore

instance_name = 'Workstation'
instance_region = 'us-west-2'

def lambda_handler(event, context):
    # Connect to the region
    ec2 = boto3.client('ec2', region_name=instance_region)

    # Get all available (detached) GP3 volumes tagged with the instance's name
    volumes = ec2.describe_volumes(Filters=[
        {'Name': 'status', 'Values': ['available']},
        {'Name': 'tag:Name', 'Values': [instance_name]},
        {'Name': 'volume-type', 'Values': ['gp3']}
    ])['Volumes']

    for gp3_volume in volumes:
        # Create a snapshot of the GP3 volume
        snapshot_response = ec2.create_snapshot(
            VolumeId=gp3_volume['VolumeId'],
            Description=f"Snapshot for {instance_name}")
        snapshot_id = snapshot_response['SnapshotId']

        # Wait for the snapshot to be completed
        ec2.get_waiter('snapshot_completed').wait(SnapshotIds=[snapshot_id])

        print(f'Snapshot {snapshot_id} created.')

        # Create a COLD HDD (sc1) volume from the snapshot,
        # tagged with the instance's name
        cold_hdd_response = ec2.create_volume(
            SnapshotId=snapshot_id,
            VolumeType='sc1',
            AvailabilityZone=gp3_volume['AvailabilityZone'],
            TagSpecifications=[
                {
                    'ResourceType': 'volume',
                    'Tags': [
                        {
                            'Key': 'Name',
                            'Value': instance_name
                        },
                    ]
                },
            ]
        )
        cold_hdd_volume_id = cold_hdd_response['VolumeId']

        # Wait for the new volume before removing its source snapshot
        ec2.get_waiter('volume_available').wait(VolumeIds=[cold_hdd_volume_id])

        print(f'New sc1 volume {cold_hdd_volume_id} created.')

        # Delete the snapshot and the GP3 volume
        ec2.delete_snapshot(SnapshotId=snapshot_id)
        print(f'Snapshot {snapshot_id} deleted.')
        ec2.delete_volume(VolumeId=gp3_volume['VolumeId'])
        print(f'GP3 volume {gp3_volume["VolumeId"]} deleted.')

Now, go to the "General configuration" tab and click on edit.

General configuration tab

First, you'll want to change the timeout to the max time which is 15 minutes.

Then, select your role at the bottom via the "View the xxxxxxx-role-xxxx role on the IAM console" link, and add the AmazonEC2FullAccess permission with the "Add permissions" button at the top right. This will allow the function to manage your EC2 resources.

Now, go to EventBridge by searching for it in the console and select your region. Create a new rule on the right-hand side, with any name and description, and click next. It will ask you for an event pattern: select EC2 as the service, "EC2 Instance State-change Notification" as the event type, then "terminated" as the specific state. Under targets, select Lambda as the target type and choose your ConvertGP3toHDD function. This will appear in the Lambda dashboard.
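If you prefer the JSON editor over the console dropdowns, the steps above correspond to a custom event pattern like this (a sketch of the standard EC2 state-change pattern; verify it against what the console generates for you):

```json
{
  "source": ["aws.ec2"],
  "detail-type": ["EC2 Instance State-change Notification"],
  "detail": {
    "state": ["terminated"]
  }
}
```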

Now, you need to "test" it, which performs the actual process the script is meant to do. Click on the Test tab and then select Invoke in the top right corner. Your logs should essentially say what the image says.

Lambda logs

It may fail at first after waiting a while; this happens when there is not enough time for the function to fully create a snapshot. However, snapshots finish much faster after the initial one. Be careful after installing games or other tasks involving large files, as these may make the Lambda function time out.

Step 4.5: Automating instance start/setup

Technically, the step above won't work at its best if you don't set this up, but it is still optional if you're not looking to run a script on your local computer to make instance startup easier, i.e. starting your instance without going directly to AWS.

To begin, install the AWS CLI on your personal computer and then type aws configure in your computer's terminal: PowerShell on Windows by default, or the Terminal application on macOS and Linux (the Windows Terminal app from the Microsoft Store works too). It will prompt for the credentials you used for video drivers earlier (the root keys). Set the output to json and the region to your Workstation's region.

AWS configure terminal

Image by TechGuru

This part depends on your computer, if you're using Windows, you can use this script:

For Windows

# Define parameters
$InstanceName = 'yourinstancename'
$TargetInstanceType = 'g4dn.xlarge'  # Specify the desired instance type
$SecurityGroupId = 'sg-xxxxxxxxxxxxxx'  # Specify the desired security group ID
$Region = 'yourinstanceregion'  # Specify your AWS region

# Get the volume ID for the instance
$VolumeId = aws ec2 describe-volumes --filters "Name=tag:Name,Values=$InstanceName" `
    "Name=status,Values=available" "Name=volume-type,Values=sc1" `
    --query "Volumes[0].VolumeId" --output text --region $Region

if (-not $VolumeId -or $VolumeId -eq "None") {
    Write-Error "Error: Unable to retrieve valid volume ID for the specified instance name."
    exit 1
}

# Create a snapshot of the volume
$SnapshotId = aws ec2 create-snapshot --volume-id $VolumeId --description "Snapshot for AMI" `
    --query "SnapshotId" --output text --region $Region

# Wait for the snapshot to be completed
Write-Host "Waiting for snapshot $SnapshotId to complete..."
do {
    Start-Sleep -Seconds 10
    $SnapshotStatus = aws ec2 describe-snapshots --snapshot-ids $SnapshotId --query "Snapshots[0].State" --output text --region $Region
} while ($SnapshotStatus -ne 'completed')

# Register an AMI from the snapshot
$AMIId = aws ec2 register-image --block-device-mappings '[{"DeviceName":"/dev/sda1","Ebs":{"SnapshotId":"'$SnapshotId'","VolumeType":"gp3","DeleteOnTermination":false}}]' `
    --name "AMI for $InstanceName" --description "AMI created from cold HDD snapshot" `
    --architecture x86_64 --root-device-name "/dev/sda1" --query "ImageId" --output text --region $Region

Write-Host "AMI $AMIId registered from snapshot $SnapshotId"

# Request a Spot instance with specified parameters
$RequestId = aws ec2 request-spot-instances --instance-count 1 --type "one-time" `
    --launch-specification "{\"ImageId\":\"$AMIId\",\"InstanceType\":\"$TargetInstanceType\",\"SecurityGroupIds\":[\"$SecurityGroupId\"]}" `
    --query "SpotInstanceRequests[0].SpotInstanceRequestId" --output text --region $Region

Write-Host "Spot instance requested with ID: $RequestId"

# Wait for the Spot instance request to be fulfilled
Write-Host "Waiting for Spot instance request $RequestId to be fulfilled..."
do {
    Start-Sleep -Seconds 10
    $SpotRequestStatus = aws ec2 describe-spot-instance-requests --spot-instance-request-ids $RequestId --query "SpotInstanceRequests[0].Status.Code" --output text --region $Region
} while ($SpotRequestStatus -ne 'fulfilled')

# Get the Spot instance ID
$InstanceId = aws ec2 describe-spot-instance-requests --spot-instance-request-ids $RequestId --query "SpotInstanceRequests[0].InstanceId" --output text --region $Region

if (-not $InstanceId -or $InstanceId -eq "None") {
    Write-Error "Error: Unable to get spot instance ID, it may not have provisioned. Try again later."
    Write-Host "Deleting AMI and snapshot..."
    aws ec2 deregister-image --image-id $AMIId --region $Region
    aws ec2 delete-snapshot --snapshot-id $SnapshotId --region $Region
    exit 1
}

Write-Host "Spot instance $InstanceId launched from AMI $AMIId with instance type $TargetInstanceType, gp3 volume, and security group $SecurityGroupId"

# Tagging the launched instance with the specified name tag
aws ec2 create-tags --resources $InstanceId --tags Key=Name,Value=$InstanceName --region $Region

# Get GP3 volume
$GP3VolumeId = aws ec2 describe-volumes --filters "Name=volume-type,Values=gp3" --query "Volumes[0].VolumeId" --output text --region $Region

# Tagging the storage with the specified name tag
aws ec2 create-tags --resources $GP3VolumeId --tags Key=Name,Value=$InstanceName --region $Region

Write-Host "Tags added to the launched instance and storage."

Write-Host "Deleting cold HDD, AMI, and snapshot..."
aws ec2 deregister-image --image-id $AMIId --region $Region
aws ec2 delete-snapshot --snapshot-id $SnapshotId --region $Region
aws ec2 delete-volume --volume-id $VolumeId --region $Region

For Linux/MacOS

#!/bin/bash

INSTANCE_NAME='yourinstancename'
TARGET_INSTANCE_TYPE='g4dn.xlarge' # You'll want to put the literal name like "g5.xlarge" 
SECURITY_GROUP_ID='sg-xxxxxxxxxxxxxx'  # Specify the desired security group ID
REGION='yourinstanceregion'  # Specify your AWS region

# Get the volume ID for the instance
VOLUME_ID=$(aws ec2 describe-volumes --filters "Name=tag:Name,Values=$INSTANCE_NAME" \
    "Name=status,Values=available" "Name=volume-type,Values=sc1" \
    --query "Volumes[0].VolumeId" --output text --region $REGION)

if [ -z "$VOLUME_ID" ] || [ "$VOLUME_ID" == "None" ]; then
    echo "Error: Unable to retrieve valid volume ID for the specified instance name."
    exit 1
fi

# Create a snapshot of the volume
SNAPSHOT_ID=$(aws ec2 create-snapshot --volume-id $VOLUME_ID --description "Snapshot for AMI" \
    --query "SnapshotId" --output text --region $REGION)

# Wait for the snapshot to be completed
aws ec2 wait snapshot-completed --snapshot-ids $SNAPSHOT_ID --region $REGION

# Register an AMI from the snapshot
AMI_ID=$(aws ec2 register-image  --block-device-mappings '[{"DeviceName":"/dev/sda1","Ebs":{"SnapshotId":"'$SNAPSHOT_ID'","VolumeType":"gp3","DeleteOnTermination":false}}]' \
    --name "AMI for $INSTANCE_NAME" --description "AMI created from cold HDD snapshot" \
    --architecture x86_64 \
    --root-device-name "/dev/sda1" --query "ImageId" --output text --region $REGION)

echo "AMI $AMI_ID registered from snapshot $SNAPSHOT_ID"

# Request a Spot instance with specified parameters
REQUEST_ID=$(aws ec2 request-spot-instances --instance-count 1 --type "one-time" \
    --launch-specification "{\"ImageId\":\"$AMI_ID\",\"InstanceType\":\"$TARGET_INSTANCE_TYPE\",\"SecurityGroupIds\":[\"$SECURITY_GROUP_ID\"]}" \
    --query "SpotInstanceRequests[0].SpotInstanceRequestId" --output text --region $REGION)

echo "Spot instance requested with ID: $REQUEST_ID"

# Wait for the Spot instance request to be fulfilled
aws ec2 wait spot-instance-request-fulfilled --spot-instance-request-ids $REQUEST_ID --region $REGION

# Get the Spot instance ID
INSTANCE_ID=$(aws ec2 describe-spot-instance-requests --spot-instance-request-ids $REQUEST_ID \
    --query "SpotInstanceRequests[0].InstanceId" --output text --region $REGION)

if [ -z "$INSTANCE_ID" ] || [ "$INSTANCE_ID" == "None" ]; then
    echo "Error: Unable to get spot instance ID, it may not have provisioned. Try again later."
    echo "Deleting AMI and snapshot..."
    aws ec2 deregister-image --image-id $AMI_ID --region $REGION
    aws ec2 delete-snapshot --snapshot-id $SNAPSHOT_ID --region $REGION
    exit 1
fi

echo "Spot instance $INSTANCE_ID launched from AMI $AMI_ID with instance type $TARGET_INSTANCE_TYPE, gp3 volume, and security group $SECURITY_GROUP_ID"

# Tagging the launched instance with the specified name tag
aws ec2 create-tags --resources $INSTANCE_ID --tags Key=Name,Value=$INSTANCE_NAME --region $REGION

# Get GP3 volume
GP3VOLUME_ID=$(aws ec2 describe-volumes --filters "Name=volume-type,Values=gp3" --query "Volumes[0].VolumeId" --output text --region $REGION)

# Tagging the storage with the specified name tag
aws ec2 create-tags --resources $GP3VOLUME_ID --tags Key=Name,Value=$INSTANCE_NAME --region $REGION

echo "Tags added to the launched instance and storage."

echo "Deleting cold HDD, AMI, and snapshot..."
aws ec2 deregister-image --image-id $AMI_ID --region $REGION
aws ec2 delete-snapshot --snapshot-id $SNAPSHOT_ID --region $REGION
aws ec2 delete-volume --volume-id $VOLUME_ID --region $REGION  

Simply use a program like Notepad++ to edit the values at the beginning of the file with your information. Save the file with a simple name like "start-server" and the extension .sh on Mac or Linux, or .ps1 on Windows. Now, whenever you're ready to start your server, just launch the script from your terminal by typing ./start-server.sh (or .ps1), depending on your OS.

Be careful, however: if anything goes wrong during this process and you don't notice, you could spend more money than you intended or delete data you meant to keep. Monitor the AWS dashboard every once in a while to verify everything is in working order and as you expected.

Step 5: Tailscale or DDNS

Tailscale is a VPN service that fixes the big issue of unpairing in Moonlight and provides a consistent connection address for NiceDCV. Using Tailscale has quite a few benefits while also restricting access to your server to only your Tailscale devices. It even lets you skip IP addresses entirely, as you can make a connection with just a word via their MagicDNS service. It is recommended you use Tailscale as the means to connect to your instance, as latency should be low.

After 12 months, you will run out of the free IPv4 allowance and AWS will charge you to have a public IPv4 address assigned on EC2. The script below may not work in scenarios where IPv6 is the only address type assigned to the instance, in which case Tailscale is recommended.

Tailscale is not always a perfect solution, because you may not be allowed to use a VPN in all circumstances. In that case, just use Duck DNS (which also happens to be hosted on AWS): create an account using one of the many sign-in options at the top. Once logged in, go to domains and create a new domain like below.

Duck DNS new domain

Create a new Lambda function and select the latest version of Python (this was tested with version 3.11) then from there, go to the "Code" tab, if not already there, and paste this code into the empty box. Don't forget to change the domain, token and region, along with supplying your instance name. Then click deploy to save the code.

import http.client
import boto3
import botocore

duckdns_domain = 'yourdomain' # Just the domain part, exclude duckdns.org
duckdns_token = 'yourtoken'
instance_region = 'yourinstanceregion'
instance_name = 'yourinstancename'

def lambda_handler(event, context):
    try:
        # Connect to the region
        ec2 = boto3.client('ec2', region_name=instance_region)

        # Find the instance by its Name tag
        response = ec2.describe_instances(Filters=[{'Name': 'tag:Name', 'Values': [instance_name]}])

        # Extract the public IP address from the response
        public_ip_address = response['Reservations'][0]['Instances'][0]['PublicIpAddress']

        print(f"Sending a request to DuckDNS to update the public IP address of {instance_name}, currently {public_ip_address}")

        # Build the DuckDNS update path
        # (http.client expects the request path, not the full URL)
        duckdns_update_path = f"/update?domains={duckdns_domain}&token={duckdns_token}&ip={public_ip_address}"

        # Make an HTTPS request to update the DuckDNS record
        connection = http.client.HTTPSConnection("www.duckdns.org")
        connection.request("GET", duckdns_update_path)
        response = connection.getresponse()
        print(f"DuckDNS responded with status {response.status}")

    except botocore.exceptions.ClientError as e:
        error_code = e.response['Error']['Code']
        print(f"AWS Error: {error_code} - {e.response['Error']['Message']}")
    except Exception as e:
        print(f"Error: {e}")

Then go to the "General configuration" tab and click on edit.

General configuration tab

Then, select your role at the bottom via the "View the xxxxxxx-role-xxxx role on the IAM console" link, and add the AmazonEC2FullAccess permission with the "Add permissions" button at the top right. This will allow the function to manage your EC2 resources.

Now, go to EventBridge by searching for it in the console and select your region. Create a new rule on the right-hand side, with any name and description, and click next. It will ask you for an event pattern: select EC2 as the service, "EC2 Instance State-change Notification" as the event type, then "running" and/or "pending" as the specific state(s). Under targets, select Lambda as the target type and choose your DuckDNS function.

Now you can go to Moonlight and use your Duck DNS address to connect!

Connection to Moonlight example

Enjoy your instance!

Finally, the end result is a very cost-effective workstation instance on AWS, running a very powerful workstation GPU ready for lots of tasks. We accomplished this first by requesting the best-priced instances, using spot instances. Then we used this script to quickly set up the basics needed for our instance to run and stream a low latency picture from our machine. Afterward, we installed the best video drivers for our use case. We moved on to making our storage cheaper and making it easier to start and set up an instance every time we want to use it. Finally, we set up a connection to our server that works even with dynamic IP addresses, so we don't pay for a static one.