Use Case: PRT with Digital Ocean Cloud
Digital Ocean is a provider of cheap cloud computing instances, making it an ideal candidate to host both Plex Media Server (PMS) and the Plex Remote Transcoder (PRT), with near-on-demand slave transcoding instances. Outlined below is the specific path we used to set up PMS/PRT in Digital Ocean. It is by no means the only way to implement this type of solution, and some thoughts on improving/modifying it can be found at the end. For the impatient, you can skip to the list of scripts/API calls below and adjust them for your own use.
It is assumed the reader of this document is already familiar with:
- Plex Media Server
- Plex Remote Transcoder
- Digital Ocean
- Working within a Linux scripting environment and making API/web-service calls
In this particular case we have a Plex Media Server running in an instance (droplet) in Digital Ocean with PRT installed as a master. We then create a new droplet to be used as a single 'slave' instance for PRT. This slave is the real key, because it becomes our "master blueprint" for spinning up as many slaves as we want - in theory on demand (or near on demand, as you'll see below). Once you have a slave configured and working, you create a "snapshot" of this droplet. From there it's just a matter of monitoring load on your master PMS/PRT system, using the Digital Ocean API to spin up a slave when needed, and adding it to your PRT configuration as an available slave. Conversely, you can remove slave droplets as needed to reduce your overall cost.
In working with Digital Ocean and spinning up slave droplets for PMS/PRT, we found that it really wasn't possible to spin up a droplet truly "on demand" - that is, firing one up at the moment PMS needs a transcode session. It takes Digital Ocean a few minutes (3-5 at times) to clone your snapshot and bootstrap it, and in our case, where the slave droplet also needs to mount access to media (Amazon Cloud Drive, outside the scope of this document), it took several minutes to get a slave instance truly up and running - far longer than a Plex user would be willing to wait for media to start playing. What we do instead is ensure that we always have MORE slaves in operation than we need in real time - a buffer of 2 was usually sufficient - so that an idle slave is always available for an incoming transcode session, guaranteeing that each transcode session receives a dedicated slave droplet.
Spin up a droplet in Digital Ocean and get it working with your master PMS/PRT system as a slave. In our implementation we made use of Digital Ocean's private networking and always made sure our master/slaves were in the same datacenter to minimize latency and maximize privacy. When creating the slave, try to set up the bare minimum server footprint possible. In our case a basic $10/month droplet (1 GB RAM) was sufficient for slaves, and we really didn't have to install much beyond NFS support and whatever else you might need package-wise for your particular case. We use Ubuntu, but any flavor of Linux should work.
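As a rough sketch of that minimal footprint, the slave prototype setup on Ubuntu amounted to little more than the following (the package list is an assumption for a typical NFS-mounted setup; install PRT itself per its own instructions):
#!/bin/bash
# minimal slave prototype setup (Ubuntu) - adjust packages for your own media mounts
apt-get update
apt-get install -y nfs-common   # NFS client support for mounting media/metadata from the master
# then install PRT on the slave per the PRT project's README, plus anything your mounts require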
We set up the slave droplet to download a 'slave_setup' script from our master system at boot, and run it. Curl or wget should work fine here, or however you want to "pull" this from your master system. This allowed a central script to be maintained that any slave coming online would use to bootstrap itself, instead of burying that setup in the slave droplet snapshot itself. It also makes it easy to change slave startup behavior without having to recreate your slave snapshot.
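For example, a single root crontab entry on the slave can handle this (the master's private address and the script location are placeholders - serve the script however you prefer):
# run once at boot: give networking a moment, then pull and run the central setup script
@reboot sleep 30 && curl -s http://MASTER_PRIVATE_IP/slave_setup.sh | bash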
PRT uses SSH with public keys to access slaves and run commands on them. Set up a private/public SSH key pair and configure your slave instance to allow SSH access with this key pair. Then use the public key to create a saved SSH key in Digital Ocean. The cool bit here is that when you later spin up droplets on demand for use as slaves, they can be brought up with this SSH key already in place - so your new droplet can be accessed from the master via SSH without a password. Full instructions on setting this up on your droplet, and on saving the SSH key in your Digital Ocean account, are here: https://www.digitalocean.com/community/tutorials/how-to-use-ssh-keys-with-digitalocean-droplets
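On the master, the key setup boils down to something like this (we assume root is the SSH user, matching the prt add_host call later; the slave address is a placeholder):
#!/bin/bash
# on the master: generate a key pair with no passphrase for PRT to use
ssh-keygen -t rsa -b 4096 -N "" -f /root/.ssh/id_rsa
# install the public key on the slave prototype (this is the key you'll also save in Digital Ocean)
ssh-copy-id root@SLAVE_PRIVATE_IP
# verify passwordless login works before snapshotting the slave
ssh root@SLAVE_PRIVATE_IP hostname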
Assuming you have a working slave prototype (i.e., after rebooting it comes back up ready to take on transcode requests), shut down the prototype and, in the Digital Ocean console, create a 'snapshot' with a name of your choosing (e.g., my_awesome_prt_slave). You'll want to acquire the Digital Ocean "ID" of this snapshot, as you'll refer to it when spinning up a slave. You'll also need the "ID" of the SSH key pair you've saved at Digital Ocean. Both of these IDs are used when spinning up a new slave from the Digital Ocean API - scripts for querying the API to get them can be found below.
The scripts we used here maintain a 'flat file' of our slaves in operation - just CSV-style records containing the ID of the slave, the private IP address Digital Ocean assigned to it, and the hostname we allocated to it.
What follows is a number of scripts we call explicitly, or from within other scripts to monitor load, spin up new slave droplets, and remove them when no longer needed.
You'll want to first obtain a Read/Write API key in your Digital Ocean account for use in these scripts. Instructions can be found here: https://cloud.digitalocean.com/settings/api/tokens
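Once you have a token, a quick sanity check (substituting your token for YOUR_API_KEY) is to query the account endpoint - if the token is valid you'll get your account details back in JSON:
#!/bin/bash
# verify the token works before wiring it into the scripts below
curl -s -X GET -H "Content-Type: application/json" -H "Authorization: Bearer YOUR_API_KEY" "https://api.digitalocean.com/v2/account"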
This command will return (in JSON) your list of snapshots (after you've created one). Use this output to find the "ID" of the snapshot you created above. This ID is used when you spin up a new slave.
#!/bin/bash
curl -s -X GET -H "Content-Type: application/json" -H "Authorization: Bearer YOUR_API_KEY" "https://api.digitalocean.com/v2/images/?private=true"
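If you have jq installed (the scripts below use it anyway), a filter like this trims the response down to just the ID and name of each image - handy for spotting your snapshot's ID:
#!/bin/bash
# same call as above, filtered down to image IDs and names
curl -s -X GET -H "Content-Type: application/json" -H "Authorization: Bearer YOUR_API_KEY" "https://api.digitalocean.com/v2/images/?private=true" | jq '.images[] | {id, name}'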
This command will return (in JSON) your list of saved/created ssh key pairs. You'll refer to this ID when spinning up a new slave droplet so that it's created with this key.
#!/bin/bash
curl -s -X GET -H "Content-Type: application/json" -H "Authorization: Bearer YOUR_API_KEY" "https://api.digitalocean.com/v2/account/keys"
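A similar filter works on the key listing:
#!/bin/bash
# same call as above, filtered down to SSH key IDs and names
curl -s -X GET -H "Content-Type: application/json" -H "Authorization: Bearer YOUR_API_KEY" "https://api.digitalocean.com/v2/account/keys" | jq '.ssh_keys[] | {id, name}'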
This is just an example of what our flat-file list of slave droplets in operation looks like. Scripts that create or destroy a slave maintain this list. It's in the form "ID,IP ADDRESS,HOSTNAME":
123456,10.1.1.5,my_slave_droplet1
123457,10.1.1.6,my_slave_droplet2
123458,10.1.1.7,my_slave_droplet3
This is a bash script, run at regular intervals by cron, that determines whether a new slave is needed in the pool, based on the output of "prt sessions" compared to the number of slaves in our list. See the notes below on timing. We maintain a buffer of 2, but this could be adjusted to fit your own needs. We name and hostname our slaves by simply incrementing an integer at the end of the name/hostname. The script also makes use of the command-line tool "jq" to filter JSON output in bash - a handy tool to have installed if you want to pull data out of your Digital Ocean API responses. When a slave is added, it is created (that script follows below), its details are added to the slave_list flat file and the hosts file, and it is then added to PRT.

We capture the ID of the created droplet, then wait a few seconds before querying Digital Ocean for the IP it assigned in the private pool. The extraction of the assigned IP is pretty coarse here and could certainly be improved: we're scraping the 10.136 private address from the output, which is specific to the nyc1 region in D.O. Your results might differ, so be aware of that. Create a droplet and then query that droplet's info with the show_droplet script to see what the output looks like. The ssh-keygen command at the end simply cleans up any cached keys for slaves previously created with the same name. Pretty sloppy, and you could probably configure ssh/sshd to ignore this anyway, but it worked for us.
#!/bin/bash
# check_add.sh - run from cron; adds a slave droplet when the idle buffer drops below 2
sessions=`prt sessions | grep -c Host`
slavecount=`cat slave_list | wc -l`
echo "Sessions: $sessions"
echo "Slaves : $slavecount"
if [ `expr $slavecount - $sessions` -lt 2 ]; then
    echo "Adding a slave..."
    nextslave=`expr $slavecount + 2`
    echo "Nextslave: $nextslave"
    # create the droplet and capture its ID
    id=`./create_droplet.sh my_slave_droplet$nextslave`
    # give Digital Ocean a moment to assign the private IP
    sleep 10
    # coarse extraction of the private (10.136.x.x) address from the droplet details
    ip=`./show_droplet.sh $id | jq '.droplet.networks.v4|map([.ip_address])' | grep 10.136 | cut -d '"' -f 2`
    echo "IP : $ip"
    # record the new slave, make it resolvable, and hand it to PRT
    echo "$id,$ip,my_slave_droplet$nextslave" >> slave_list
    echo "$ip my_slave_droplet$nextslave" >> /etc/hosts
    echo "y" | prt add_host my_slave_droplet$nextslave 22 root
    # clear any stale cached host key from a previous slave of the same name
    ssh-keygen -f "/root/.ssh/known_hosts" -R my_slave_droplet$nextslave
fi
This is a bash script, run at regular intervals by cron, that will remove unnecessary slaves in order to maintain our buffer of 2. See the notes below on timing. It first checks, via the "prt sessions" command, whether the proposed slave to remove is busy transcoding. If not, it's removed. If it is, the script simply exits quietly and waits for the next run.
#!/bin/bash
# check_drop.sh - run from cron; removes the most recently added slave when there is excess capacity
sessions=`prt sessions | grep -c Host`
slavecount=`cat slave_list | wc -l`
echo "Sessions: $sessions"
echo "Slaves : $slavecount"
if [ `expr $slavecount - $sessions` -gt 2 ] && [ $slavecount -gt 3 ]; then
    # REMOVE A SLAVE - pick the last one in the list
    id=`cat slave_list | tail -1 | cut -d "," -f 1`
    host=`cat slave_list | tail -1 | cut -d "," -f 3`
    # only remove it if it isn't currently handling a transcode session
    if [ `prt sessions | grep -c $host` -eq 0 ]; then
        echo "Removing a slave..."
        ./destroy_droplet.sh $id
        prt remove_host $host
        # strip the slave from /etc/hosts and slave_list
        cat /etc/hosts | grep -v $host > /tmp/hosts.tmp
        mv /tmp/hosts.tmp /etc/hosts
        cat slave_list | grep -v $host > /tmp/slave.tmp
        mv /tmp/slave.tmp slave_list
    else
        echo "Looks like $host is busy right now...."
    fi
    exit 0
fi
This is a bash script that takes a "name" for a droplet as input and calls the Digital Ocean API. The output of the script is the ID of the created droplet, which is captured into a variable in the check_add script. Note that we create the droplet in the NYC1 region of Digital Ocean, but this could be replaced with whichever region you prefer. This script also requires the ID of the slave snapshot you created earlier and the ID of your saved SSH key; it creates the droplet with private networking and asks for a "1gb" size droplet.
#!/bin/bash
# create_droplet.sh - creates a 1gb droplet named $1 from our slave snapshot and prints its ID
result=`curl -s -X POST -H "Content-Type: application/json" -H "Authorization: Bearer YOUR_API_KEY" -d "{\"name\":\"$1\",\"region\":\"nyc1\",\"size\":\"1gb\",\"image\":\"YOUR_SNAPSHOT_ID_HERE\",\"ssh_keys\":[\"YOUR_SSH_KEYS_ID\"],\"backups\":false,\"ipv6\":false,\"user_data\":null,\"private_networking\":true}" "https://api.digitalocean.com/v2/droplets"`
id=`echo $result | jq '.droplet.id'`
echo $id
This bash script is called by the add script above to retrieve the newly created droplet's details from D.O. - primarily so we can find out what IP address was assigned to the droplet, record it, and configure things appropriately. The output is in JSON, and the jq command-line tool is instrumental here for parsing it.
#!/bin/bash
# show_droplet.sh - returns the JSON details for the droplet whose ID is passed as $1
curl -s -X GET -H "Content-Type: application/json" -H "Authorization: Bearer YOUR_API_KEY" "https://api.digitalocean.com/v2/droplets/$1"
This script is a single curl command that will take a droplet ID as input and destroy it in Digital Ocean. It is called by the check_drop.sh script above.
#!/bin/bash
# destroy_droplet.sh - destroys the droplet whose ID is passed as $1
curl -s -X DELETE -H "Content-Type: application/json" -H "Authorization: Bearer YOUR_API_KEY" "https://api.digitalocean.com/v2/droplets/$1"
-
These scripts are pretty rough and inelegant. There's lots of room for cleanup, efficiency, and improvement here, but the results were great for our implementation.
-
In our implementation, we run the check_add script every 4 minutes. This gives us the opportunity to add slaves pretty quickly as demand rises, maintaining that 2-slave buffer. For our PMS user load, this is sufficient to keep a slave or two handy as load increases. In the odd case that sessions start coming in quickly, the worst case is we end up with a couple of sessions on one slave for the moment. By contrast, we set the check_drop script to run only every 35 minutes. The reasoning is that while we can add slaves pretty quickly, we destroy them at a slower pace: the number of sessions in play has to stay lower for a while before we bleed off capacity, so we don't run into a hysteresis problem where slaves are constantly being created and destroyed in a short period just because users are jumping in and out. We also run the add script on an "even" minute and the drop script on an "odd" minute. This keeps the add/drop from running at the same time, even though the math in the scripts should prevent an add and a drop from happening simultaneously.
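One possible crontab on the master that approximates this schedule (the script paths and log locations are placeholders, and the 35-minute spacing can only be approximated with cron):
# add slaves: every 4 minutes, which always lands on an even minute
*/4 * * * * cd /root/prt-scripts && ./check_add.sh >> /var/log/check_add.log 2>&1
# drop slaves: roughly every 35 minutes, always on an odd minute
5,41 * * * * cd /root/prt-scripts && ./check_drop.sh >> /var/log/check_drop.log 2>&1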
-
It probably isn't necessary to give your slaves a hostname. They could be added to PRT as simply an IP address, eliminating the need to also maintain the IP/hostname mappings in the hosts file on the master. When creating a droplet you have to give it a name, and we found it convenient to name the droplet the same as the hostname.
-
Our slaves run on a $10/mo, 1 GB RAM droplet. This seemed to be sized appropriately, but I'd like to see if transcode sessions would run well on $5/mo droplet slaves, which would shave costs even more. D.O. allocates only half the monthly network transfer for that size, but that might not matter, as I believe this figure gets reset each time you spin up the droplet.
-
We noticed that the Plex transcoder needs access to the Plex Media Server metadata for your library. Without it, transcode sessions will fail, and it won't be immediately clear why - you'll just get the "media is not available" message from Plex. Most folks will have this metadata in the "default" location, which is a path that setting up PRT requires anyway, so it becomes a moot point. In our case, however, our metadata collection was very large and had to be configured in Plex to use other mounted storage. You'll have to make this same location available, via the same path, to your slaves. I.e., if you have your metadata mounted on your master at /plex_data, then /plex_data must be set up on your slaves as well - in whatever way you want to do that (NFS back to your master is as good a method as any).
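For the NFS route, a minimal sketch would look something like this (the export subnet, read-only option, and master address are assumptions - adjust for your own private network):
# on the master: export the metadata path to the private network
echo "/plex_data 10.136.0.0/16(ro,no_subtree_check)" >> /etc/exports
exportfs -ra
# on each slave (e.g., from slave_setup): mount the same path from the master
mkdir -p /plex_data
mount -t nfs MASTER_PRIVATE_IP:/plex_data /plex_data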
-
We use a one-slave-per-transcode-session philosophy here. However, PRT naturally supports multiple transcode sessions running on a slave, as it simply assigns the task to the least-busy slave available. In our case that was always a fresh slave, but it would be interesting to know whether it's ultimately cheaper to run heavier, more expensive slave droplets and allow them to run multiple transcode sessions.
-
The method here relies on some basic API calls bringing up a pre-configured slave/droplet image, across a number of scripts. It would be pretty easy to refine this to run the whole show from a single script, run at intervals, that handles both creation/deletion of slave droplets and does more configuration of the slave at creation time, using a pre-existing Digital Ocean base image for an OS, rather than using a "snapshot". Then you wouldn't be charged for the storage of that snapshot blueprint, though it's a fairly trivial cost.
-
Your results/implementation may vary, but this writeup gives a couple of good examples of calling the D.O. API for dynamic droplet creation.