miikka75/remote-docker

Run your Docker containers for free in the cloud and for unlimited time

Having a place other than your own machine to run Docker containers can be handy. It can speed up the development of your personal projects or take the heavy lifting off your own machine.
Recently I was introduced to Oracle Cloud and its free tier, which is free for an unlimited time.

Check the supported always free services:

  • 2 AMD-based Compute VMs with 1/8 OCPU and 1 GB memory each
  • 4 Arm-based Ampere A1 cores and 24 GB of memory usable as one VM or up to 4 VMs
  • 2 Block Volumes Storage, 200 GB total
  • 10 GB Object Storage - Standard
  • 10 GB Object Storage - Infrequent Access
  • 10 GB Archive Storage
  • Resource Manager: managed Terraform
  • 5 OCI Bastions

As you can see, they are very generous with their free tier, and we can do a lot with it.

Create your remote Docker service

After cloning this repository:

1. Create an Oracle Cloud Infrastructure account (just follow this link).

If you want to access the OCI compute instance using a dy.fi domain name, configure the variables dyfi_username, dyfi_password and dyfi_hostname in variables.tf, as in the sketch below.
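A minimal sketch of how those variables might be declared in variables.tf (the names follow the ones above, but the defaults and descriptions here are assumptions; check the actual file):

# Hypothetical sketch; see variables.tf for the real defaults and descriptions.
variable "dyfi_username" {
  description = "dy.fi account user name (leave empty to skip dynamic DNS updates)"
  type        = string
  default     = ""
}

variable "dyfi_password" {
  description = "dy.fi account password"
  type        = string
  default     = ""
  sensitive   = true
}

variable "dyfi_hostname" {
  description = "dy.fi host name that should point to the instance"
  type        = string
  default     = ""
}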

6. Create remote environment

As you may suspect by now, we are going to use Terraform to provision our Docker service. Basically, Terraform will create one VM instance as big as the free tier allows and leave it ready, so we can SSH into the machine with Docker already installed and start our containers.

Check the variables in the variables.tf file to set the desired compute instance and the user names for SSH access.
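As a rough illustration, the kind of variables you might expect there could look like the sketch below. The shape and limits reflect the Always Free Ampere A1 tier listed above, but the variable names and defaults are assumptions; consult the actual variables.tf.

# Hypothetical sketch only; the real variable names and defaults live in variables.tf.
variable "instance_shape" {
  description = "Compute shape; the Ampere A1 flex shape fits the Always Free limits"
  type        = string
  default     = "VM.Standard.A1.Flex"
}

variable "instance_ocpus" {
  description = "Number of OCPUs (up to 4 are Always Free on A1)"
  type        = number
  default     = 4
}

variable "instance_memory_gb" {
  description = "Memory in GB (up to 24 GB is Always Free on A1)"
  type        = number
  default     = 24
}

variable "ssh_user" {
  description = "User name created on the instance for SSH access"
  type        = string
  default     = "docker"
}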

Execute the following commands:

oci session authenticate
  • Select the server (region) in which you registered
  • Authenticate in the web browser
  • Store the session profile under a name, or use DEFAULT
terraform init
terraform apply

When the terraform apply command is executed, the following files are generated in the root of this repository:

  • ssh_docker-remote
    • Configuration file whose path can be added to your ~/.ssh/config file:
    Include /path/to/remote-docker/ssh_docker-remote
    
  • id_rsa and id_rsa.pub
    • Enable key-based authentication on the remote host. Password-based authentication is disabled.

If you get an error like this:

timeout -- last error: dial tcp IP_ADDRESS:22: connect: connection refused

try adding the id_rsa key generated by Terraform to your ssh-agent (ssh-add id_rsa) and run terraform apply one more time.

If you were able to reach this point, you now have a running VM on which you can run whatever you want (check Oracle's terms and conditions for this service). At the end of the process, Terraform is configured to show the public IP of the newly created instance.

Connect to the remote server using SSH (with the IP or host name) and run docker ps to check that Docker is running as expected.

ssh docker-remote1
docker ps

Permit access from the internet

Although the configuration in networks.tf allows access to/from all ports (via ingress and egress rules), Oracle's virtual machine images block internet access on most ports with a host firewall. Those ports must be opened in the firewall separately. Set the open_ports variable in the variables.tf file to the ports that need to be exposed; cloudinit.tf will create a script with iptables rules to open those ports, which is executed on the created virtual machine.
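For example, open_ports might be set along these lines (the exact variable type and default are assumptions; see variables.tf and cloudinit.tf for the real definitions):

# Hypothetical example; check variables.tf for the actual type and default.
variable "open_ports" {
  description = "TCP ports to open in the instance firewall, in addition to SSH"
  type        = list(number)
  default     = [80, 443, 8080]
}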

Deploying your container

OK, now you want to start running containers there from your local machine, but you also run some containers locally for whatever reason, so we need a way to switch easily between the two environments. The Docker client has contexts built in for exactly that; to check the contexts you have available, run:

docker context ls 

Let's create a context for our remote docker service:

docker context create remote --docker "host=ssh://docker@IP_PROVIDED_BY_TERRAFORM"

... or, if ssh_docker-remote is included in your ~/.ssh/config file ...

docker context create remote --docker "host=ssh://docker@<Host in ssh_docker-remote file>"

Now if you list your contexts again, you should see the newly created context. The next step is to switch to it:

docker context use remote

If no errors were reported, you can now run any command that you would normally run locally. To check that it is working, try deploying an nginx container.

docker run -d -p 8080:80 nginx 

If the remote context can't be used for any reason (which happens in WSL environments), you can tell Docker explicitly which context the container should run in.

docker --context remote run -d -p 80:80 nginx

After this you should be able to open http://IP_PROVIDED_BY_TERRAFORM (or http://dyfi_hostname) in your browser and see the default index.html from nginx (add :8080 to the URL if you mapped port 8080 as in the first example).

Conclusion

This project was shamelessly based on Jérôme Petazzoni's project, where he describes how to build a Kubernetes cluster on Oracle infrastructure using the free tier. His project was very helpful for understanding Terraform and the configuration of Kubernetes. I highly recommend checking it out.

I made this spinoff of his project because, while it is great for learning how to deploy a K8s cluster, it was not that useful for me to have lying around. A plain Docker service is more useful to have when you are developing.

And I can see how that can be helpful, especially when your machine is at its limit.
