This document describes how to run the code in this repo and gives general tips for developing it. See CONTRIBUTING.md for guidelines about commit style, issues, PRs, and more.

🚨 NOTE 🚨: Big rewrites during our Beta stage have made some details below outdated. We intend to properly update the diagram(s) and descriptions when the new architecture stabilizes.

The folders in this repository relate to each other as follows:
```mermaid
graph BT
    classDef default fill:#1f1f1f,stroke-width:0,color:white;
    classDef binary fill:#f25100,font-weight:bolder,stroke-width:0,color:white;
    classDef external fill:#343434,font-style:italic,stroke:#f25100,color:white;

    deployer:::binary
    cargo-shuttle:::binary
    common
    codegen
    e2e
    proto
    provisioner:::binary
    service
    gateway:::binary
    auth:::binary
    user([user service]):::external

    gateway --> common
    gateway -.->|starts instances| deployer
    gateway -->|key| auth
    auth -->|jwt| gateway
    deployer --> proto
    deployer -.->|calls| provisioner
    service ---> common
    deployer --> common
    cargo-shuttle --->|"features = ['builder']"| service
    deployer -->|"features = ['builder']"| service
    cargo-shuttle --> common
    service --> codegen
    proto ---> common
    provisioner --> proto
    e2e -.->|starts up| gateway
    e2e -.->|starts up| auth
    e2e -.->|calls| cargo-shuttle
    user -->|"features = ['codegen']"| service
```
- `cargo-shuttle` is the CLI used by users to initialize, deploy and manage their projects and services on Shuttle.
- `gateway` starts and manages instances of `deployer`. It proxies commands from the user sent via the CLI on port 8001 and traffic on port 8000 to the correct instance of `deployer`.
- `auth` is an authentication service that creates and manages users. In addition to that, requests to the `gateway` that contain an api-key or cookie will be proxied to the `auth` service, where it will be converted to a JWT for authorization between internal services (like a `deployer` requesting a database from `provisioner`).
- `deployer` is a service that runs in its own docker container, one per user project. It manages a project's deployments and state.
- `provisioner` is a service used for requesting databases and other resources, using a gRPC API.
- `admin` is a simple CLI used for admin tasks like reviving and stopping projects, as well as requesting and renewing SSL certificates through the acme client in the `gateway`.

- `common` contains shared models and functions used by the other libraries and binaries.
- `codegen` contains our proc-macro code which gets exposed to user services from `runtime`. The redirect through `runtime` is to make it available under the prettier name of `shuttle_runtime::main`.
- `runtime` contains the `alpha` runtime, which embeds a gRPC server and a `Loader` in a service with the `shuttle_runtime::main` macro. The gRPC server receives commands from `deployer` like `start` and `stop`. The `Loader` sets up a tracing subscriber and provisions resources for the user's service. The `runtime` crate also contains the `shuttle-next` binary, which is a standalone runtime binary that is started by the `deployer` or the `cargo-shuttle` CLI, responsible for loading and starting `shuttle-next` services.
- `service` is where our special `Service` trait is defined. Anything implementing this `Service` can be loaded by the `deployer` and the local runner in `cargo-shuttle`. The `service` library also defines the `ResourceBuilder` and `Factory` traits which are used in our codegen to provision resources. The `service` library also contains the utilities we use for compiling user crates with `cargo`.
- `proto` contains the gRPC server and client definitions to allow `deployer` to communicate with `provisioner`, and to allow the `deployer` and `cargo-shuttle` CLI to communicate with the `alpha` and `shuttle-next` runtimes.
- `resources` contains various implementations of `ResourceBuilder`, which are consumed in the `codegen` to provision resources.
- `services` contains implementations of `Service` for common Rust web frameworks. Anything implementing `Service` can be deployed by Shuttle.
- `e2e` just contains tests which start up the `deployer` in a container and then deploy services to it using `cargo-shuttle`.

Lastly, the `user service` is not a folder in this repository, but is the user service that will be deployed by `deployer`.
You can use Docker and docker-compose to test Shuttle locally during development. See the Docker install and docker-compose install instructions if you do not have them installed already.
Note for Windows: The current Makefile does not work on Windows systems by itself. If you want to build the local environment on Windows, you could use Windows Subsystem for Linux. Additional Windows considerations are listed at the bottom of this page.

Note for Linux: When building on Linux systems, if the error `unknown flag: --build-arg` is received, install the `docker-buildx` package using the package management tool for your particular system.
Clone the Shuttle repository (or your fork):
```bash
git clone git@github.com:shuttle-hq/shuttle.git
cd shuttle
```
Note: We need the git tags for the local development workflow, but they may not be included when you clone the repository. To make sure you have them, run `git fetch upstream --tags`, where `upstream` is the name of the Shuttle remote repository.
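If you cloned your own fork, the Shuttle remote may not exist yet. A quick sketch of adding it under the conventional name `upstream` and fetching the tags:

```bash
git remote add upstream https://github.com/shuttle-hq/shuttle.git
git fetch upstream --tags
```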
The Shuttle examples are linked to the main repo as a git submodule. To initialize it, run the following commands:
```bash
git submodule init
git submodule update
```
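If you prefer, the two submodule commands can be combined into one:

```bash
git submodule update --init
```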
You should now be ready to set up a local environment to test code changes to core Shuttle packages as follows:
From the root of the Shuttle repo, build the required images with:
```bash
USE_PANAMAX=disable make images
```
Note: The stack uses panamax by default to mirror crates.io content. We do this in order to avoid overloading upstream mirrors and hitting rate limits. After syncing the cache, expect to see the panamax volume take about 100 GiB of space. This may not be desirable for local testing. Therefore `USE_PANAMAX=disable make images` is recommended.
The images are built with cargo-chef and therefore support incremental builds (most of the time), so they will be much faster to re-build after an incremental change to your code, should you wish to deploy it locally straight away.
You can now start a local deployment of Shuttle and the required containers with:
```bash
USE_PANAMAX=disable make up
```
Note: `make up` can also be run with `SHUTTLE_DETACH=disable`, which means docker-compose will not be run with `--detach`. This is often desirable for local testing.

Note: `make up` can also be run with `HONEYCOMB_API_KEY=<api_key>` if you have a honeycomb.io account and want to test the instrumentation of the services. This is mostly used only by the internal team.

Note: Other useful commands can be found within the Makefile.
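For example, to bring the stack up attached to your terminal and without the panamax mirror, you can combine the variables described above:

```bash
# run docker-compose in the foreground, skipping the panamax mirror
USE_PANAMAX=disable SHUTTLE_DETACH=disable make up
```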
The API is now accessible on `localhost:8000` (for app proxies) and `localhost:8001` (for the control plane). When running `cargo run -p cargo-shuttle` (in a debug build), the CLI will point itself to `localhost` for its API calls.
In order to test local changes to the library crates, you may want to add patches to a `.cargo/config.toml` file. (See [Overriding Dependencies](https://doc.rust-lang.org/cargo/reference/overriding-dependencies.html) for more.)
The simplest way to generate this file is:
```bash
./scripts/apply-patches
```
See the files `apply-patches.sh` and `patches.toml` for how it works.
Note: cargo and rust-analyzer will add `[[patch.unused]]` lines at the bottom of Cargo.lock when patches are applied. These should not be included in commits/PRs. The easiest way to get rid of them is to comment out all the patch lines in `.cargo/config.toml`, and refresh cargo/r-a.
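To check whether any `[[patch.unused]]` lines slipped into the lockfile before committing, a plain grep is enough (nothing Shuttle-specific):

```bash
# lists any leftover [[patch.unused]] entries in Cargo.lock
grep -n "patch.unused" Cargo.lock
```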
Before we can login to our local instance of Shuttle, we need to create a user.
The following command inserts a user into the `auth` state with admin privileges:
```bash
# the --key needs to be 16 alphanumeric characters
docker compose -f docker-compose.rendered.yml -p shuttle-dev exec auth /usr/local/bin/shuttle-auth --db-connection-uri=postgres://postgres:postgres@control-db init-admin --name admin --key dh9z58jttoes3qvt
```
Note: if you have done this already for this container you will get a "UNIQUE constraint failed" error, you can ignore this.
Log in to the Shuttle service in a new terminal window from the root of the Shuttle directory:
```bash
# the --api-key should be the same one you inserted in the auth state
cargo run -p cargo-shuttle -- login --api-key dh9z58jttoes3qvt
```
Note: The above commands, along with other useful scripts, can be found in `scripts`. The above lines can instead be done with `source scripts/local-admin.sh`.
Finally, before `gateway` will be able to work with some projects, we need to create a user for it. The following command inserts a gateway user into the `auth` state with deployer privileges:
```bash
docker compose -f docker-compose.rendered.yml -p shuttle-dev exec auth /usr/local/bin/shuttle-auth --db-connection-uri=postgres://postgres:postgres@control-db init-deployer --name gateway --key gateway4deployes
```
Create a new project based on one of the examples. This will prompt your local gateway to start a deployer container. Then, deploy it.
```bash
cargo run -p cargo-shuttle -- --wd examples/rocket/hello-world project start
cargo run -p cargo-shuttle -- --wd examples/rocket/hello-world deploy
```
Test if the deployment is working:
```bash
# the Host header should match the URI from the deploy output
curl -H "Host: hello-world-rocket-app.unstable.shuttleapp.rs" localhost:8000
#              ^^^^^^^^^^^^^^^^^^^^^^ this will be the project name
```
View logs from the current deployment:
```bash
# append `--follow` to this command for a live feed of logs
cargo run -p cargo-shuttle -- --wd examples/rocket/hello-world logs
```
The steps outlined above start all the services used by Shuttle locally (i.e. both `gateway` and `deployer`). However, sometimes you will want to quickly test changes to `deployer` only. To do this, replace `make up` with the following:
```bash
# if you didn't do this already, make the images
USE_PANAMAX=disable make images

# then generate the local docker-compose file
make docker-compose.rendered.yml

# then run
docker compose -f docker-compose.rendered.yml up provisioner resource-recorder logger otel-collector builder
```
This starts the provisioner and the auth service, while preventing `gateway` from starting up.

Make sure an admin user is inserted into `auth` and that the key is used by `cargo-shuttle`. See above.
We're now ready to start a local run of the deployer:
```bash
OTLP_ADDRESS=http://127.0.0.1:4317 cargo run -p shuttle-deployer -- --provisioner-address http://localhost:3000 --auth-uri http://localhost:8008 --resource-recorder http://localhost:8007 --builder-uri http://localhost:8009 --logger-uri http://localhost:8010 --proxy-fqdn local.rs --admin-secret dh9z58jttoes3qvt --local --project-id "01H7WHDK23XYGSESCBG6XWJ1V0" --project <name>
```
The `<name>` needs to match the name of the project that will be deployed to this deployer. This is the `Cargo.toml` or `Shuttle.toml` name for the project.
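To double-check which name the deployer expects, you can read it straight from the project's manifest, e.g. for the rocket hello-world example used above (a quick sketch, assuming no name override in a Shuttle.toml):

```bash
# the package name in Cargo.toml is the project name, unless Shuttle.toml overrides it
grep -m1 '^name' examples/rocket/hello-world/Cargo.toml
```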
Now that your local deployer is running, you can run commands against it using the cargo-shuttle CLI. It needs to have the same project name as the one you submitted when starting the deployer above.
```bash
cargo run -p cargo-shuttle -- --wd <path> --name <name> deploy
```
If using Docker Desktop on Linux, you might find adding this to your shell config useful to make `bollard` find the Docker socket:
```bash
if which docker > /dev/null && [ $(docker context show) = "desktop-linux" ]; then
    export DOCKER_HOST="unix://$HOME/.docker/desktop/docker.sock"
fi
```
If you want to use Podman instead of Docker, you can configure the build process with environment variables.
Use Podman for building container images by setting `DOCKER_BUILD`:

```bash
export DOCKER_BUILD="podman build --network host"
```
The Shuttle containers expect access to a Docker-compatible API over a socket. Expose a rootless Podman socket either

- with systemd, if your system supports it:

  ```bash
  systemctl start --user podman.service
  ```

- or by running the server directly:

  ```bash
  podman system service --time=0 unix://$XDG_RUNTIME_DIR/podman.sock
  ```

Then set `DOCKER_SOCK` to the absolute path of the socket (no protocol prefix):

```bash
export DOCKER_SOCK=$(podman system info -f "{{.Host.RemoteSocket.Path}}")
```
Finally, configure Docker Compose. You can either

- configure Docker Compose to use the Podman socket by setting `DOCKER_HOST` (including the `unix://` protocol prefix):

  ```bash
  export DOCKER_HOST=unix://$(podman system info -f "{{.Host.RemoteSocket.Path}}")
  ```

- or install Podman Compose and use it by setting `DOCKER_COMPOSE`:

  ```bash
  export DOCKER_COMPOSE=podman-compose
  ```
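Putting the Podman pieces together, a local run might then look something like this (a sketch; it assumes the Makefile and docker-compose pick up the variables exactly as described above):

```bash
export DOCKER_BUILD="podman build --network host"
export DOCKER_SOCK=$(podman system info -f "{{.Host.RemoteSocket.Path}}")
export DOCKER_HOST=unix://$DOCKER_SOCK
USE_PANAMAX=disable make up
```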
If you are using `nftables`, even with `iptables-nft`, it may be necessary to install and configure the nftables CNI plugins.
We use Stripe to start Pro subscriptions and verify them with a Stripe client that needs a secret key. The `STRIPE_SECRET_KEY` environment variable should be set to test upgrading a user to the Pro tier, or to use a Pro tier feature with the cargo-shuttle CLI. In a local environment, that requires setting up a Stripe account and generating a test API key. Auth can still be initialised and used without a Stripe secret key, but it will fail when retrieving a user and when verifying the subscription's validity.
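For local testing this might look like the sketch below; the key is a placeholder for your own Stripe test key, and we assume the variable is exported in the shell that starts the stack so it reaches the auth service:

```bash
# hypothetical placeholder - generate a test key in your own Stripe dashboard
export STRIPE_SECRET_KEY=sk_test_your_test_key_here
make up
```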
Shuttle has reasonable test coverage - and we are working on improving this every day. We encourage PRs to come with tests. If you're not sure about what a test should look like, feel free to get in touch.
To run the unit tests for a specific crate, from the root of the repository run:
```bash
# replace <crate-name> with the name of the crate to test, e.g. `shuttle-common`
cargo test -p <crate-name> --all-features --lib -- --nocapture
```
To run the integration tests for a specific crate (if it has any), from the root of the repository run:
```bash
# replace <crate-name> with the name of the crate to test, e.g. `cargo-shuttle`
cargo test -p <crate-name> --all-features --test '*' -- --nocapture

# To streamline and customize the integration tests for CI, bash scripts with these commands and
# any crate-specific commands can be run with:
./<folder>/integration.sh
# and/or
./<folder>/integration_docker.sh
# Tests that need to start docker containers are separated for CI efficiency, and thus use scripts with the `_docker` suffix.
```
To run the end-to-end (e2e) tests, from the root of the repository run:
```bash
make test
```
Note: Running all the end-to-end tests may take a long time, so it is recommended to run individual tests shipped as part of each crate in the workspace first.
Currently, if you try to use `make images` on Windows, you may find that the shell files cannot be read by Bash/WSL. This is because Windows may have pulled the files in CRLF format rather than LF, which causes problems with Bash; to run the commands, Linux needs the files in LF format.
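To see which files in your checkout actually have CRLF endings, git can report this directly:

```bash
# lists files whose working-tree copy uses CRLF line endings
git ls-files --eol | grep "w/crlf"
```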
Thankfully, we can fix this problem by simply using the `git config core.autocrlf` command to change how Git handles line endings. It takes a single argument:
```bash
git config --global core.autocrlf input
```
This should allow you to run `make images` and other Make commands with no issues.
If you need to change it back for whatever reason, you can just change the last argument from 'input' to 'true' like so:
```bash
git config --global core.autocrlf true
```
After you run this command, you should be able to checkout projects that are maintained using CRLF (Windows) again.