Deploying multiple containers on the same host #44
I like the idea of having the role be part of the container name 👍. Need to ensure "web" is the default when there's nothing else.

Two more things:

You shouldn't have multiple containers of the same role running on the same server. Don't want to support that. Concurrency needs to come from each of those containers internally (WEB_CONCURRENCY for puma, for example). Interesting point re: destinations. I do like the ability to start everything on a single box, yes. So respecting destinations sounds good. We can just add that to the name for the destinations, and nothing for the baseline 👍
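To make the "concurrency from inside the container" point concrete: puma forks as many workers as `WEB_CONCURRENCY` says, so one container can saturate the box without duplicate containers. A deploy.yml fragment along these lines would do it (the value 4 is an arbitrary illustration, not from this thread):

```yaml
# Illustrative fragment: scale puma workers inside the single web container
# instead of running several containers of the same role on one host.
env:
  WEB_CONCURRENCY: 4
```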
Just a thought here. Shouldn't it be:

```yaml
app: foo
image: 37s/hey
services:
  web: # foo-web service
    servers:
      - 1.2.3.4
  job: # foo-job service
    servers:
      - 1.2.3.4
    cmd: bundle exec sidekiq
```

and for a single service (no more roles):

```yaml
# foo-web service
app: foo
image: 37s/hey
servers:
  - 1.2.3.4
```
They're the same service, because they're running the same image. If they run a different image, they're an accessory.

So we can't run web and job on the same host?

Great, so if in use, the role (web, job, …) and destination (west, staging, …) would be part of the name. Should we also support manually specifying an option
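To pin down the naming rule being discussed, here is a minimal Ruby sketch — the method name and argument order are hypothetical illustrations, not mrsk's actual API. Role defaults to "web", and the destination segment simply drops out of the name for the baseline:

```ruby
# Hypothetical sketch of the proposed container naming scheme (not mrsk's real API).
# Role defaults to "web"; a nil destination is omitted from the name.
def container_name(service, version, role: "web", destination: nil)
  [service, role, destination, version].compact.join("-")
end

container_name("foo", "abc123")                          # => "foo-web-abc123"
container_name("foo", "abc123", role: "job")             # => "foo-job-abc123"
container_name("foo", "abc123", destination: "staging")  # => "foo-web-staging-abc123"
```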
Don't want to add configuration flags until we have a proven case that details how we can't do without. Every configuration point is cognitive overhead. It should be there because it would suck if it wasn't, not just because we can.

I want to host my staging server on my local Raspberry Pi and would like to run sidekiq and web on the same host.

Yeah, I bet that the new naming becomes:

I like the naming

@gshiva: Traefik takes care of this. As long as you're using

Also, when you say the naming becomes

@ankitsinghaniyaz: It's not implemented right now.
Supporting running everything in the same box would be great. To give an example, I currently run a pet-project app in a very cheap VM using dokku: web, jobs, postgres, and redis. For 5 USD a month I can keep running this app, and others if I need to, in this same box. I'd love to do that with mrsk, since the declarative nature of it is fantastic 👌
Yes, happy to see it. I've been thinking about how to do it. Will require some rejigging. But if anyone wants to take it on, have a swing. Just use a scalpel, not a butcher's knife 😄
I would like to use the same docker image (which implies the same Dockerfile) to be deployed to different environments. The flow would be similar to `service.v3 -> "dev env:8000"`, `service.v2 -> "staging env:8080"`, `service.v3 -> "demo env:9080"`. Ideally the port would be dynamically allocated and returned to the user requesting the deploy.

@gshiva: You could have different destinations with different Traefik
Is this the general direction we want to go with roles?

Show PoC diff

```diff
diff --git lib/mrsk/commands/app.rb lib/mrsk/commands/app.rb
index 4b1de91..8355e13 100644
--- lib/mrsk/commands/app.rb
+++ lib/mrsk/commands/app.rb
@@ -1,144 +1,144 @@
 class Mrsk::Commands::App < Mrsk::Commands::Base
   def run(role: :web)
     role = config.role(role)

     docker :run,
       "--detach",
       "--restart unless-stopped",
       "--log-opt", "max-size=#{MAX_LOG_SIZE}",
-      "--name", service_with_version,
+      "--name", service_with_version_and_role(nil, role.name),
       *role.env_args,
       *config.volume_args,
       *role.label_args,
       config.absolute_image,
       role.cmd
   end

-  def start
-    docker :start, service_with_version
+  def start(role: :web)
+    docker :start, service_with_version_and_role(nil, role)
   end

-  def stop(version: nil)
+  def stop(version: nil, role: :web)
     pipe \
-      version ? container_id_for_version(version) : current_container_id,
+      version ? container_id_for_version_and_role(version, role) : current_container_id(role),
       xargs(docker(:stop))
   end

   def info
     docker :ps, *service_filter
   end

   def logs(since: nil, lines: nil, grep: nil)
     pipe \
       current_container_id,
       "xargs docker logs#{" --since #{since}" if since}#{" --tail #{lines}" if lines} 2>&1",
       ("grep '#{grep}'" if grep)
   end

   def follow_logs(host:, grep: nil)
     run_over_ssh \
       pipe(
         current_container_id,
         "xargs docker logs --timestamps --tail 10 --follow 2>&1",
         (%(grep "#{grep}") if grep)
       ),
       host: host
   end

   def execute_in_existing_container(*command, interactive: false)
     docker :exec,
       ("-it" if interactive),
       config.service_with_version,
       *command
   end

   def execute_in_new_container(*command, interactive: false)
     docker :run,
       ("-it" if interactive),
       "--rm",
       *config.env_args,
       *config.volume_args,
       config.absolute_image,
       *command
   end

   def execute_in_existing_container_over_ssh(*command, host:)
     run_over_ssh execute_in_existing_container(*command, interactive: true), host: host
   end

   def execute_in_new_container_over_ssh(*command, host:)
     run_over_ssh execute_in_new_container(*command, interactive: true), host: host
   end

-  def current_container_id
-    docker :ps, "--quiet", *service_filter
+  def current_container_id(role)
+    docker :ps, "--quiet", *service_filter(role)
   end

   def current_running_version
     # FIXME: Find more graceful way to extract the version from "app-version" than using sed and tail!
     pipe \
       docker(:ps, "--filter", "label=service=#{config.service}", "--format", '"{{.Names}}"'),
       %(sed 's/-/\\n/g'),
       "tail -n 1"
   end

   def most_recent_version_from_available_images
     pipe \
       docker(:image, :ls, "--format", '"{{.Tag}}"', config.repository),
       "head -n 1"
   end

   def all_versions_from_available_containers
     pipe \
       docker(:image, :ls, "--format", '"{{.Tag}}"', config.repository),
       "head -n 1"
   end

-  def list_containers
-    docker :container, :ls, "--all", *service_filter
+  def list_containers(role: :web)
+    docker :container, :ls, "--all", *service_filter(role)
   end

   def list_container_names
     [ *list_containers, "--format", "'{{ .Names }}'" ]
   end

-  def remove_container(version:)
+  def remove_container(version:, role: :web)
     pipe \
-      container_id_for(container_name: service_with_version(version)),
+      container_id_for(container_name: service_with_version_and_role(version, role)),
       xargs(docker(:container, :rm))
   end

-  def remove_containers
-    docker :container, :prune, "--force", *service_filter
+  def remove_containers(role: :web)
+    docker :container, :prune, "--force", *service_filter(role)
   end

   def list_images
     docker :image, :ls, config.repository
   end

-  def remove_images
-    docker :image, :prune, "--all", "--force", *service_filter
+  def remove_images(role: :web)
+    docker :image, :prune, "--all", "--force", *service_filter(role)
   end

   private
-    def service_with_version(version = nil)
+    def service_with_version_and_role(version = nil, role = :web)
       if version
-        "#{config.service}-#{version}"
+        "#{config.service}-#{role}-#{version}"
       else
-        config.service_with_version
+        config.service_with_version(role: role)
       end
     end

-    def container_id_for_version(version)
-      container_id_for(container_name: service_with_version(version))
+    def container_id_for_version_and_role(version, role)
+      container_id_for(container_name: service_with_version_and_role(version, role))
     end

-    def service_filter
-      [ "--filter", "label=service=#{config.service}" ]
+    def service_filter(role)
+      [ "--filter", "label=service=#{config.service}", "label=role=#{role}" ]
     end
 end
```

I also thought about having a current context for the role so we don't have to pass it around, but that would probably make testing slightly more meh:

```ruby
class Mrsk::Current < ActiveSupport::CurrentAttributes
  attribute :role
end

class Mrsk::Commander
  # or replace with plain ol' instance variables instead of using CurrentAttributes
  def current_role
    Mrsk::Current.role
  end

  def with_role(role)
    Mrsk::Current.set(role: role) { yield }
  end

  # …
end

class Mrsk::Cli::App < Mrsk::Cli::Base
  def boot
    using_version(options[:version] || most_recent_version_available) do |version|
      MRSK.config.roles.each do |role|
        MRSK.with_role(role) do
          on(role.hosts) do |host|
            # do something with MRSK.current_role
          end
        end
      end
    end
  end
end
```

A destination, on the other hand, should be simpler to implement as we don't have to pass it around; it's available through
I like the idea of current. Don't think you need CurrentAttributes, though, since there's no resetting needed. Because it is indeed a bit of a hassle to pass the role around. But maybe just try both and do the a/b compare. That'll usually tell us what's best! Thanks for looking into this 💪
This is bigger than expected. I decided to stop pursuing the "current" idea for a moment because it'd introduce some kind of global state that'd make things intransparent and hurt test code. So, another idea is to pass the role to the command object:

```ruby
class Mrsk::Commander
  # …

  # before
  # def app
  #   @app ||= Mrsk::Commands::App.new(config)
  # end

  def app(role: config.role(:web))
    Mrsk::Commands::App.new(config, role: role)
  end
end
```

… and be able to just use
@gshiva you can have Traefik rules defined in the deploy.yml file. They would look something like this:
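(The snippet itself did not survive extraction. The fragment below is a hypothetical reconstruction — the router name and hostname are made up — using Traefik's standard router-rule labels under a `labels:` key:)

```yaml
# Hypothetical example: route one destination's containers by hostname
# via a Traefik router rule attached as a container label.
labels:
  traefik.http.routers.hey-staging.rule: Host(`staging.hey.example.com`)
```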
More docs are here: https://doc.traefik.io/traefik/routing/routers/#rule
Thanks @vadimi - still not familiar with mrsk.

Has anyone tried doing it yet?

Can I try this on the new version?

Yes, as soon as a new release happens.
When deploying an application on a single host, we can't have multiple containers right now. Having this in config/deploy.yml:
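(The original config block did not survive extraction. The fragment below is a hypothetical reconstruction of a two-role config that targets the same host — the image name and command are made-up placeholders:)

```yaml
# Hypothetical reconstruction: two roles that both resolve to the same host,
# so both containers would be named identically.
service: foo
image: 37s/foo
servers:
  web:
    - 1.2.3.4
  job:
    hosts:
      - 1.2.3.4
    cmd: bundle exec sidekiq
```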
… will not work, as mrsk tries to run two containers with the same name "foo-". Deploying on a single server sounds like something the library should support. Do we want the role as part of the docker name, or a config option for it?