
Deploying multiple containers on the same host #44

Closed
tbuehlmann opened this issue Feb 9, 2023 · 27 comments

@tbuehlmann
Contributor

When deploying an application on a single host, we can't have multiple containers right now. Having this in config/deploy.yml:

service: foo

servers:
  web:
    hosts:
      - 1.2.3.4
    cmd: bundle exec puma
  job:
    hosts:
      - 1.2.3.4
    cmd: bundle exec sidekiq

… will not work, as mrsk tries to run two containers with the same name ("foo-"). Deploying multiple roles to a single server sounds like something the library should support. Do we want the role as part of the Docker container name, or a config option for it?

@dhh
Member

dhh commented Feb 9, 2023

I like the idea of having the role be part of the container name 👍. Need to ensure "web" is default when there's nothing else.
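The naming rule being discussed could be sketched like this (method name and signature are hypothetical, not from the codebase):

```ruby
# Hypothetical sketch: include the role in the container name so two roles
# on the same host no longer collide. "web" is the default role.
def container_name(service, version, role: "web")
  [service, role, version].compact.join("-")
end

container_name("foo", "abc123")               # => "foo-web-abc123"
container_name("foo", "abc123", role: "job")  # => "foo-job-abc123"
```

With this, the web and job containers from the deploy.yml above would get distinct names on the same host.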

@tbuehlmann
Contributor Author

Two more things:

  1. Is having the same service/container a thing that should be supported? Like running Sidekiq twice on the same host for concurrency reasons. They'd need different names, too.

  2. Should we also respect destinations? I think of having a staging and demo environment on the same host, so they would need different names, too.

@dhh
Member

dhh commented Feb 10, 2023

You shouldn't have multiple containers of the same role running on the same server. Don't want to support that. Concurrency needs to come from each of those containers internally (WEB_CONCURRENCY for puma, for example).

Interesting point re: destinations. I do like the ability to start everything on a single box, yes. So respecting destinations sounds good. We can just add that to the name for the destinations, and nothing for the baseline 👍
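Per-container concurrency as described above could be expressed through the environment in deploy.yml, for example (a hypothetical fragment; the env value and per-role env support are assumptions):

```yaml
servers:
  web:
    hosts:
      - 1.2.3.4
    cmd: bundle exec puma
    env:
      WEB_CONCURRENCY: 4  # Puma forks four workers inside the one container
```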

@kjellberg
Contributor

kjellberg commented Feb 10, 2023

Just a thought here. Shouldn't web and job be considered two different services? Something like this would make sense:

app: foo
image: 37s/hey
services:
  web: # foo-web service
    servers:
      - 1.2.3.4
  job: # foo-job service
    servers:
      - 1.2.3.4
    cmd: bundle exec sidekiq

and for a single service (no more roles):

# foo-web service
app: foo
image: 37s/hey
servers:
  - 1.2.3.4

servers replaces hosts and will be an array of ips/hostnames in both cases.

@dhh
Member

dhh commented Feb 10, 2023

They're the same service, because they're running the same image. If they run a different image, they're an accessory.

@brunoprietog
Contributor

So we can't run web and job on the same host?

@tbuehlmann
Contributor Author

Great, so when in use, the role (web, job, …) and destination (west, staging, …) would be part of the name. Should we also support manually specifying a name (name: custom-name) as an override of the default, for cases we otherwise don't want to support?

@dhh
Member

dhh commented Feb 13, 2023

Don't want to add configuration flags until we have a proven case that details how we can't do without. Every configuration point is cognitive overhead. It should be there because it would suck if it wasn't, not just because we can.

@ankitsinghaniyaz

I want to host my staging server on my local raspberry pi and would like to run sidekiq and web on the same host.

@dhh
Member

dhh commented Feb 25, 2023

Yeah, I'd say the new naming becomes: service-role[-destination]. So hey-web, hey-web-staging, hey-jobs, hey-jobs-staging.
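The service-role[-destination] scheme above could be sketched as (method name hypothetical):

```ruby
# Hypothetical sketch of the service-role[-destination] scheme: the
# destination suffix is only appended when a destination is in use.
def container_prefix(service, role, destination = nil)
  [service, role, destination].compact.join("-")
end

container_prefix("hey", "web")              # => "hey-web"
container_prefix("hey", "web", "staging")   # => "hey-web-staging"
container_prefix("hey", "jobs")             # => "hey-jobs"
container_prefix("hey", "jobs", "staging")  # => "hey-jobs-staging"
```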

@gshiva

gshiva commented Mar 1, 2023

I like the naming service-role[-destination]. What about the ports, do we have to manually assign them? web-staging:8080 and so on?

@tbuehlmann
Contributor Author

@gshiva: Traefik takes care of this. As long as you're using EXPOSE in your Dockerfile and use that port for your application, it'll just work.

@ankitsinghaniyaz

Also, when you say the naming becomes service-role[-destination], I'm not sure how to do that, or whether it's already possible?

@tbuehlmann
Contributor Author

@ankitsinghaniyaz: It's not implemented right now.

@andersonkrs

Supporting running everything on the same box would be great. To give an example, I currently run a pet project (web, jobs, Postgres, and Redis) on a very cheap VM using Dokku. For 5 USD a month I can keep this app, and others if I need to, running on that same box.

I'd love to do that with mrsk since the declarative nature of it is fantastic 👌

@dhh
Member

dhh commented Mar 1, 2023

Yes, happy to see it. I've been thinking about how to do it. Will require some rejigging. But if anyone wants to take it on, have a swing. Just use a scalpel, not a butcher's knife 😄

@gshiva

gshiva commented Mar 2, 2023

@gshiva: Traefik takes care of this. As long as you're using EXPOSE in your Dockerfile and use that port for your application, it'll just work.

I would like to use the same Docker image (which implies the same Dockerfile) deployed to different environments. The flow would be similar to `service.v3 -> "dev env:8000"`, `service.v2 -> "staging env:8080"`, `service.v3 -> "demo env:9080"`. Ideally the port would be dynamically allocated and returned to the user requesting the deploy.

@tbuehlmann
Contributor Author

@gshiva: You could have different destinations with different Traefik Host rules for that, no need to juggle with ports here.
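For instance, each destination's deploy file could carry its own Host rule so the same image answers on a different hostname per environment (router name and hostname below are hypothetical):

```yaml
# config/deploy.staging.yml (hypothetical fragment)
labels:
  traefik.http.routers.foo-staging.rule: Host(`staging.example.com`)
```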

@tbuehlmann
Contributor Author

Is this the general direction we want to go with roles?

Show PoC diff
diff --git lib/mrsk/commands/app.rb lib/mrsk/commands/app.rb
index 4b1de91..8355e13 100644
--- lib/mrsk/commands/app.rb
+++ lib/mrsk/commands/app.rb
@@ -1,144 +1,144 @@
 class Mrsk::Commands::App < Mrsk::Commands::Base
   def run(role: :web)
     role = config.role(role)
 
     docker :run,
       "--detach",
       "--restart unless-stopped",
       "--log-opt", "max-size=#{MAX_LOG_SIZE}",
-      "--name", service_with_version,
+      "--name", service_with_version_and_role(nil, role.name),
       *role.env_args,
       *config.volume_args,
       *role.label_args,
       config.absolute_image,
       role.cmd
   end
 
-  def start
-    docker :start, service_with_version
+  def start(role: :web)
+    docker :start, service_with_version_and_role(nil, role)
   end
 
-  def stop(version: nil)
+  def stop(version: nil, role: :web)
     pipe \
-      version ? container_id_for_version(version) : current_container_id,
+      version ? container_id_for_version_and_role(version, role) : current_container_id(role),
       xargs(docker(:stop))
   end
 
   def info
     docker :ps, *service_filter
   end
 
 
   def logs(since: nil, lines: nil, grep: nil)
     pipe \
       current_container_id,
       "xargs docker logs#{" --since #{since}" if since}#{" --tail #{lines}" if lines} 2>&1",
       ("grep '#{grep}'" if grep)
   end
 
   def follow_logs(host:, grep: nil)
     run_over_ssh \
       pipe(
         current_container_id,
         "xargs docker logs --timestamps --tail 10 --follow 2>&1",
         (%(grep "#{grep}") if grep)
       ),
       host: host
   end
 
 
   def execute_in_existing_container(*command, interactive: false)
     docker :exec,
       ("-it" if interactive),
       config.service_with_version,
       *command
   end
 
   def execute_in_new_container(*command, interactive: false)
     docker :run,
       ("-it" if interactive),
       "--rm",
       *config.env_args,
       *config.volume_args,
       config.absolute_image,
       *command
   end
 
   def execute_in_existing_container_over_ssh(*command, host:)
     run_over_ssh execute_in_existing_container(*command, interactive: true), host: host
   end
 
   def execute_in_new_container_over_ssh(*command, host:)
     run_over_ssh execute_in_new_container(*command, interactive: true), host: host
   end
 
 
-  def current_container_id
-    docker :ps, "--quiet", *service_filter
+  def current_container_id(role)
+    docker :ps, "--quiet", *service_filter(role)
   end
 
   def current_running_version
     # FIXME: Find more graceful way to extract the version from "app-version" than using sed and tail!
     pipe \
       docker(:ps, "--filter", "label=service=#{config.service}", "--format", '"{{.Names}}"'),
       %(sed 's/-/\\n/g'),
       "tail -n 1"
   end
 
   def most_recent_version_from_available_images
     pipe \
       docker(:image, :ls, "--format", '"{{.Tag}}"', config.repository),
       "head -n 1"
   end
 
   def all_versions_from_available_containers
     pipe \
       docker(:image, :ls, "--format", '"{{.Tag}}"', config.repository),
       "head -n 1"
   end
 
 
-  def list_containers
-    docker :container, :ls, "--all", *service_filter
+  def list_containers(role: :web)
+    docker :container, :ls, "--all", *service_filter(role)
   end
 
   def list_container_names
     [ *list_containers, "--format", "'{{ .Names }}'" ]
   end
 
-  def remove_container(version:)
+  def remove_container(version:, role: :web)
     pipe \
-      container_id_for(container_name: service_with_version(version)),
+      container_id_for(container_name: service_with_version_and_role(version, role)),
       xargs(docker(:container, :rm))
   end
 
-  def remove_containers
-    docker :container, :prune, "--force", *service_filter
+  def remove_containers(role: :web)
+    docker :container, :prune, "--force", *service_filter(role)
   end
 
   def list_images
     docker :image, :ls, config.repository
   end
 
-  def remove_images
-    docker :image, :prune, "--all", "--force", *service_filter
+  def remove_images(role: :web)
+    docker :image, :prune, "--all", "--force", *service_filter(role)
   end
 
 
   private
-    def service_with_version(version = nil)
+    def service_with_version_and_role(version = nil, role = :web)
       if version
-        "#{config.service}-#{version}"
+        "#{config.service}-#{role}-#{version}"
       else
-        config.service_with_version
+        config.service_with_version(role: role)
       end
     end
 
-    def container_id_for_version(version)
-      container_id_for(container_name: service_with_version(version))    
+    def container_id_for_version_and_role(version, role)
+      container_id_for(container_name: service_with_version_and_role(version, role))
     end
 
-    def service_filter
-      [ "--filter", "label=service=#{config.service}" ]
+    def service_filter(role)
+      [ "--filter", "label=service=#{config.service}", "--filter", "label=role=#{role}" ]
     end
 end

I also thought about having a current context for the role so we don't have to pass it around, but that would probably make testing slightly more meh:

class Mrsk::Current < ActiveSupport::CurrentAttributes
  attribute :role
end

class Mrsk::Commander
  # or replace with plain ol' instance variables instead of using CurrentAttributes
  def current_role
    Mrsk::Current.role
  end

  def with_role(role)
    Mrsk::Current.set(role: role) { yield }
  end

  # …
end

class Mrsk::Cli::App < Mrsk::Cli::Base
  def boot
    using_version(options[:version] || most_recent_version_available) do |version|
      MRSK.config.roles.each do |role|
        MRSK.with_role(role) do
          on(role.hosts) do |host|
            # do something with MRSK.current_role
          end
        end
      end
    end
  end
end

A destination, on the other hand, should be simpler to implement as we don't have to pass it around, it's available through MRSK.destination.

@dhh
Member

dhh commented Mar 2, 2023

I like the idea of current. Don't think you need CurrentAttributes, though, since there's no resetting needed. Because it is indeed a bit of a hassle to pass the role around. But maybe just try both and do the a/b compare. That'll usually tell us what's best! Thanks for looking into this 💪
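The instance-variable variant (no CurrentAttributes) could look like this minimal sketch, assuming a Commander-like object (class and method names hypothetical):

```ruby
# Hypothetical sketch: hold the current role in a plain instance variable
# instead of CurrentAttributes. Restoring the previous value in ensure
# means nested or sequential with_role calls stay isolated.
class Commander
  attr_reader :current_role

  def with_role(role)
    previous, @current_role = @current_role, role
    yield
  ensure
    @current_role = previous
  end
end

commander = Commander.new
commander.with_role(:job) do
  commander.current_role  # => :job
end
commander.current_role    # => nil, restored after the block
```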

@tbuehlmann
Contributor Author

This is bigger than expected. I decided to stop pursuing the "current" idea for the moment because it'd introduce a kind of global state that would make things opaque and hurt test code.

So, another idea is to pass the role to Mrsk::Commander#app like this:

class Mrsk::Commander
  # …

  # before
  # def app
  #   @app ||= Mrsk::Commands::App.new(config)
  # end
  def app(role: config.role(:web))
    Mrsk::Commands::App.new(config, role: role)
  end
end

… and then just use role inside Mrsk::Commands::App. Seems easy enough, but there are quite a few places where roles either aren't fully treated as a first-class concept yet or where a single role isn't required. Maybe passing the role around where required isn't so bad after all.

@gshiva

gshiva commented Mar 2, 2023

@gshiva: You could have different destinations with different Traefik Host rules for that, no need to juggle with ports here.

The Host rules have to be managed separately? Is there any doc on what needs to be done to push a service:role image and make it appear as https://.../role?

@vadimi

vadimi commented Mar 3, 2023

@gshiva you can have traefik rules defined in deploy.yml file. They would look something like this:

labels:
  traefik.http.routers.my-service.rule: PathPrefix(`/my-prefix`)
  traefik.http.routers.my-service.middlewares: my-service-stripprefix
  traefik.http.middlewares.my-service-stripprefix.stripprefix.prefixes: /my-prefix

more docs are here - https://doc.traefik.io/traefik/routing/routers/#rule

@gshiva

gshiva commented Mar 3, 2023

@gshiva you can have traefik rules defined in deploy.yml file. They would look something like this:

labels:
  traefik.http.routers.my-service.rule: PathPrefix(`/my-prefix`)
  traefik.http.routers.my-service.middlewares: my-service-stripprefix
  traefik.http.middlewares.my-service-stripprefix.stripprefix.prefixes: /my-prefix

more docs are here - https://doc.traefik.io/traefik/routing/routers/#rule

Thanks @vadimi. Still not familiar with mrsk; deploy.yml is something you specify in mrsk manifests, I would assume. Is that correct?

@SyedMSawaid

Has anyone tried doing it yet?

@dhh dhh closed this as completed Mar 24, 2023
@ankitsinghaniyaz

Can I try this on the new version?

@intrip
Member

intrip commented Mar 24, 2023

Yes, as soon as a new release happens.
