
refactor service configuration from Salt (at provisioning) to systemd (at boot) #1004

Open
1 task done
cfm opened this issue Apr 25, 2024 · 17 comments

@cfm
Member

cfm commented Apr 25, 2024

  • I have searched for duplicates or related issues

Description

@zenmonkeykstop asked this morning whether #1001 is sufficient for all VM-level configuration, not just keys and values. I think we'll still want to use systemd units with ConditionHost conditions to enable individual services based on the hostname configured by Salt (and enforced by dom0 tests).
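For illustration, a minimal sketch of such a hostname-gated unit (the service and binary names here are hypothetical, not an actual SecureDrop unit):

```ini
# Hypothetical unit: runs only in the qube whose hostname is sd-log,
# where the hostname is set by Salt at provisioning time. If the
# hostname does not match, systemd skips the unit at boot.
[Unit]
Description=SecureDrop log collector (illustrative)
ConditionHost=sd-log

[Service]
ExecStart=/usr/sbin/securedrop-log-server

[Install]
WantedBy=multi-user.target
```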

How will this impact SecureDrop/SecureDrop Workstation users?

No user implications.

How would this affect the SecureDrop Workstation threat model?

Along with #1001, this assumes we are comfortable with runtime (boot-time) configuration of VMs' roles and services, except for secrets.

Tasks:

@deeplow
Contributor

deeplow commented May 6, 2024

Instead of changing behavior based on hostname, we could use Qubes services. The only differences are:

  • use of ConditionPathExists= instead of ConditionHost=
  • we could enable / disable services on our saltstack configuration

Advantages:

  • more "qubes-native"
  • immune to qube rename side-effects (if they ever happen)

Disadvantages:

  • Salt being Salt - Salt is good at setting state, but not at removing it. If we ever needed to disable a Qubes service, we'd have to explicitly remove it in the Salt configuration
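A rough sketch of the Qubes-services variant (the service name `securedrop-log` is an assumption): dom0 would enable the flag with something like `qvm-service <qube> securedrop-log on`, and Qubes then exposes it as a file inside the qube that the unit can test:

```ini
# Hypothetical: gate the unit on a Qubes service flag rather than the
# hostname. Qubes surfaces enabled services as files under
# /var/run/qubes-service/ inside the qube.
[Unit]
ConditionPathExists=/var/run/qubes-service/securedrop-log
```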

@deeplow
Contributor

deeplow commented May 6, 2024

This started being tackled a while ago via #840 and its cousins (freedomofpress/securedrop-builder#396 and freedomofpress/securedrop-client#1677) .

I can try to bring it back into a reviewable state after discussing with @zenmonkeykstop, but first we should converge on a strategy. Should we advance with the original proposal of forking on hostname, or via Qubes services? Whichever way we decide, we should at least be consistent and document the practice.

@cfm
Member Author

cfm commented May 7, 2024

Switching to Qubes services makes sense, @deeplow. Arguably it extends #1001's configuration injection to enabling services by analogy, which I like.

#1004 (comment):

Disadvantages:

  • Salt being Salt - Salt is good at setting state, but not at removing it. If we ever needed to disable a Qubes service, we'd have to explicitly remove it in the Salt configuration

I guess I take this for granted for as long as we're using Salt in this way at all. :-)

@legoktm
Member

legoktm commented May 7, 2024

I think I lean in the direction of using ConditionHost, because I think it better fits our goal of keeping in-VM stuff in packages and using salt for dom0 things.

A practical case: if we want to add a new "sd-log-whatever" service in a VM, we'd also have to do a corresponding workstation patch to enable the Qubes service through dom0, and gate the client release on the workstation one.

@legoktm
Member

legoktm commented May 7, 2024

Re: immune to qube rename side-effects (if they ever happen) - one option could be to set an sd-app Qubes service on the sd-app VM, an sd-log service on the sd-log VM, etc., and then all of the services in the VM can use ConditionPathExists=/var/run/qubes-service/<VM name>. That does feel a little less brittle than the hostname, and it also allows for a 1:1 mapping between systemd service and VM. I'm not sure that extra level of indirection is worth it, though.
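Sketching that indirection (names illustrative): dom0 would enable one flag named after the qube, e.g. `qvm-service sd-app sd-app on`, and every unit inside that qube would gate on the same flag:

```ini
# Hypothetical: one Qubes service per VM, named after the VM, so all
# units in sd-app share a single condition instead of per-unit flags.
[Unit]
ConditionPathExists=/var/run/qubes-service/sd-app
```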

@deeplow
Contributor

deeplow commented May 7, 2024

What if we instead call them with an init or bootstrap prefix (init-sd-app or bootstrap-sd-app)? In my mind that makes it a bit clearer what the service's goal is, because in reality this is a bit of an abuse of what services are for.

@legoktm
Member

legoktm commented May 7, 2024

To clarify, that was just my suggestion if we wanted to work around the "qube rename side-effects" problem; I still prefer ConditionHost.

Because in reality this is a bit of an abuse of what services are for.

Agreed.

@deeplow
Contributor

deeplow commented May 7, 2024

A practical case: if we want to add a new "sd-log-whatever" service in a VM, we'd also have to do a corresponding workstation patch to enable the Qubes service through dom0, and gate the client release on the workstation one.

OK. I see now what you mean by an extra level of indirection. Even though this would be at most one service per qube, having this service stated across two repos adds unnecessary release overhead.

The counter-argument is that the VM name would then be set in two different repos. In theory, a particular qube should not care what it is called from the outside, but that's a wider discussion. So I am fine either way; other ideas may come up in the meeting we're having later.

@legoktm
Member

legoktm commented May 7, 2024

But other ideas may come up in the meeting we're having later.

Marek's point about wanting to set it in multiple VMs was pretty convincing to me. In theory we could do something like:

ConditionHost=|sd-app
ConditionHost=|sd-log

But I think that is less clean than the single ConditionPathExists. So I'm down to move forward with Qubes services, and if we end up running into problems, we can always revisit and adjust course.

@deeplow
Contributor

deeplow commented May 8, 2024

To summarize some of the (new) arguments for the use of services (as opposed to hostnames):

  • code reuse - the moment we have something that runs in multiple qubes, per-host systemd services become more to maintain (or we could do as Kunal noted above)
  • debugging - sometimes when debugging issues we may need to clone a VM. While services will keep working, the hostname will change, so some things will magically break and waste debugging time.

One important detail that Marek noted when implementing these services is to hook them in before qrexec. This ensures they run before the user's session and most other things.
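A hedged sketch of that ordering (the unit and script names are assumptions, not the eventual implementation):

```ini
# Illustrative oneshot provisioning unit ordered before the qrexec
# agent, so it completes before the user session and most other
# services start.
[Unit]
Description=SecureDrop boot-time provisioning (illustrative)
Before=qubes-qrexec-agent.service

[Service]
Type=oneshot
RemainAfterExit=yes
ExecStart=/usr/libexec/sdw-provision

[Install]
WantedBy=multi-user.target
```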

@deeplow
Contributor

deeplow commented May 21, 2024

By my calculations, the biggest provisioning bottleneck is the need to provision files in app qubes.

Breakdown of what salt is provisioning (in VMs)

  sd-gpg:
    - sd-gpg-files:
      echo "export QUBES_GPG_AUTOACCEPT=28800" | tee /home/user/.profile
      copy sd-journalist.sec
      sudo -u user gpg --import /home/user/.gnupg/sd-journalist.sec

  sd-app:
    - sd-app-config:
      - copy config with specific submission key fingerprint (will be via qubesdb after https://github.com/freedomofpress/securedrop-client/pull/1883)
    - sd-mime-handling:
      ln -s /home/user/.local/share/applications/mimeapps.list /opt/sdw/mimeapps.list.{{ vm_name }}
      ln -s /home/user/.mailcap /opt/sdw/mailcap.default

  sd-whonix:
    - sd-whonix-hidserv-key (will be moved to qubesdb https://github.com/freedomofpress/securedrop-workstation/issues/1013#issuecomment-2088746620)

  'sd-fedora-39-dvm,sys-usb':
    - match: list
    - sd-usb-autoattach-add

  sd-viewer:
    - sd-mime-handling:
      ln -s /home/user/.local/share/applications/mimeapps.list /opt/sdw/mimeapps.list.{{ vm_name }}
      ln -s /home/user/.mailcap /opt/sdw/mailcap.default

  sd-devices-dvm:
      - sd-mime-handling:
        ln -s /opt/sdw/mimeapps.list.{{ vm_name }} /home/user/.local/share/applications/mimeapps.
        ln -s /home/user/.mailcap /opt/sdw/mailcap.default

  sd-proxy:
    - sd-proxy-files:
      cp sd-proxy.yaml
    - sd-mime-handling:
      ln -s /home/user/.local/share/applications/mimeapps.list /opt/sdw/mimeapps.list.default
      ln -s /home/user/.mailcap /opt/sdw/mailcap.default

Proposal

Secret provisioning is what we can't avoid at the moment, but everything else can go into templates / packages and be provisioned on boot. So for now my proposal would be to:

Make disposable + provision via systemd + qubes services:

  • sd-proxy
  • sd-devices-dvm (duh! already disposable)
  • sd-viewer (duh! already disposable)

Provision via systemd + qubes services:

  • sd-app

Impact: 4 fewer qubes that need provisioning, with minor code changes.

@legoktm
Member

legoktm commented May 21, 2024

Make disposable + provision via systemd + qubes services:

  • sd-proxy

Once #1035 lands, the proxy is fully ready to be disposable! (I'm not sure why it has mime handling enabled; nothing in that VM should be opening other files...)

@zenmonkeykstop
Contributor

Wasn't there mime handling config added in sd-proxy specifically to avoid it opening files?

@rocodes
Contributor

rocodes commented May 24, 2024

Wasn't there mime handling config added in sd-proxy specifically to avoid it opening files?

Yes: https://github.com/freedomofpress/securedrop-workstation/blob/d72c73ceb90db81bc8c93ee6c8312e4e9a3f9122/sd-proxy/mimeapps.list

sidebar: I seem to recall Marek mentioning a better way to deny this kind of functionality, rather than trying to compete with all the places that mime handling could be introduced, and rather than having to specify every filetype, which has been a source of errors for us in the past. qubes-core-agent-linux does contain some mimetype overriding: it looks like they ship both mime-override and xdg-override (look in /usr/share/qubes/ for the respective directories), so it would be cool if we could do something similar.

But in any case for the purposes of this PR, I think we could either use the systemd approach that we're planning for other VMs, or just create the symbolic link to the "default" mime handling (which I think is just used for the proxy?) in the deb postinst and then override it in the other vms.

@deeplow
Contributor

deeplow commented May 24, 2024

I have moved the mime-handling conversation to its own separate issue to keep this one focused on how to approach systemd provisioning in general. I hope that's OK. (I should have created that issue anyway, as I did for the logging one.)

@deeplow
Contributor

deeplow commented May 24, 2024

So for now my proposal would be to:

Make disposable + provision via systemd + qubes services:

  • sd-proxy
  • sd-devices-dvm
  • sd-viewer

Provision via systemd + qubes services:

  • sd-app

Duh, I was forgetting that sd-devices and sd-viewer are already disposable. So only sd-proxy can become disposable.

@cfm
Member Author

cfm commented May 29, 2024

#1051, specifically ebcabf6, demonstrates using a Qubes service as the trigger for a ConditionPathExists-based systemd service that applies further configuration from QubesDB.
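The pattern described there might look roughly like this (the service, key, and path names are assumptions, not the actual #1051 code):

```ini
# Illustrative: a Qubes service flag triggers a oneshot unit that
# pulls further configuration out of QubesDB at boot, before qrexec.
[Unit]
Description=Apply SecureDrop config from QubesDB (illustrative)
ConditionPathExists=/var/run/qubes-service/securedrop-config
Before=qubes-qrexec-agent.service

[Service]
Type=oneshot
RemainAfterExit=yes
ExecStart=/bin/sh -c 'qubesdb-read /vm-config/submission-key-fpr > /etc/sdw/submission-key-fpr'

[Install]
WantedBy=multi-user.target
```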
