
Convert qubes.WindowIconUpdater to a socket service #37

Merged: 5 commits, Apr 4, 2020

Conversation

@pwmarcz (Contributor) commented Mar 14, 2020

As a result, we have a single service running for all the VMs.

Start it using /etc/xdg/autostart to ensure it's running in the
proper graphical session.

Depends on Image.get_from_stream_async, implemented in
QubesOS/qubes-linux-utils#52.
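
For illustration, the /etc/xdg/autostart entry mentioned above could be as small as the sketch below (the .desktop file name and key set are assumptions; the Exec path is the one that appears in the traceback later in this thread):

[Desktop Entry]
Type=Application
Name=Qubes icon receiver
Exec=/usr/lib/qubes/icon-receiver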

@pwmarcz (Contributor, Author) commented Mar 14, 2020

Ouch, this is not right, I think the cache needs to depend on domain ID...

@pwmarcz changed the title from "Convert qubes.WindowIconUpdater to a socket service" to "WIP: Convert qubes.WindowIconUpdater to a socket service" on Mar 14, 2020
@pwmarcz changed the title from "WIP: Convert qubes.WindowIconUpdater to a socket service" to "Convert qubes.WindowIconUpdater to a socket service" on Mar 15, 2020
@marmarek (Member)

Not only the cache, but also the window ID map. Or at least the map should now have two fields (vm, remote window id). IMO it would be cleaner to have separate object instances for each incoming connection and keep domain-specific data there.
One small issue could be dispatching local X events to the right instance, but we already retrieve the local window -> VM name mapping (search for atom_vmname), so it shouldn't be much of a problem.

@pwmarcz (Contributor, Author) commented Mar 15, 2020

What do you think about the current code? I converted the "remote ID" to a (domain_name, integer) tuple. This takes care of both the cache and the mapping.

I think creating separate object instances would make it less clean, because the X server connection is global and not related to either a specific domain or connection. You would need to filter X messages (which we receive globally) down to the right instance, and handle creating/destroying these instances.

With the current code, we store very little state per connection (just the local variables: domain and color), and we keep open the possibility of multiple connections from icon-sender, for instance per-request instead of per-session (that's not to say we should do that, but I think it shows things are more decoupled).
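
For illustration, keying both structures by a (domain, remote window id) pair looks roughly like this; the names below are placeholders, not the actual icon-receiver code:

# Illustrative sketch: a shared mapping and icon cache keyed by
# (domain_name, remote_winid), so windows from different VMs cannot collide.
wid_to_local_window = {}
icon_cache = {}

def remember_window(domain_name, remote_winid, local_window, icon):
    remote_id = (domain_name, remote_winid)
    wid_to_local_window[remote_id] = local_window
    icon_cache[remote_id] = icon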

@marmarek (Member)

Ok, makes sense.

(Two review threads on window-icon-updater/icon-receiver, now outdated and resolved.)
@pwmarcz (Contributor, Author) commented Mar 18, 2020

By the way, master is still broken for me:

Mar 18 20:20:39 dom0 qubes.WindowIconUpdater+-work[7561]: Traceback (most recent call last):
Mar 18 20:20:39 dom0 qubes.WindowIconUpdater+-work[7561]:   File "/usr/lib/qubes/icon-receiver", line 361, in <module>
Mar 18 20:20:39 dom0 qubes.WindowIconUpdater+-work[7561]:     rcvd = IconReceiver()
Mar 18 20:20:39 dom0 qubes.WindowIconUpdater+-work[7561]:   File "/usr/lib/qubes/icon-receiver", line 72, in __init__
Mar 18 20:20:39 dom0 qubes.WindowIconUpdater+-work[7561]:     self.conn = xcb.connect()
Mar 18 20:20:39 dom0 qubes.WindowIconUpdater+-work[7561]:   File "/usr/lib/python3.7/site-packages/xcffib/__init__.py", line 526, in __init__
Mar 18 20:20:39 dom0 qubes.WindowIconUpdater+-work[7561]:     self.invalid()
Mar 18 20:20:39 dom0 qubes.WindowIconUpdater+-work[7561]:   File "/usr/lib/python3.7/site-packages/xcffib/__init__.py", line 558, in invalid
Mar 18 20:20:39 dom0 qubes.WindowIconUpdater+-work[7561]:     raise ConnectionException(err)
Mar 18 20:20:39 dom0 qubes.WindowIconUpdater+-work[7561]: xcffib.ConnectionException: xcb connection errors because of socket, pipe and other stream errors.

@marmarek (Member)

Does it happen after logoff+logon?

@pwmarcz (Contributor, Author) commented Mar 19, 2020

No, right after reboot.

@marmarek (Member)

It might be because of the qrexec-policy-daemon: since it's now this service calling qrexec-client, not qrexec-daemon, the actual service is called (a) as root and (b) with limited environment variables.
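
That would be consistent with the xcb.connect() failure above: with no DISPLAY/XAUTHORITY in the environment there is no X server to connect to. A minimal illustrative guard (not part of this PR) would be:

import os
import sys

# Fail early with a clear message when spawned outside a graphical
# session, e.g. with a stripped-down environment.
if "DISPLAY" not in os.environ:
    sys.exit("icon-receiver: DISPLAY not set; not running in a graphical session")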

@marmarek (Member)

Since QubesOS/qubes-core-qrexec#41 is merged, I assume this should make use of it, right?

@marmarek (Member)

This also needs adjusted dependencies for the new things it requires:

  • qubes-core-qrexec >= 4.1.5 (socket services)
  • qubes-utils >= 4.1.4 (async image)

In both the Fedora and Debian packaging.
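
For illustration, the kind of change meant here would be along these lines (exact file names and surrounding entries depend on the packaging layout):

# RPM spec
Requires: qubes-core-qrexec >= 4.1.5
Requires: qubes-utils >= 4.1.4

# debian/control
Depends: ..., qubes-core-qrexec (>= 4.1.5), qubes-utils (>= 4.1.4)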

pwmarcz added a commit to pwmarcz/qubes-gui-agent-linux that referenced this pull request Mar 28, 2020
This makes icon-sender handle GUI restart, same as qubes-gui and
audio. However, in this case we are not running raw vchan, but
Qubes RPC, so the procedure is more complicated: we start a
qrexec-client-vm subprocess, and restart it if it breaks.

Needs icon-receiver to be a service with wait-for-session
(QubesOS/qubes-gui-daemon#37), and fix for socket services in dom0
(QubesOS/qubes-core-qrexec#42).
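
The restart logic described in that commit amounts to something like this sketch (illustrative only, not the actual gui-agent-linux code; the icon-sender path is an assumption):

import subprocess
import time

# Keep the icon-sender RPC connection to dom0 alive across GUI restarts:
# run it through qrexec-client-vm and reconnect whenever it exits.
while True:
    subprocess.run([
        "qrexec-client-vm", "dom0", "qubes.WindowIconUpdater",
        "/usr/lib/qubes/icon-sender",
    ])
    time.sleep(1)  # brief back-off before reconnecting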

A review thread on the following snippet:

while True:
    await event.wait()
    event.clear()
    self.handle_pending_events()

@marmarek (Member):
Why not simply connect handle_pending_events() as a reader function directly?

@pwmarcz (Contributor, Author):
Right. Updated.
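
The suggested approach amounts to registering the X connection's file descriptor with the event loop, roughly as below (a sketch assuming an asyncio loop and an xcffib connection, not necessarily the exact code that landed):

import asyncio

def watch_x_connection(loop, conn, handle_pending_events):
    # Call handle_pending_events() whenever the X connection has data,
    # instead of waking a separate task through an asyncio.Event.
    loop.add_reader(conn.get_file_descriptor(), handle_pending_events)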

As a result, we have a single service running for all the VMs.
Start it using /etc/xdg/autostart to ensure it's running in the
proper graphical session.

Use (domain, ID) as remote IDs to prevent conflicts between
windows.

Depends on Image.get_from_stream_async, implemented in
QubesOS/qubes-linux-utils#52.
- qubes-core-qrexec >= 4.1.5 (for socket services)
- qubes-utils >= 4.1.4 (for Image.get_from_stream_async)
@pwmarcz (Contributor, Author) commented Mar 31, 2020

Ouch, there is one more edge case: creating a new machine while the process is running :)

Newly created domains, and color changes.
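
An illustrative way to cover that edge case is to look per-domain data up lazily via the qubesadmin API instead of caching it at startup (a sketch, not necessarily what the follow-up commit does):

import qubesadmin

app = qubesadmin.Qubes()

def lookup_label_color(domain_name):
    # Query the label color when a connection arrives, so domains created
    # after the service started (and later label changes) are picked up.
    return app.domains[domain_name].label.color
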
@marmarek merged commit e242064 into QubesOS:master on Apr 4, 2020