diff --git a/developer/building/development-workflow.md b/developer/building/development-workflow.md index f125533f3..5f1581bd0 100644 --- a/developer/building/development-workflow.md +++ b/developer/building/development-workflow.md @@ -46,7 +46,7 @@ Don't forget to include the public PGP key you use to sign your tags. #### Prepare fresh version of kernel sources, with Qubes-specific patches applied -In qubes-builder/artifacts/sources/linux-kernel: +In `qubes-builder/artifacts/sources/linux-kernel`: ~~~ make prep @@ -65,7 +65,7 @@ drwxr-xr-x 6 user user 4096 Nov 21 20:48 kernel-3.4.18/linux-obj #### Go to the kernel tree and update the version -In qubes-builder/artifacts/sources/linux-kernel: +In `qubes-builder/artifacts/sources/linux-kernel`: ~~~ cd kernel-3.4.18/linux-3.4.18 @@ -73,14 +73,14 @@ cd kernel-3.4.18/linux-3.4.18 #### Changing the config -In kernel-3.4.18/linux-3.4.18: +In `kernel-3.4.18/linux-3.4.18`: ~~~ cp ../../config .config make oldconfig ~~~ -Now change the configuration. For example, in kernel-3.4.18/linux-3.4.18: +Now change the configuration. For example, in `kernel-3.4.18/linux-3.4.18`: ~~~ make menuconfig @@ -139,7 +139,7 @@ RPMs will appear in 1. `./qb package diff` - show uncommitted changes 2. ` ./qb repository check-release-status-for-component` and -`./qb repository check-release-status-for-template`- show version of each +`./qb repository check-release-status-for-template` - show version of each component/template (based on git tags) 3. `./qb package sign` - sign built packages 4. `./qb package publish` and `./qb package upload` - publish signed packages diff --git a/developer/building/qubes-builder-v2.md b/developer/building/qubes-builder-v2.md index 1210910b0..2ad60e9fb 100644 --- a/developer/building/qubes-builder-v2.md +++ b/developer/building/qubes-builder-v2.md @@ -39,18 +39,20 @@ if you don't know which executor to use, use docker. 2. 
Installing dependencies - If you want to use an app qube for developing, install dependencies in the template. + - If you want to use an app qube for developing, install dependencies in the template. If you are using a standalone, install them in the qube itself. Dependencies are specified in `dependencies-*.txt` files in the main builder directory, and you can install them easily in the following ways: 1. for Fedora, use: + ```shell $ sudo dnf install $(cat dependencies-fedora.txt) $ test -f /usr/share/qubes/marker-vm && sudo dnf install qubes-gpg-split ``` 2. for Debian (note: some Debian packages require Debian version 13 or later), use: + ```shell $ sudo apt install $(cat dependencies-debian.txt) $ test -f /usr/share/qubes/marker-vm && sudo apt install qubes-gpg-split @@ -60,6 +62,7 @@ if you don't know which executor to use, use docker. (re)start the development qube. 3. Clone the qubes-builder v2 repository into a location of your choice: + ```shell git clone https://github.com/QubesOS/qubes-builderv2 cd qubes-builderv2/ @@ -68,12 +71,14 @@ if you don't know which executor to use, use docker. 4. If you haven't previously used docker in the current qube, you need to set up some permissions. In particular, the user has to be added to the `docker` group: + ```shell $ sudo usermod -aG docker user ``` Next, **restart the qube**. 5. Finally, you need to generate a docker image: + ```shell $ tools/generate-container-image.sh docker ``` @@ -94,7 +99,7 @@ You can use one of the sample files from the `example-configs/` directory; for a more readable `builder.yml`, you can also include one of the files from that directory in your `builder.yml`.
An example `builder.yml` is: -```yaml +``` # include configuration relevant for the current release include: - example-configs/qubes-os-r4.2.yml @@ -120,7 +125,6 @@ executor: type: docker options: image: "qubes-builder-fedora:latest" - ``` diff --git a/developer/code/code-signing.md b/developer/code/code-signing.md index c94b9e1bb..8074840d8 100644 --- a/developer/code/code-signing.md +++ b/developer/code/code-signing.md @@ -144,9 +144,11 @@ Although GitHub adds a little green `Verified` button next to the commit, the [s 1. Is the commit signed? If the commit is not signed, you can see the message + > policy/qubesos/code-signing — No signature found 2. If the commit is signed, the key is downloaded from a GPG key server. If you can see the following error message, please check if you have uploaded the key to a key server. + > policy/qubesos/code-signing — Unable to verify (no valid key found) ### No Signature Found diff --git a/developer/code/source-code.md b/developer/code/source-code.md index 85e5cf13a..83173e6c2 100644 --- a/developer/code/source-code.md +++ b/developer/code/source-code.md @@ -60,6 +60,7 @@ method you choose, you must [sign your code](/doc/code-signing/) before it can b * **Preferred**: Use GitHub's [fork & pull requests](https://guides.github.com/activities/forking/). + Opening a pull request on GitHub greatly eases the code review and tracking process. In addition, especially for bigger changes, it's a good idea to send a message to the [qubes-devel mailing list](/support/#qubes-devel) in order to notify people who diff --git a/developer/debugging/automated-tests.md b/developer/debugging/automated-tests.md index 1f72ede8f..51870d97a 100644 --- a/developer/debugging/automated-tests.md +++ b/developer/debugging/automated-tests.md @@ -267,11 +267,13 @@ It feeds off of the openQA test data to make graph plots. 
Here is an example: ![openqa-investigator-splitgpg-example.png](/attachment/doc/openqa-investigator-splitgpg-example.png) Some outputs: + - plot by tests - plot by errors - markdown Some filters: + - filter by error - filter by test name diff --git a/developer/general/developing-gui-applications.md b/developer/general/developing-gui-applications.md index 1ff0387c4..19e60a1c2 100644 --- a/developer/general/developing-gui-applications.md +++ b/developer/general/developing-gui-applications.md @@ -59,7 +59,7 @@ If error should be thrown, you need to provide the error code and name, for exam b'2\x00QubesNoSuchPropertyError\x00\x00No such property\x00' ``` -For details of particular calls, you can use [Extending the mock Qubes object]. +For details of particular calls, you can use [Extending the mock Qubes object](#extending-the-mock-qubes-object). ## Available mocks diff --git a/developer/general/gsoc.md b/developer/general/gsoc.md index a2990cd96..f1044aed6 100644 --- a/developer/general/gsoc.md +++ b/developer/general/gsoc.md @@ -174,45 +174,6 @@ If applicable, links to more information or discussions **Mentor**: [Frédéric Pierret](/team/) - ### Qubes Live USB @@ -252,26 +213,6 @@ details: [#1552](https://github.com/QubesOS/qubes-issues/issues/1552), **Mentor**: [Frédéric Pierret](/team/) - - ### LogVM(s) **Project**: LogVM(s) @@ -461,44 +402,6 @@ Some related discussion: **Mentor**: [Marek Marczykowski-Górecki](/team/) - - ### Android development in Qubes **Project**: Research running Android in Qubes VM (probably HVM) and connecting it to Android Studio @@ -538,12 +441,14 @@ Since the Admin API is continuously growing and changing, continuous security as A [Fuzzer](https://en.wikipedia.org/wiki/Fuzzing) would help to automate part of these assessments. 
**Expected results**: + - fully automated & extensible Fuzzer for parts of the Admin API - user & developer documentation **Difficulty**: medium **Prerequisites**: + - basic Python understanding - some knowledge about fuzzing & existing fuzzing frameworks (e.g. [oss-fuzz](https://github.com/google/oss-fuzz/tree/master/projects/qubes-os)) - a hacker's curiosity @@ -560,6 +465,7 @@ A [Fuzzer](https://en.wikipedia.org/wiki/Fuzzing) would help to automate part of **Brief explanation**: Since recently, Xen supports "unified EFI boot" which allows to sign not only Xen binary itself, but also dom0 kernel and their parameters. While the base technology is there, enabling it is a painful and complex process. The goal of this project is to integrate configuration of this feature into Qubes, automating as much as possible. See discussion in [issue #4371](https://github.com/QubesOS/qubes-issues/issues/4371) **Expected results**: + - a tool to prepare relevant boot files for unified Xen EFI boot - this includes collecting Xen, dom0 kernel, initramfs, config file, and possibly few more (ucode update?); the tool should then sign the file with user provided key (preferably propose to generate it too) - integrate it with updates mechanism, so new Xen or dom0 kernel will be picked up automatically - include a fallback configuration that can be used for troubleshooting (main unified Xen EFI intentionally does not allow to manipulate parameters at boot time) @@ -567,6 +473,7 @@ A [Fuzzer](https://en.wikipedia.org/wiki/Fuzzing) would help to automate part of **Difficulty**: hard **Knowledge prerequisite**: + - basic understanding of Secure Boot - Bash and Python scripting @@ -586,6 +493,7 @@ A [Fuzzer](https://en.wikipedia.org/wiki/Fuzzing) would help to automate part of **Difficulty**: medium **Knowledge prerequisite**: + - Python scripting - Basic knowledge of Linux system services management (systemd, syslog etc) diff --git a/developer/general/gsod.md b/developer/general/gsod.md 
index 0ca6b7afa..ad714e50d 100644 --- a/developer/general/gsod.md +++ b/developer/general/gsod.md @@ -58,9 +58,9 @@ Qubes OS regularly participates in Google Summer of Code and Google Season of Do ## Past Projects -You can view the project we had in 2019 in the [2019 GSoD archive](https://developers.google.com/season-of-docs/docs/2019/participants/project-qubes) and the [2019 writer's report](https://refre.ch/report-qubesos/). +You can view the project we had in 2019 in the [2019 GSoD archive](https://developers.google.com/season-of-docs/docs/2019/participants/project-qubes) and the [2019 writer's report](https://web.archive.org/web/20200928002746/https://refre.ch/report-qubesos/). -You can view the project we had in 2020 in the [2020 GSoD archive](https://developers.google.com/season-of-docs/docs/2020/participants/project-qubesos-c1e0) and the [2020 writer's report](https://gist.github.com/PROTechThor/bfe9b8b28295d88c438b6f6c754ae733). +You can view the project we had in 2020 in the [2020 GSoD archive](https://developers.google.com/season-of-docs/docs/2020/participants/project-qubesos-c1e0) and the [2020 writer's report](https://web.archive.org/web/20210723170547/https://gist.github.com/PROTechThor/bfe9b8b28295d88c438b6f6c754ae733). You can view the results of the project we had in 2023 [here](https://www.youtube.com/playlist?list=PLjwSYc73nX6aHcpqub-6lzJbL0vhLleTB). diff --git a/developer/releases/2_0/release-notes.md b/developer/releases/2_0/release-notes.md index e0a5751dd..a428a121c 100644 --- a/developer/releases/2_0/release-notes.md +++ b/developer/releases/2_0/release-notes.md @@ -68,7 +68,7 @@ Note: if the user has custom Template VMs (i.e. other than the default template, #### From Qubes R1 to R2 beta1 -If you're already running Qubes Release 1, you don't need to reinstall, it's just enough to update the packages in your Dom0 and the template VM(s). This procedure is described [here?](/doc/upgrade-to-r2/). 
+If you're already running Qubes Release 1, you don't need to reinstall, it's just enough to update the packages in your Dom0 and the template VM(s). This procedure is described [here](/doc/upgrade-to-r2/). #### From Qubes R1 or R2 Beta 1 to R2 beta2 diff --git a/developer/releases/4_0/release-notes.md b/developer/releases/4_0/release-notes.md index 56b99d647..1bed13026 100644 --- a/developer/releases/4_0/release-notes.md +++ b/developer/releases/4_0/release-notes.md @@ -9,7 +9,7 @@ title: Qubes R4.0 release notes New features since 3.2 ---------------------- -* Core management scripts rewrite with better structure and extensibility, [API documentation](https://dev.qubes-os.org/projects/core-admin/en/latest/index.html) +* Core management scripts rewrite with better structure and extensibility, [current API documentation](https://dev.qubes-os.org/projects/core-admin/en/latest/) and the documentation API index as a [webarchive](https://web.archive.org/web/20230128102821/https://dev.qubes-os.org/projects/qubes-core-admin/en/latest/) * [Admin API](/news/2017/06/27/qubes-admin-api/) allowing strictly controlled managing from non-dom0 * All `qvm-*` command-line tools rewritten, some options have changed * Renaming VM directly is prohibited, there is GUI to clone under new name and remove old VM diff --git a/developer/releases/4_2/release-notes.md b/developer/releases/4_2/release-notes.md index 6377844bf..044a022f9 100644 --- a/developer/releases/4_2/release-notes.md +++ b/developer/releases/4_2/release-notes.md @@ -57,17 +57,17 @@ We strongly recommend [updating Qubes OS](/doc/how-to-update/) immediately after - Qubes 4.2.2 includes a fix for [#8332: File-copy qrexec service is overly restrictive](https://github.com/QubesOS/qubes-issues/issues/8332). As explained in the issue comments, we introduced a change in Qubes 4.2.0 that caused inter-qube file-copy/move actions to reject filenames containing, e.g., non-Latin characters and certain symbols. 
The rationale for this change was to mitigate the security risks associated with unusual unicode characters and invalid encoding in filenames, which some software might handle in an unsafe manner and which might cause confusion for users. Such a change represents a trade-off between security and usability. - After the change went live, we received several user reports indicating more severe usability problems than we had anticipated. Moreover, these problems were prompting users to resort to dangerous workarounds (such as packing files into an archive format prior to copying) that carry far more risk than the original risk posed by the unrestricted filenames. In addition, we realized that this was a backward-incompatible change that should not have been introduced in a minor release in the first place. + - After the change went live, we received several user reports indicating more severe usability problems than we had anticipated. Moreover, these problems were prompting users to resort to dangerous workarounds (such as packing files into an archive format prior to copying) that carry far more risk than the original risk posed by the unrestricted filenames. In addition, we realized that this was a backward-incompatible change that should not have been introduced in a minor release in the first place. - Therefore, we have decided, for the time being, to restore the original (pre-4.2) behavior by introducing a new `allow-all-names` argument for the `qubes.Filecopy` service. By default, `qvm-copy` and similar tools will use this less restrictive service (`qubes.Filecopy +allow-all-names`) whenever they detect any files that would be have been blocked by the more restrictive service (`qubes.Filecopy +`). If no such files are detected, they will use the more restrictive service. + - Therefore, we have decided, for the time being, to restore the original (pre-4.2) behavior by introducing a new `allow-all-names` argument for the `qubes.Filecopy` service. 
By default, `qvm-copy` and similar tools will use this less restrictive service (`qubes.Filecopy +allow-all-names`) whenever they detect any files that would have been blocked by the more restrictive service (`qubes.Filecopy +`). If no such files are detected, they will use the more restrictive service. - Users who wish to opt for the more restrictive 4.2.0 and 4.2.1 behavior can do so by modifying their RPC policy rules. To switch a single rule to the more restrictive behavior, change `*` in the argument column to `+` (i.e., change "any argument" to "only empty"). To use the more restrictive behavior globally, add the following "deny" rule before all other relevant rules: + - Users who wish to opt for the more restrictive 4.2.0 and 4.2.1 behavior can do so by modifying their RPC policy rules. To switch a single rule to the more restrictive behavior, change `*` in the argument column to `+` (i.e., change "any argument" to "only empty"). To use the more restrictive behavior globally, add the following "deny" rule before all other relevant rules: - ``` - qubes.Filecopy +allow-all-names @anyvm @anyvm deny - ``` + ``` + qubes.Filecopy +allow-all-names @anyvm @anyvm deny + ``` - For more information, see [RPC policies](/doc/rpc-policy/) and [Qube configuration interface](/doc/vm-interface/#qubes-rpc). + - For more information, see [RPC policies](/doc/rpc-policy/) and [Qube configuration interface](/doc/vm-interface/#qubes-rpc). - Beginning with Qubes 4.2, the recommended way to update Qubes OS via the command line has changed. Salt is no longer the preferred method, though it is still supported. Instead, `qubes-dom0-update` is recommended for updating dom0, and `qubes-vm-update` is recommended for updating templates and standalones. (The recommended way to update via the GUI has not changed. The Qubes Update tool is still the preferred method.) For more information, see [How to update](/doc/how-to-update/).
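The fallback behavior described above (use the strict service unless some filename would be rejected by it) can be sketched as a small model. This is purely illustrative and not the actual `qvm-copy` implementation; in particular, the "strict" character test below is an assumption, not the real filter:

```python
# Illustrative model of the service-selection logic described above.
# NOT the real qvm-copy code; the strict character test is an assumption.

def is_strictly_safe(name: str) -> bool:
    """Accept only non-empty, printable-ASCII filenames (assumed strict policy)."""
    return name != "" and all(0x20 <= ord(c) < 0x7F for c in name)

def pick_service(filenames) -> str:
    """Use the strict service unless some file would be rejected by it."""
    if all(is_strictly_safe(n) for n in filenames):
        return "qubes.Filecopy +"                 # strict service suffices
    return "qubes.Filecopy +allow-all-names"      # permissive fallback

print(pick_service(["notes.txt", "report.pdf"]))  # strict service
print(pick_service(["zażółć.txt"]))               # non-Latin name triggers fallback
```

The point of this structure is that the permissive service is only invoked when needed, so a policy that denies `+allow-all-names` still leaves ordinary copies working.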
diff --git a/developer/releases/version-scheme.md b/developer/releases/version-scheme.md index a96f9ca64..f241a70a7 100644 --- a/developer/releases/version-scheme.md +++ b/developer/releases/version-scheme.md @@ -71,7 +71,7 @@ When enough progress has been made, we announce the first stable release, e.g. `3.0.0`. This is not only a version but an actual release. It is considered stable, and we commit to supporting it according to our [support schedule](/doc/supported-releases/). Core components are branched at this -moment, and bug fixes are backported from the master branch. Please see [help, +moment, and bug fixes are backported from the main branch. Please see [help, support, mailing lists, and forum](/support/) for places to ask questions about stable releases. No major features or interface incompatibilities are to be included in this release. We release bug fixes as patch releases (`3.0.1`, @@ -173,7 +173,7 @@ We mark each component version in the repository by tag containing At the release of some release we create branches named like `release2`. Only bug fixes and compatible improvements are backported to these branches. These -branches should compile. All new development is done in `master` branch. This +branches should compile. All new development is done in `main` branch. This branch is totally unsupported and may not even compile depending on maintainer of repository. diff --git a/developer/services/dom0-secure-updates.md b/developer/services/dom0-secure-updates.md index ad922a888..2a5b7d521 100644 --- a/developer/services/dom0-secure-updates.md +++ b/developer/services/dom0-secure-updates.md @@ -33,7 +33,7 @@ Keeping Dom0 not connected to any network makes it hard, however, to provide upd The update process is initiated by [qubes-dom0-update script](https://github.com/QubesOS/qubes-core-admin-linux/blob/release2/dom0-updates/qubes-dom0-update), running in Dom0. 
-Updates (`*.rpm` files) are checked and downloaded by UpdateVM, which by default is the same as the firewall VM, but can be configured to be any other, network-connected VM. This is done by [qubes-download-dom0-updates.sh script](https://github.com/QubesOS/qubes-core-agent-linux/blob/release2/misc/qubes-download-dom0-updates.sh) (this script is executed using qrexec by the previously mentioned qubes-dom0-update). Note that we assume that this script might get compromised and fetch maliciously compromised downloads -- this is not a problem as Dom0 verifies digital signatures on updates later. The downloaded rpm files are placed in a ~~~/var/lib/qubes/dom0-updates~~~ directory on UpdateVM filesystem (again, they might get compromised while being kept there, still this isn't a problem). This directory is passed to yum using the ~~~--installroot=~~~ option. +Updates (`*.rpm` files) are checked and downloaded by UpdateVM, which by default is the same as the firewall VM, but can be configured to be any other, network-connected VM. This is done by [qubes-download-dom0-updates.sh script](https://github.com/QubesOS/qubes-core-agent-linux/blob/release2/misc/qubes-download-dom0-updates.sh) (this script is executed using qrexec by the previously mentioned qubes-dom0-update). Note that we assume that this script might get compromised and fetch maliciously compromised downloads -- this is not a problem as Dom0 verifies digital signatures on updates later. The downloaded rpm files are placed in a `/var/lib/qubes/dom0-updates` directory on UpdateVM filesystem (again, they might get compromised while being kept there, still this isn't a problem). This directory is passed to yum using the `--installroot=` option. Once updates are downloaded, the update script that runs in UpdateVM requests an RPM service [qubes.ReceiveUpdates](https://github.com/QubesOS/qubes-core-admin-linux/blob/release2/dom0-updates/qubes.ReceiveUpdates) to be executed in Dom0. 
This service is implemented by [qubes-receive-updates script](https://github.com/QubesOS/qubes-core-admin-linux/blob/release2/dom0-updates/qubes-receive-updates) running in Dom0. The Dom0's qubes-dom0-update script (which originally initiated the whole update process) waits until qubes-receive-updates finished. diff --git a/developer/services/qfilecopy.md b/developer/services/qfilecopy.md index 2f3fc0a6f..dea4ef513 100644 --- a/developer/services/qfilecopy.md +++ b/developer/services/qfilecopy.md @@ -24,6 +24,6 @@ This has the following disadvantages: In modern Qubes OS releases, we have reimplemented interVM file copy using qrexec, which addresses the above mentioned disadvantages. Nowadays, even more generic solution (qubes rpc) is used. See the developer docs on qrexec and qubes rpc. In a nutshell, the file sender and the file receiver just read/write from stdin/stdout, and the qubes rpc layer passes data properly - so, no block devices are used. -The rpc action for regular file copy is *qubes.Filecopy*, the rpc client is named *qfile-agent*, the rpc server is named *qfile-unpacker*. For DispVM copy, the rpc action is *qubes.OpenInVM*, the rpc client is named *qopen-in-vm*, rpc server is named *vm-file-editor*. Note that the qubes.OpenInVM action can be done on a normal AppVM, too. +The rpc action for regular file copy is *qubes.Filecopy*, the rpc client is named *qfile-agent*, the rpc server is named *qfile-unpacker*. For DispVM copy, the rpc action is *qubes.OpenInVM*, the rpc client is named *qopen-in-vm*, rpc server is named *vm-file-editor*. Note that the *qubes.OpenInVM* action can be done on a normal AppVM, too. Being a rpc server, *qfile-unpacker* must be coded securely, as it processes potentially untrusted data format. 
Particularly, we do not want to use external tar or cpio and be prone to all vulnerabilities in them; we want a simplified, small utility, that handles only directory/file/symlink file type, permissions, mtime/atime, and assume user/user ownership. In the current implementation, the code that actually parses the data from srcVM has ca 100 lines of code and executes chrooted to the destination directory. The latter is hardcoded to `~user/QubesIncoming/srcVM`; because of chroot, there is no possibility to alter files outside of this directory. diff --git a/developer/services/qmemman.md b/developer/services/qmemman.md index 03495c257..f1f4d0d21 100644 --- a/developer/services/qmemman.md +++ b/developer/services/qmemman.md @@ -15,7 +15,7 @@ Rationale Traditionally, Xen VMs are assigned a fixed amount of memory. It is not the optimal solution, as some VMs may require more memory than assigned initially, while others underutilize memory. Thus, there is a need for solution capable of shifting free memory from VM to another VM. -The [tmem](https://oss.oracle.com/projects/tmem/) project provides a "pseudo-RAM" that is assigned on per-need basis. However this solution has some disadvantages: +The [tmem](https://web.archive.org/web/20210712161104/https://oss.oracle.com/projects/tmem/) project provides a "pseudo-RAM" that is assigned on per-need basis. However this solution has some disadvantages: - It does not provide real RAM, just an interface to copy memory to/from fast, RAM-based storage. It is perfect for swap, good for file cache, but not ideal for many tasks. - It is deeply integrated with the Linux kernel. When Qubes will support Windows guests natively, we would have to port *tmem* to Windows, which may be challenging. @@ -24,13 +24,19 @@ Therefore, in Qubes another solution is used. There is the *qmemman* dom0 daemon Similarly, when there is need for Xen free memory (for instance, in order to create a new VM), traditionally the memory is obtained from dom0 only. 
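The balancing idea described above can be illustrated with a toy model. Note this is a deliberate simplification for intuition only, not the real qmemman algorithm (which the text below notes is itself necessarily simple):

```python
# Toy model of memory balancing: redistribute a free-memory pool to
# domains in proportion to their reported demand.
# A simplification, not the actual qmemman algorithm.

def balance(free_memory_mb: int, demand_mb: dict) -> dict:
    """Give each domain a share of free memory proportional to its demand."""
    total = sum(demand_mb.values())
    if total == 0:
        return {dom: 0 for dom in demand_mb}
    return {dom: free_memory_mb * d // total for dom, d in demand_mb.items()}

# Example: 3000 MB split between a busy and an idle domain
print(balance(3000, {"work": 2000, "personal": 1000}))
```

A real balancer must additionally respect per-domain minimums and maximums and react incrementally as demand reports arrive, which is where the complexity (and the caveats listed below) comes from.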
When *qmemman* is running, it offers an interface to obtain memory from all domains. -To sum up, *qmemman* pros and cons. Pros: +To sum up, *qmemman* pros and cons. + +
+ Pros +
- provides automatic balancing of memory across participating PV and HVM domains, based on their memory demand - works well in practice, with less than 1% CPU consumption in the idle case - simple, concise implementation -Cons: +
+ Cons +
- the algorithm to calculate the memory requirement for a domain is necessarily simple, and may not closely reflect reality - *qmemman* is notified by a VM about memory usage change not more often than 10 times per second (to limit CPU overhead in VM). Thus, there can be up to 0.1s delay until qmemman starts to react to the new memory requirements diff --git a/developer/services/qrexec-internals.md b/developer/services/qrexec-internals.md index 7e05d2782..1888e0e6c 100644 --- a/developer/services/qrexec-internals.md +++ b/developer/services/qrexec-internals.md @@ -122,25 +122,25 @@ Details of all possible use cases and the messages involved are described below. qrexec-client -d domX [-l local_program] user:cmd - (If `local_program` is set, `qrexec-client` executes it and uses that child's stdin/stdout in place of its own when exchanging data with `qrexec-agent` later.) + - (If `local_program` is set, `qrexec-client` executes it and uses that child's stdin/stdout in place of its own when exchanging data with `qrexec-agent` later.) - `qrexec-client` translates that request into a `MSG_EXEC_CMDLINE` message sent to `qrexec-daemon`, with `connect_domain` set to 0 (connect to **dom0**) and `connect_port also set to 0 (allocate a port). + - `qrexec-client` translates that request into a `MSG_EXEC_CMDLINE` message sent to `qrexec-daemon`, with `connect_domain` set to 0 (connect to **dom0**) and `connect_port` also set to 0 (allocate a port). - **dom0**: `qrexec-daemon` allocates a free port (in this case 513), and sends a `MSG_EXEC_CMDLINE` back to the client with connection parameters (**domX** and 513) and with command field empty. - `qrexec-client` disconnects from the daemon, starts a vchan server on port 513 and awaits connection. + - `qrexec-client` disconnects from the daemon, starts a vchan server on port 513 and awaits connection. - Then, `qrexec-daemon` passes on the request as `MSG_EXEC_CMDLINE` message to the `qrexec-agent` running in **domX**. 
In this case, the connection parameters are **dom0** and 513. + - Then, `qrexec-daemon` passes on the request as `MSG_EXEC_CMDLINE` message to the `qrexec-agent` running in **domX**. In this case, the connection parameters are **dom0** and 513. - **domX**: `qrexec-agent` receives `MSG_EXEC_CMDLINE`, and starts the command (`user:cmd`, or `cmd` as user `user`). If possible, this is actually delegated to a separate server (`qrexec-fork-server`) also running on domX. - After starting the command, `qrexec-fork-server` connects to `qrexec-client` in **dom0** over the provided vchan port 513. + - After starting the command, `qrexec-fork-server` connects to `qrexec-client` in **dom0** over the provided vchan port 513. - Data is forwarded between the `qrexec-client` in **dom0** and the command executed in **domX** using `MSG_DATA_STDIN`, `MSG_DATA_STDOUT` and `MSG_DATA_STDERR`. - Empty messages (with data `len` field set to 0 in `msg_header`) are an EOF marker. Peer receiving such message should close the associated input/output pipe. + - Empty messages (with data `len` field set to 0 in `msg_header`) are an EOF marker. Peer receiving such message should close the associated input/output pipe. - When `cmd` terminates, **domX**'s `qrexec-fork-server` sends `MSG_DATA_EXIT_CODE` header to `qrexec-client` followed by the exit code (**int**). + - When `cmd` terminates, **domX**'s `qrexec-fork-server` sends `MSG_DATA_EXIT_CODE` header to `qrexec-client` followed by the exit code (**int**). ### domX: request execution of service `admin.Service` in dom0 @@ -150,41 +150,41 @@ Details of all possible use cases and the messages involved are described below. qrexec-client-vm dom0 admin.Service [local_program] [params] - (If `local_program` is set, it will be executed in **domX** and connected to the remote command's stdin/stdout). + - (If `local_program` is set, it will be executed in **domX** and connected to the remote command's stdin/stdout). 
- `qrexec-client-vm` connects to `qrexec-agent` and requests service execution (`admin.Service`) in **dom0**. + - `qrexec-client-vm` connects to `qrexec-agent` and requests service execution (`admin.Service`) in **dom0**. - `qrexec-agent` assigns an internal identifier to the request. It's based on a file descriptor of the connected `qrexec-client-vm`: in this case, `SOCKET11`. + - `qrexec-agent` assigns an internal identifier to the request. It's based on a file descriptor of the connected `qrexec-client-vm`: in this case, `SOCKET11`. - `qrexec-agent` forwards the request (`MSG_TRIGGER_SERVICE3`) to its corresponding `qrexec-daemon` running in dom0. + - `qrexec-agent` forwards the request (`MSG_TRIGGER_SERVICE3`) to its corresponding `qrexec-daemon` running in dom0. - **dom0**: `qrexec-daemon` receives the request and triggers `qrexec-policy` program, passing all necessary parameters: source domain **domX**, target domain **dom0**, service `admin.Service` and identifier `SOCKET11`. - `qrexec-policy` evaluates if the RPC should be allowed or denied, possibly also launching a GUI confirmation prompt. + - `qrexec-policy` evaluates if the RPC should be allowed or denied, possibly also launching a GUI confirmation prompt. - (If the RPC is denied, it returns with exit code 1, in which case `qrexec-daemon` sends a `MSG_SERVICE_REFUSED` back). + - (If the RPC is denied, it returns with exit code 1, in which case `qrexec-daemon` sends a `MSG_SERVICE_REFUSED` back). - **dom0**: If the RPC is allowed, `qrexec-policy` will launch a `qrexec-client` with the right command: qrexec-client -d dom0 -c domX,X,SOCKET11 "QUBESRPC admin.Service domX name dom0" - The `-c domX,X,SOCKET11` are parameters indicating how connect back to **domX** and pass its input/output. + - The `-c domX,X,SOCKET11` are parameters indicating how to connect back to **domX** and pass its input/output.
- The command parameter describes the RPC call: it contains service name (`admin.Service`), source domain (`domX`) and target description (`name dom0`, could also be e.g. `keyword @dispvm`). The target description is important in case the original target wasn't dom0, but the service is executing in dom0. + - The command parameter describes the RPC call: it contains service name (`admin.Service`), source domain (`domX`) and target description (`name dom0`, could also be e.g. `keyword @dispvm`). The target description is important in case the original target wasn't dom0, but the service is executing in dom0. - `qrexec-client` connects to a `qrexec-daemon` for **domX** and sends a `MSG_SERVICE_CONNECT` with connection parameters (**dom0**, and port 0, indicating a port should be allocated) and request identifier (`SOCKET11`). + - `qrexec-client` connects to a `qrexec-daemon` for **domX** and sends a `MSG_SERVICE_CONNECT` with connection parameters (**dom0**, and port 0, indicating a port should be allocated) and request identifier (`SOCKET11`). - `qrexec-daemon` allocates a free port (513) and sends back connection parameters to `qrexec-client` (**domX** port 513). + - `qrexec-daemon` allocates a free port (513) and sends back connection parameters to `qrexec-client` (**domX** port 513). - `qrexec-client` starts the command, and tries to connect to **domX** over the provided port 513. + - `qrexec-client` starts the command, and tries to connect to **domX** over the provided port 513. - Then, `qrexec-daemon` forwards the connection request (`MSG_SERVICE_CONNECT`) to `qrexec-agent` running in **domX**, with the right parameters (**dom0** port 513, request `SOCKET11`). + - Then, `qrexec-daemon` forwards the connection request (`MSG_SERVICE_CONNECT`) to `qrexec-agent` running in **domX**, with the right parameters (**dom0** port 513, request `SOCKET11`). 
- **dom0**: Because the command has the form `QUBESRPC: ...`, it is started through the `qubes-rpc-multiplexer` program with the provided parameters (`admin.Service domX name dom0`). That program finds and executes the necessary script in `/etc/qubes-rpc/`. - **domX**: `qrexec-agent` receives the `MSG_SERVICE_CONNECT` and passes the connection parameters back to the connected `qrexec-client-vm`. It identifies the `qrexec-client-vm` by the request identifier (`SOCKET11` means file descriptor 11). - `qrexec-client-vm` starts a vchan server on 513 and receives a connection from `qrexec-client`. + - `qrexec-client-vm` starts a vchan server on port 513 and receives a connection from `qrexec-client`. - Data is forwarded between **dom0** and **domX** as in the previous example (dom0-VM). @@ -196,37 +196,37 @@ Details of all possible use cases and the messages involved are described below. qrexec-client-vm domY qubes.Service [local_program] [params] - (If `local_program` is set, it will be executed in **domX** and connected to the remote command's stdin/stdout). + - (If `local_program` is set, it will be executed in **domX** and connected to the remote command's stdin/stdout). - The request is forwarded as `MSG_TRIGGER_SERVICE3` to `qrexec-daemon` running in **dom0**, then to `qrexec-policy`, then (if allowed) to `qrexec-client`. - This is the same as in the previous example (VM-dom0). + - This is the same as in the previous example (VM-dom0). - **dom0**: If the RPC is allowed, `qrexec-policy` will launch a `qrexec-client` with the right command: qrexec-client -d domY -c domX,X,SOCKET11 user:cmd "DEFAULT:QUBESRPC qubes.Service domX" - The `-c domX,X,SOCKET11` are parameters indicating how connect back to **domX** and pass its input/output. + - The `-c domX,X,SOCKET11` are parameters indicating how to connect back to **domX** and pass its input/output.
- The command parameter describes the service call: it contains the username (or `DEFAULT`), service name (`qubes.Service`) and source domain (`domX`). + - The command parameter describes the service call: it contains the username (or `DEFAULT`), service name (`qubes.Service`) and source domain (`domX`). - `qrexec-client` will then send a `MSG_EXEC_CMDLINE` message to `qrexec-daemon` for **domY**. The message will be with port number 0, requesting port allocation. + - `qrexec-client` will then send a `MSG_EXEC_CMDLINE` message to `qrexec-daemon` for **domY**. The message will be sent with port number 0, requesting port allocation. - `qrexec-daemon` for **domY** will allocate a port (513) and send it back. It will also send a `MSG_EXEC_CMDLINE` to its corresponding agent. (It will also translate `DEFAULT` to the configured default username). + - `qrexec-daemon` for **domY** will allocate a port (513) and send it back. It will also send a `MSG_EXEC_CMDLINE` to its corresponding agent. (It will also translate `DEFAULT` to the configured default username). - Then, `qrexec-client` will also send `MSG_SERVICE_CONNECT` message to **domX**'s agent, indicating that it should connect to **domY** over port 513. + - Then, `qrexec-client` will also send a `MSG_SERVICE_CONNECT` message to **domX**'s agent, indicating that it should connect to **domY** over port 513. - Having notified both domains about a connection, `qrexec-client` now exits. + - Having notified both domains about a connection, `qrexec-client` now exits. - **domX**: `qrexec-agent` receives a `MSG_SERVICE_CONNECT` with connection parameters (**domY** port 513) and request identifier (`SOCKET11`). It sends the connection parameters back to the right `qrexec-client-vm`. - `qrexec-client-vm` starts a vchan server on port 513. note that this is different than in the other examples: `MSG_SERVICE_CONNECT` means you should start a server, `MSG_EXEC_CMDLINE` means you should start a client.
+ - `qrexec-client-vm` starts a vchan server on port 513. Note that this is different from the other examples: `MSG_SERVICE_CONNECT` means you should start a server, `MSG_EXEC_CMDLINE` means you should start a client. - **domY**: `qrexec-agent` receives a `MSG_EXEC_CMDLINE` with the command to execute (`user:QUBESRPC...`) and connection parameters (**domX** port 513). - It forwards the request to `qrexec-fork-server`, which handles the command and connects to **domX** over the provided port. + - It forwards the request to `qrexec-fork-server`, which handles the command and connects to **domX** over the provided port. - Because the command is of the form `QUBESRPC ...`, `qrexec-fork-server` starts it using `qubes-rpc-multiplexer` program, which finds and executes the necessary script in `/etc/qubes-rpc/`. + - Because the command is of the form `QUBESRPC ...`, `qrexec-fork-server` starts it using the `qubes-rpc-multiplexer` program, which finds and executes the necessary script in `/etc/qubes-rpc/`. - After that, the data is passed between **domX** and **domY** as in the previous examples (dom0-VM, VM-dom0).
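The bookkeeping described in these walkthroughs — request identifiers derived from the file descriptor the client connected on, and vchan data ports handed out starting at 513 whenever a message carries port 0 — can be sketched as a toy model. This is only an illustration of the scheme described above, not the real C implementation:

```python
# Toy model of the qrexec bookkeeping described above -- NOT the real
# implementation. A pending request is identified by the file descriptor
# the qrexec-client-vm connected on (fd 11 -> "SOCKET11"), and the daemon
# allocates vchan data ports starting at 513 when a message carries port 0.

class ToyQrexecDaemon:
    FIRST_DATA_PORT = 513

    def __init__(self):
        self.used_ports = set()
        self.pending = {}  # request identifier -> (peer domain, port)

    @staticmethod
    def request_id(client_fd):
        # e.g. qrexec-client-vm connected on fd 11 -> "SOCKET11"
        return f"SOCKET{client_fd}"

    def service_connect(self, req_id, peer_domain, port=0):
        # A MSG_SERVICE_CONNECT carrying port 0 asks the daemon to pick
        # the next free data port; a nonzero port is used as-is.
        if port == 0:
            port = self.FIRST_DATA_PORT
            while port in self.used_ports:
                port += 1
        self.used_ports.add(port)
        self.pending[req_id] = (peer_domain, port)
        return peer_domain, port


daemon = ToyQrexecDaemon()
req = ToyQrexecDaemon.request_id(11)              # "SOCKET11"
peer, port = daemon.service_connect(req, "dom0")  # first allocation: port 513
```

The point of the sketch is only the two conventions the protocol relies on: the identifier survives until the connection parameters come back, and port 0 in a message means "allocate one for me".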
diff --git a/developer/services/qrexec.md b/developer/services/qrexec.md index ded9fa970..f0620f2ac 100644 --- a/developer/services/qrexec.md +++ b/developer/services/qrexec.md @@ -250,7 +250,6 @@ This means it is possible to install a different script for a particular service See [below](#rpc-service-with-argument-file-reader) for an example of an RPC service using an argument. - ## Qubes RPC examples diff --git a/developer/services/qrexec2.md b/developer/services/qrexec2.md index 33d0f2193..8bbffb341 100644 --- a/developer/services/qrexec2.md +++ b/developer/services/qrexec2.md @@ -17,7 +17,7 @@ Qubes **qrexec** is a framework for implementing inter-VM (incl. Dom0-VM) services. It offers a mechanism to start programs in VMs, redirect their stdin/stdout, and a policy framework to control this all. -## Qrexec basics ## +## Qrexec basics During each domain creation a process named `qrexec-daemon` is started in dom0, and a process named `qrexec-agent` is started in the VM. They are @@ -56,7 +56,7 @@ There is a similar command line utility available inside Linux AppVMs (note the `-vm` suffix): `qrexec-client-vm` that will be described in subsequent sections. -## Qubes RPC services ## +## Qubes RPC services Apart from simple Dom0-\>VM command executions, as discussed above, it is also useful to have more advanced infrastructure for controlled inter-VM @@ -90,7 +90,7 @@ themselves. Qrexec framework is careful about connecting the stdin/stdout of the server process with the corresponding stdin/stdout of the requesting process in the requesting VM (see example Hello World service described below). -## Qubes RPC administration ## +## Qubes RPC administration Besides each VM needing to provide explicit programs to serve each supported service, the inter-VM service RPC is also governed by a central policy in Dom0. @@ -135,7 +135,7 @@ if still there is no policy file after prompting, the action is denied. 
On the target VM, the `/etc/qubes-rpc/XYZ` must exist, containing the file name of the program that will be invoked. -### Requesting VM-VM (and VM-Dom0) services execution ### +### Requesting VM-VM (and VM-Dom0) services execution In a src VM, one should invoke the qrexec client via the following command: @@ -161,7 +161,7 @@ If requesting VM-VM (and VM-Dom0) services execution *without cmdline helper*, connect directly to `/var/run/qubes/qrexec-agent-fdpass` socket as described [below](#all-the-pieces-together-at-work). -### Revoking "Yes to All" authorization ### +### Revoking "Yes to All" authorization Qubes RPC policy supports an "ask" action, that will prompt the user whether a given RPC call should be allowed. It is set as default for services such @@ -184,7 +184,7 @@ A user might also want to set their own policies in this section. This may mostly serve to prevent the user from mistakenly copying files or text from a trusted to untrusted domain, or vice-versa. -### Qubes RPC "Hello World" service ### +### Qubes RPC "Hello World" service We will show the necessary files to create a simple RPC call that adds two integers on the target VM and returns back the result to the invoking VM. @@ -232,7 +232,7 @@ be allowed. **Note:** For a real world example of writing a qrexec service, see this [blog post](https://blog.invisiblethings.org/2013/02/21/converting-untrusted-pdfs-into-trusted.html). -### More high-level RPCs? ### +### More high-level RPCs? As previously noted, Qubes aims to provide mechanisms that are very simple and thus with very small attack surface. This is the reason why the inter-VM @@ -242,14 +242,14 @@ users/app developers are always free to run more high-level RPC protocols on top of qrexec. Care should be taken, however, to consider potential attack surfaces that are exposed to untrusted or less trusted VMs in that case. -# Qubes RPC internals # +## Qubes RPC internals (*This is about the implementation of qrexec v2. 
For the implementation of qrexec v3, see [here](/doc/qrexec-internals/). Note that the user API in v3 is backward compatible: qrexec apps written for Qubes R2 should run without modification on Qubes R3.*) -## Dom0 tools implementation ## +## Dom0 tools implementation Players: @@ -262,7 +262,7 @@ Players: **Note:** None of the above tools are designed to be used by users. -## Linux VMs implementation ## +## Linux VMs implementation Players: @@ -275,7 +275,7 @@ Players: **Note:** None of the above tools are designed to be used by users. `qrexec-client-vm` is designed to be wrapped up by Qubes apps. -## Windows VMs implementation ## +## Windows VMs implementation `%QUBES_DIR%` is the installation path (`c:\Program Files\Invisible Things Lab\Qubes OS Windows Tools` by default). @@ -291,7 +291,7 @@ Lab\Qubes OS Windows Tools` by default). **Note:** None of the above tools are designed to be used by users. `qrexec-client-vm` is designed to be wrapped up by Qubes apps. -## All the pieces together at work ## +## All the pieces together at work **Note:** This section is not needed to use qrexec for writing Qubes apps. 
Also note the [qrexec framework implemention in Qubes R3](/doc/qrexec3/) diff --git a/developer/system/gui.md b/developer/system/gui.md index 6de31dc7d..3eecccf53 100644 --- a/developer/system/gui.md +++ b/developer/system/gui.md @@ -16,8 +16,8 @@ title: GUI virtualization All AppVM X applications connect to local (running in AppVM) Xorg servers that use the following "hardware" drivers: -- *`dummyqsb_drv`* - video driver, that paints onto a framebuffer located in RAM, not connected to real hardware -- *`qubes_drv`* - it provides a virtual keyboard and mouse (in fact, more, see below) +- `dummyqsb_drv` - a video driver that paints onto a framebuffer located in RAM, not connected to real hardware +- `qubes_drv` - provides a virtual keyboard and mouse (in fact, more; see below) For each AppVM, there is a pair of `qubes-gui` (running in AppVM) and `qubes-guid` (running in the AppVM’s GuiVM, dom0 by default) processes connected over vchan. The main responsibilities of `qubes-gui` are: @@ -119,7 +119,7 @@ AppVM -> GuiVM messages Proper handling of the below messages is security-critical. Note that all messages except for `CLIPBOARD`, `MFNDUMP`, and `WINDOW_DUMP` have fixed size, so the parsing code can be small. -The `override_redirect` window attribute is explained at [Override Redirect Flag](https://tronche.com/gui/x/xlib/window/attributes/override-redirect.html). The `transient_for` attribute is explained at [`transient_for` attribute](https://tronche.com/gui/x/icccm/sec-4.html#WM_TRANSIENT_FOR). +The `override_redirect` window attribute is explained at [Override Redirect Flag](https://tronche.com/gui/x/xlib/window/attributes/override-redirect.html). The `transient_for` attribute is explained at [WM_TRANSIENT_FOR](https://tronche.com/gui/x/icccm/sec-4.html#WM_TRANSIENT_FOR).
Window manager hints and flags are described in the [Extended Window Manager Hints (EWMH) spec](https://standards.freedesktop.org/wm-spec/latest/), especially under the `_NET_WM_STATE` section. diff --git a/developer/system/template-implementation.md b/developer/system/template-implementation.md index 1bd4dd2b4..6f08977a1 100644 --- a/developer/system/template-implementation.md +++ b/developer/system/template-implementation.md @@ -27,7 +27,7 @@ This is mounted as /rw and here is placed all VM private data. This includes: - */usr/local* – which is symlink to /rw/usrlocal - some config files (/rw/config) called by qubes core scripts (ex /rw/config/rc.local) -**Note:** Whenever a TemplateBasedVM is created, the contents of the `/home` directory of its parent TemplateVM [are *not* copied to the child TemplateBasedVM's `/home`](/doc/templates/#inheritance-and-persistence). The child TemplateBasedVM's `/home` is independent from its parent TemplateVM's `/home`, which means that any changes to the parent TemplateVM's `/home` will not affect the child TemplateBasedVM's `/home`. Once a TemplateBasedVM has been created, any changes in its `/home`, `/usr/local`, or `/rw/config` directories will be persistent across reboots, which means that any files stored there will still be available after restarting the TemplateBasedVM. No changes in any other directories in TemplateBasedVMs persist in this manner. If you would like to make changes in other directories which *do* persist in this manner, you must make those changes in the parent TemplateVM. +**Note:** Whenever a TemplateBasedVM is created, the contents of the `/home` directory of its parent TemplateVM are *not* copied to the [child TemplateBasedVM's](/doc/templates/#inheritance-and-persistence) `/home`. The child TemplateBasedVM's `/home` is independent from its parent TemplateVM's `/home`, which means that any changes to the parent TemplateVM's `/home` will not affect the child TemplateBasedVM's `/home`. 
Once a TemplateBasedVM has been created, any changes in its `/home`, `/usr/local`, or `/rw/config` directories will be persistent across reboots, which means that any files stored there will still be available after restarting the TemplateBasedVM. No changes in any other directories in TemplateBasedVMs persist in this manner. If you would like to make changes in other directories which *do* persist in this manner, you must make those changes in the parent TemplateVM. ### modules.img (xvdd) diff --git a/developer/system/template-manager.md b/developer/system/template-manager.md index 82d57d53e..6a3912e29 100644 --- a/developer/system/template-manager.md +++ b/developer/system/template-manager.md @@ -23,7 +23,7 @@ specifically override the changes.) Other operations that work well on normal VMs are also somewhat inconsistent on RPM-managed templates. This includes actions such as renaming ([#839](https://github.com/QubesOS/qubes-issues/issues/839)), removal -([#5509](https://github.com/QubesOS/qubes-issues/issues/5509)) and +([#5509](https://web.archive.org/web/20210526123932/https://github.com/QubesOS/qubes-issues/issues/5509)) and backup/restore ([#1385](https://github.com/QubesOS/qubes-issues/issues/1385), [#1453](https://github.com/QubesOS/qubes-issues/issues/1453), [discussion thread diff --git a/introduction/faq.md b/introduction/faq.md index f869f760d..49ddeeb19 100644 --- a/introduction/faq.md +++ b/introduction/faq.md @@ -144,7 +144,7 @@ Briefly, here are some of the main pros and cons of this approach relative to Qu (For example, you might find it natural to lock your secure laptop in a safe when you take your unsecure laptop out with you.)
- Cons + Cons
- Physical separation can be cumbersome and expensive, since we may have to obtain and set up a separate physical machine for each security level we need. @@ -190,7 +190,7 @@ By default, Qubes OS uses [LUKS](https://en.wikipedia.org/wiki/Linux_Unified_Key ### What do all these terms mean? -All Qubes-specific terms are defined in the [glossary](/doc/glossary/) +All Qubes-specific terms are defined in the [glossary](/doc/glossary/). ### Does Qubes run every app in a separate VM? diff --git a/introduction/getting-started.md b/introduction/getting-started.md index 3f02e0fa3..3c98450ea 100644 --- a/introduction/getting-started.md +++ b/introduction/getting-started.md @@ -149,7 +149,7 @@ The default terminal emulator in Qubes is Xfce Terminal. Opening a terminal emulator in dom0 can be done in several ways: - Go to the App Menu, click on the Settings icon, choose Other from the drop-down menu, and select **Xfce Terminal Emulator** at the bottom. -- Press `Alt`+`F3` and search for `xfce terminal`. +- Press `Alt` + `F3` and search for `xfce terminal`. - Right-click on the desktop and select **Open Terminal Here**. Various command-line tools are described as part of this guide, and the whole reference can be found [here](/doc/tools/). diff --git a/introduction/screenshots.md b/introduction/screenshots.md index 1f4b04d0f..36aea7f37 100644 --- a/introduction/screenshots.md +++ b/introduction/screenshots.md @@ -61,6 +61,7 @@ Qubes is all about seamless integration from the user’s point of view. Here yo [![r4.0-manager-and-sysnet-network-prompt.png](/attachment/doc/r4.0-manager-and-sysnet-network-prompt.png)](/attachment/doc/r4.0-manager-and-sysnet-network-prompt.png) All the networking runs in a special, unprivileged NetVM. (Notice the red frame around the Network Manager dialog box on the screen above.) This means that in the event that your network card driver, Wi-Fi stack, or DHCP client is compromised, the integrity of the rest of the system will not be affected! 
This feature requires Intel VT-d or AMD IOMMU hardware (e.g., Core i5/i7 systems) + * * * * * [![r4.0-software-update.png](/attachment/doc/r4.0-software-update.png)](/attachment/doc/r4.0-software-update.png) diff --git a/introduction/support.md b/introduction/support.md index e7e9d71aa..b41765dad 100644 --- a/introduction/support.md +++ b/introduction/support.md @@ -499,10 +499,11 @@ too](https://forum.qubes-os.org/t/using-the-forum-via-email/533)!) The Qubes OS Project has a presence on the following social media platforms: -- Twitter -- Mastodon -- Reddit -- LinkedIn +- [Twitter](https://twitter.com/QubesOS) +- [Mastodon](https://mastodon.social/@QubesOS) +- [Reddit](https://www.reddit.com/r/Qubes/) +- [Facebook](https://www.facebook.com/QubesOS/) +- [LinkedIn](https://www.linkedin.com/company/qubes-os/) Generally speaking, these are not intended to be primary support venues. (Those would be [qubes-users](#qubes-users) and the [forum](#forum).) Rather, these @@ -514,6 +515,7 @@ news. ## Chat If you'd like to chat, join us on + - the `#qubes` channel on `irc.libera.chat` or - the `#qubes:invisiblethingslab.com` matrix channel. diff --git a/project-security/verifying-signatures.md b/project-security/verifying-signatures.md index 54e230e83..2a483f01c 100644 --- a/project-security/verifying-signatures.md +++ b/project-security/verifying-signatures.md @@ -833,6 +833,7 @@ the arguments to `gpg2`. (The signature file goes first.) ### Why am I getting "WARNING: This key is not certified with a trusted signature! There is no indication that the signature belongs to the owner."? There are several possibilities: + - You don't have the [Qubes Master Signing Key](#how-to-import-and-authenticate-the-qubes-master-signing-key). 
- You have not [set the Qubes Master Signing Key's trust level diff --git a/user/advanced-topics/bind-dirs.md b/user/advanced-topics/bind-dirs.md index fe6923ab8..07c470b4a 100644 --- a/user/advanced-topics/bind-dirs.md +++ b/user/advanced-topics/bind-dirs.md @@ -8,21 +8,20 @@ ref: 186 title: How to make any file persistent (bind-dirs) --- -## What are bind-dirs? ## +## What are bind-dirs? With [bind-dirs](https://github.com/QubesOS/qubes-core-agent-linux/blob/master/vm-systemd/bind-dirs.sh) any arbitrary files or folders can be made persistent in app qubes. -## What is it useful for? ## +## What is it useful for? In an app qube all of the file system comes from the template except `/home`, `/usr/local`, and `/rw`. This means that changes in the rest of the filesystem are lost when the app qube is shutdown. bind-dirs provides a mechanism whereby files usually taken from the template can be persisted across reboots. -For example, in Whonix, [Tor's data dir `/var/lib/tor` has been made persistent in the TemplateBased ProxyVM sys-whonix](https://github.com/Whonix/qubes-whonix/blob/8438d13d75822e9ea800b9eb6024063f476636ff/usr/lib/qubes-bind-dirs.d/40_qubes-whonix.conf#L5) -In this way sys-whonix can benefit from the Tor anonymity feature 'persistent Tor entry guards' but does not have to be a standalone. +For example, in Whonix, Tor's data dir `/var/lib/tor` [has been made persistent in the TemplateBased ProxyVM sys-whonix](https://github.com/Whonix/qubes-whonix/blob/8438d13d75822e9ea800b9eb6024063f476636ff/usr/lib/qubes-bind-dirs.d/40_qubes-whonix.conf#L5). In this way sys-whonix can benefit from the Tor anonymity feature 'persistent Tor entry guards' but does not have to be a standalone. -## How to use bind-dirs.sh? ## +## How to use bind-dirs.sh? In this example, we want to make `/var/lib/tor` persistent. Enter all of the following commands in your app qube. 
@@ -68,13 +67,13 @@ binds+=( '/var/lib/tor' ) binds+=( '/etc/tor/torrc' ) ``` -## Other Configuration Folders ## +## Other Configuration Folders * `/usr/lib/qubes-bind-dirs.d` (lowest priority, for packages) * `/etc/qubes-bind-dirs.d` (intermediate priority, for template wide configuration) * `/rw/config/qubes-bind-dirs.d` (highest priority, for per VM configuration) -## How does it work? ## +## How does it work? bind-dirs.sh is called at startup of an app qube, and configuration files in the above configuration folders are parsed to build a bash array. Files or folders identified in the array are copied to `/rw/bind-dirs` if they do not already exist there, and are then bind mounted over the original files/folders. @@ -84,7 +83,7 @@ Creation of the files and folders in `/rw/bind-dirs` should be automatic the fir If you want to circumvent this process, you can create the relevant file structure under `/rw/bind-dirs` and make any changes at the same time that you perform the configuration, before reboot. Note that you must create the full folder structure under `/rw/bind-dirs` - e.g you would have to create `/rw/bind-dirs/var/lib/tor` -## Limitations ## +## Limitations * Files that exist in the template root image cannot be deleted in the app qubes root image using bind-dirs.sh. * Re-running `sudo /usr/lib/qubes/init/bind-dirs.sh` without a previous `sudo /usr/lib/qubes/init/bind-dirs.sh umount` does not work. @@ -93,9 +92,9 @@ Note that you must create the full folder structure under `/rw/bind-dirs` - e.g Any changes you make will not survive a reboot. If you think it likely you will want to edit a file, then either include the parent directory in bind-dirs rather than the file, or perform the file operation on the file in `/rw/bind-dirs`. * Some files are altered when a qube boots - e.g. `/etc/hosts`. If you try to use bind-dirs on such files you may break your qube in unpredictable ways. 
-You can add persistent rules to `/etc/hosts` using [`/rw/config/rc.local`](/doc/config-files) +You can add persistent rules to `/etc/hosts` using [`/rw/config/rc.local`](/doc/config-files). -## How to remove binds from bind-dirs.sh? ## +## How to remove binds from bind-dirs.sh? `binds` is actually just a bash variable (an array) and the bind-dirs.sh configuration folders are sourced as bash snippets in lexical order. Therefore if you wanted to remove an existing entry from the `binds` array, you could do that by using a lexically higher configuration file. @@ -111,7 +110,8 @@ binds=( "${binds[@]/'/var/lib/tor'}" ) ## Custom persist feature ## -Custom persist is an optional advanced feature allowing the creation of minimal state AppVM. The purpose of such an AppVM is to avoid unwanted data to persist as much as possible by the disabling the ability to configure persistence from the VM itself. When enabled, the following happens: +Custom persist is an optional advanced feature allowing the creation of a minimal-state AppVM. The purpose of such an AppVM is to prevent, as far as possible, unwanted data from persisting, by disabling the ability to configure persistence from the VM itself. When enabled, the following happens: + * ``/rw/config/rc.local`` is no longer executed * ``/rw/config/qubes-firewall-user-script`` is ignored * ``/rw/config/suspend-module-blacklist`` is ignored diff --git a/user/advanced-topics/config-files.md b/user/advanced-topics/config-files.md index 5ca7b22ff..2a0e33c9b 100644 --- a/user/advanced-topics/config-files.md +++ b/user/advanced-topics/config-files.md @@ -115,8 +115,7 @@ VM: { Currently supported settings: - `allow_fullscreen` - allow VM to request its windows to go fullscreen (without any colorful frame). - - **Note:** Regardless of this setting, you can always put a window into fullscreen mode in Xfce4 using the trusted window manager by right-clicking on a window's title bar and selecting "Fullscreen".
+ - **Note:** Regardless of this setting, you can always put a window into fullscreen mode in Xfce4 using the trusted window manager by right-clicking on a window's title bar and selecting "Fullscreen". This functionality should still be considered safe, since a VM window still can't voluntarily enter fullscreen mode. The user must select this option from the trusted window manager in dom0. To exit fullscreen mode from here, press `alt` + `space` to bring up the title bar menu again, then select "Leave Fullscreen". diff --git a/user/advanced-topics/gui-configuration.md b/user/advanced-topics/gui-configuration.md index 2f02bbe86..533cb0846 100644 --- a/user/advanced-topics/gui-configuration.md +++ b/user/advanced-topics/gui-configuration.md @@ -21,7 +21,7 @@ qvm-features dom0 gui-videoram-min $(($WIDTH * $HEIGHT * 4 / 1024)) qvm-features dom0 gui-videoram-overhead 0 ``` -Where `$WIDTH`×`$HEIGHT` is the maximum desktop size that you anticipate needing. +Where `$WIDTH` × `$HEIGHT` is the maximum desktop size that you anticipate needing. For example, if you expect to use a 1080p display and a 4k display side-by-side, that is `(1920 + 3840) × 2160 × 4 / 1024 = 48600`, or slightly more than 48 MiB per qube. After making these adjustments, the qubes need to be restarted. @@ -31,6 +31,7 @@ qvm-features dom0 gui-videoram-min $(xrandr --verbose | grep "Screen 0" | sed -e ``` The amount of memory allocated per qube is the maximum of: + - `gui-videoram-min` - current display + `gui-videoram-overhead` @@ -41,8 +42,8 @@ You might face issues when playing video, if the video is choppy instead of smooth display this could be because the X server doesn't work. You can use the Linux terminal (Ctrl-Alt-F2) after starting the virtual machine, login. You can look at the Xorg logs file. 
As an option you can have the below config as -well present in `/etc/X11/xorg.conf.d/90-intel.conf`, depends on HD graphics -though - +well present in `/etc/X11/xorg.conf.d/90-intel.conf` (depends on HD graphics +though). ```bash Section "Device" diff --git a/user/advanced-topics/i3.md b/user/advanced-topics/i3.md index b326988d5..6df92a3ee 100644 --- a/user/advanced-topics/i3.md +++ b/user/advanced-topics/i3.md @@ -24,7 +24,7 @@ optionally in case you would prefer writing your own configuration (see That's it. After logging out, you can select i3 in the login manager. -### Customization +## Customization **Caution:** The following external resources may not have been reviewed by the Qubes team. @@ -34,13 +34,13 @@ That's it. After logging out, you can select i3 in the login manager. * [i3 config with dmenu-i3-window-jumper](https://github.com/anadahz/qubes-i3-config/blob/master/config) * [dmenu script to open a terminal in a chosen VM](https://gist.github.com/dmoerner/65528941dd20b05c98ee79e92d7e0183) -## Compilation and installation from source +## Compilation and installation from source Note that the compilation from source is done in a Fedora based domU (could be dispvm). The end result is always an `.rpm` that is copied to dom0 and then installed through the package manager. -### Getting the code +### Getting the code Clone the i3-qubes repository here: @@ -57,7 +57,7 @@ OS and changes some defaults so the user can't override decisions. If you want to make any changes to the package, this is the time and place to do it. -### Building +### Building You'll need to install the build dependencies, which are listed in build-deps.list. You can verify them and then install them with: @@ -76,7 +76,7 @@ $ make verify-sources $ make rpms ``` -### Installing +### Installing **Warning**: Manually installing software in dom0 is inherently risky, and the method described here circumvents the usual security mechanisms of qubes-dom0-update.
diff --git a/user/advanced-topics/managing-vm-kernels.md b/user/advanced-topics/managing-vm-kernels.md index 1dab8a44a..729bd108d 100644 --- a/user/advanced-topics/managing-vm-kernels.md +++ b/user/advanced-topics/managing-vm-kernels.md @@ -60,7 +60,7 @@ nopat ## Installing different kernel using Qubes kernel package -VM kernels are packages by Qubes team in `kernel-qubes-vm` packages. +VM kernels are packaged by the Qubes team in the `kernel-qubes-vm` packages. Generally, the system will keep the three newest available versions. You can list them with the `rpm` command: @@ -152,8 +152,9 @@ The newly installed package is set as the default VM kernel. ## Installing different VM kernel based on dom0 kernel It is possible to package a kernel installed in dom0 as a VM kernel. -This makes it possible to use a VM kernel which is not packaged by Qubes team. +This makes it possible to use a VM kernel which is not packaged by the Qubes team. This includes: + * using a Fedora kernel package * using a manually compiled kernel @@ -297,7 +298,7 @@ Install distribution kernel image, kernel headers and the grub. sudo apt install linux-image-amd64 linux-headers-amd64 grub2 qubes-kernel-vm-support ~~~ -If you are doing that on a qube based on "Debian Minimal" template, a grub gui will popup during the installation, asking you where you want to install the grub loader. You must select /dev/xvda (check the box using the space bar, and validate your choice with "Enter".) If this popup does not appear during the installation, you must manually setup `grub2` by running: +If you are doing that on a qube based on the "Debian Minimal" template, a grub gui will pop up during the installation, asking you where you want to install the grub loader. You must select `/dev/xvda` (check the box using the space bar, and validate your choice with "Enter".)
If this popup does not appear during the installation, you must manually set up `grub2` by running: ~~~ sudo grub-install /dev/xvda @@ -314,8 +315,8 @@ Go to dom0 -> Qubes VM Manger -> right click on the VM -> Qube settings -> Advan Depends on `Virtualization` mode setting: -* `Virtualization` mode `PV`: Possible, however use of `Virtualization` mode `PV` mode is discouraged for security purposes. - * If you require `Virtualization` mode `PV` mode, install `grub2-xen-pvh` in dom0. This can be done by running command `sudo qubes-dom0-update pvgrub2-pvh` in dom0. +* `Virtualization` mode `PV`: Possible, however use of `Virtualization` mode `PV` is discouraged for security purposes. + * If you require `Virtualization` mode `PV`, install `grub2-xen-pvh` in dom0. This can be done by running the command `sudo qubes-dom0-update pvgrub2-pvh` in dom0. * `Virtualization` mode `PVH`: Possible. Install `grub2-xen-pvh` in dom0. * `Virtualization` mode `HVM`: Possible. diff --git a/user/advanced-topics/mount-from-other-os.md b/user/advanced-topics/mount-from-other-os.md index 864416661..a5e16cbfb 100644 --- a/user/advanced-topics/mount-from-other-os.md +++ b/user/advanced-topics/mount-from-other-os.md @@ -13,6 +13,7 @@ title: How to mount a Qubes partition from another OS When a Qubes OS install is unbootable or booting it is otherwise undesirable, this process allows for the recovery of files stored within the system. These functions are manual and do not require any Qubes specific tools. All steps assume the default Qubes install with the following components: + - LUKS encrypted disk - LVM based VM storage @@ -89,7 +90,7 @@ Reverting Changes ----------------------------------------- Any changes which were made to the system in the above steps will need to be reverted before the disk will properly boot. -However, LVM will not allow an VG to be renamed to a name already in use. +However, LVM will not allow a VG to be renamed to a name already in use.
These steps must occur either in an app qube or using recovery media. 1. Unmount any disks that were accessed. diff --git a/user/advanced-topics/rpc-policy.md b/user/advanced-topics/rpc-policy.md index 53cf01e40..bd2aa168e 100644 --- a/user/advanced-topics/rpc-policy.md +++ b/user/advanced-topics/rpc-policy.md @@ -43,7 +43,7 @@ This is how we create a policy that says: "VMs tagged with 'work' are allowed to When an operation is initiated with a specific target, e.g. `qvm-copy-to-vm other_work_vm some_file` the policy mechanism looks for a row matching `source_work_vm other_work_vm PERMISSION`. In this case, assuming both VMs have the `work` tag, the second row would match, and -the operation would be `allow`ed without any prompts. When an operation is initiated without a specific target, e.g. `qvm-copy some_file`, +the operation would be `allow`-ed without any prompts. When an operation is initiated without a specific target, e.g. `qvm-copy some_file`, the policy mechanism looks for a row matching `source_work_vm @default PERMISSION`. In this case, the first row indicates that the user should be prompted for the destination. The list of destination VMs in the prompt is filtered to only include VMs that are valid as per the policy (so in this example, only other work VMs would be listed). If the first row was commented out, the second row would not match diff --git a/user/advanced-topics/salt.md b/user/advanced-topics/salt.md index 2aa2c1242..3e7b1bc37 100644 --- a/user/advanced-topics/salt.md +++ b/user/advanced-topics/salt.md @@ -67,9 +67,9 @@ The lowest level is a single state function, called like this `state.single pkg.installed name=firefox-esr` When the system compiles data from sls formulas, it generates *chunks* - low chunks are at the bottom of the compiler. You can call them with -`state.low` +`state.low`. Next up is the *lowstate* level - this is the list of all low chunks in order. 
- To see them you have `state.show_lowstate`, and use `state.lowstate` to apply them. +order. Use `state.show_lowstate` to see them and `state.lowstate` to apply them. At the top level is *highstate* - this is an interpretation of **all** the data represented in YAML in sls files. You can view it with `state.show_highstate`. @@ -219,7 +219,7 @@ Instead, to get this behavior, you would use a `do` statement. So you should take a look at the [Jinja API documentation](https://jinja.palletsprojects.com/templates/). Documentation about using Jinja to directly call Salt functions and get data about your system can be found in the official -[Salt documentation](https://docs.saltproject.io/en/getstarted/config/jinja.html#get-data-using-salt). +[Salt documentation](https://docs.saltproject.io/salt/user-guide/en/latest/topics/jinja.html). ## Salt Configuration, QubesOS layout @@ -588,6 +588,7 @@ qube which provides network to the given qube The output for each qube is logged in `/var/log/qubes/mgmt-VM_NAME.log`. If the log does not contain useful information: + 1. Run `sudo qubesctl --skip-dom0 --target=VM_NAME state.apply` 2. When your qube is being started (yellow) press Ctrl-z on qubesctl. 3. Open terminal in disp-mgmt-qube_NAME. @@ -595,12 +596,12 @@ If the log does not contain useful information: executed in the management qube. 5. Get the last two lines: - ```shell_session - $ export PATH="/usr/lib/qubes-vm-connector/ssh-wrapper:$PATH" - $ salt-ssh "$target_vm" $salt_command - ``` - +```shell_session +$ export PATH="/usr/lib/qubes-vm-connector/ssh-wrapper:$PATH" +$ salt-ssh "$target_vm" $salt_command +``` Adjust $target_vm (VM_NAME) and $salt_command (state.apply). + 6. Execute them, fix problems, repeat. 
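The `state.single pkg.installed name=firefox-esr` call quoted above has a declarative counterpart in an `.sls` formula, which is what the highstate compiler reduces to low chunks. A minimal sketch (the file path and state ID are illustrative, not taken from the Qubes tree):

```yaml
# /srv/salt/browser.sls -- hypothetical formula; compiling it yields a low
# chunk equivalent to: state.single pkg.installed name=firefox-esr
install-browser:
  pkg.installed:
    - name: firefox-esr
```

After adding such a file, `state.show_highstate` displays its data as part of the highstate.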
## Known Pitfalls diff --git a/user/advanced-topics/secondary-storage.md b/user/advanced-topics/secondary-storage.md index af4b689cf..9554d91e0 100644 --- a/user/advanced-topics/secondary-storage.md +++ b/user/advanced-topics/secondary-storage.md @@ -29,6 +29,7 @@ You can query qvm-pool to list available storage drivers: qvm-pool --help-drivers ``` qvm-pool driver explanation: + ``` refers to using a simple file for image storage and lacks a few features. refers to storing images on a filesystem supporting copy on write. diff --git a/user/advanced-topics/standalones-and-hvms.md b/user/advanced-topics/standalones-and-hvms.md index e76d2f9b7..2ca852b2a 100644 --- a/user/advanced-topics/standalones-and-hvms.md +++ b/user/advanced-topics/standalones-and-hvms.md @@ -83,6 +83,7 @@ qvm-create --class StandaloneVM --label --property virt_mode=hvm -- ``` Notes: + - Technically, `virt_mode=hvm` is not necessary for every standalone. However, it is needed if you want to use a kernel from within the qube. - If you want to make software installed in a template available in your standalone, pass in the name of the template using the `--template` option. @@ -217,7 +218,7 @@ In this example, Network Manager on KDE, the network had the following values: 4. Gateway 10.138.24.248 5. Virtual DNS 10.139.1.1 and 10.139.1.2 -![Image of Network Manager, annotated by numbers for reference below](/attachment/doc/Network Manager.png "Annotated image of KDE Network Manager") +![Image of Network Manager, annotated by numbers for reference below](/attachment/doc/Network_Manager.png "Annotated image of KDE Network Manager") The network was set up by entering Network Manager, selecting the Wi-Fi & Networking tab, clicking on the Wired Ethernet item, and selecting tab IPv4 (1). @@ -247,7 +248,7 @@ as is, then any HVMs based on it will effectively be disposables. All file system changes will be wiped when the HVM is shut down. 
Please see [this -page](https://github.com/Qubes-Community/Contents/blob/master/docs/os/windows/windows-tools.md) +page](/doc/templates/windows/windows-qubes-4-1/#windows-as-a-template) for specific advice on installing and using Windows-based templates. ## Cloning HVMs @@ -411,9 +412,7 @@ device again. This is illustrated in the screenshot below: You can convert any VirtualBox VM to a Qubes HVM using this method. -For example, Microsoft provides [free 90-day evaluation VirtualBox VMs for -browser -testing](https://developer.microsoft.com/en-us/microsoft-edge/tools/vms/). +For example, Microsoft provides [virtual machines containing an evaluation version of Windows](https://developer.microsoft.com/en-us/windows/downloads/virtual-machines/). About 60 GB of disk space is required for conversion. Use an external hard drive if needed. The final `root.img` size is 40 GB. @@ -492,5 +491,5 @@ qemu-img -h | tail -n1 Other documents related to HVMs: -- [Windows VMs](https://github.com/Qubes-Community/Contents/blob/master/docs/os/windows/windows-vm.md) +- [Windows VMs](https://forum.qubes-os.org/search?q=windows%20hvm%20%23guides) - [Linux HVM Tips](https://forum.qubes-os.org/t/19008) diff --git a/user/advanced-topics/volume-backup-revert.md b/user/advanced-topics/volume-backup-revert.md index f06f1ca88..4a39334a3 100644 --- a/user/advanced-topics/volume-backup-revert.md +++ b/user/advanced-topics/volume-backup-revert.md @@ -17,13 +17,13 @@ shutdown. (Note that this is a different, lower level activity than the In Qubes, when you create a new VM, its volumes are stored in one of the system's [Storage Pools](/doc/storage-pools/). On pool creation, a -revisions_to_keep default value is set for the entire pool. (For a pool creation
Thereafter, each volume associated with a VM that is stored in this pool -inherits the pool default revisions_to_keep. +inherits the pool default `revisions_to_keep`. -For the private volume associated with a VM named vmname, you may inspect the -value of revisions_to_keep from the dom0 CLI as follows: +For the private volume associated with a VM named *vmname*, you may inspect the +value of `revisions_to_keep` from the dom0 CLI as follows: ``` qvm-volume info vmname:private @@ -31,11 +31,11 @@ qvm-volume info vmname:private The output of the above command will also display the "Available revisions (for revert)" at the bottom. For a very large volume in a small pool, -revisions_to_keep should probably be set to the maximum value of 1 to minimize +`revisions_to_keep` should probably be set to the maximum value of 1 to minimize the possibility of the pool being accidentally filled up by snapshots. For a smaller volume for which you would like to have the future option of reverting, -revisions_to_keep should probably be set to at least 2. To set -revisions_to_keep for this same VM / volume example: +`revisions_to_keep` should probably be set to at least 2. To set +`revisions_to_keep` for this same VM / volume example: ``` qvm-volume config vmname:private revisions_to_keep 2 diff --git a/user/downloading-installing-upgrading/install-security.md b/user/downloading-installing-upgrading/install-security.md index 6a4920c21..cd94af58e 100644 --- a/user/downloading-installing-upgrading/install-security.md +++ b/user/downloading-installing-upgrading/install-security.md @@ -51,7 +51,7 @@ Cons: (If the drive is mounted to a compromised machine, the ISO could be maliciously altered after it has been written to the drive.) * Untrustworthy firmware. (Firmware can be malicious even if the drive is new. - Plugging a drive with rewritable firmware into a compromised machine can also [compromise the drive](https://srlabs.de/badusb/). 
+ Plugging a drive with rewritable firmware into a compromised machine can also [compromise the drive](https://web.archive.org/web/20160304013434/https://srlabs.de/badusb/). Installing from a compromised drive could compromise even a brand new Qubes installation.) ### Optical discs diff --git a/user/downloading-installing-upgrading/installation-guide.md b/user/downloading-installing-upgrading/installation-guide.md index 2ff54aead..6f623b563 100644 --- a/user/downloading-installing-upgrading/installation-guide.md +++ b/user/downloading-installing-upgrading/installation-guide.md @@ -174,6 +174,8 @@ You can have as many keyboard layout and languages as you want. Post-install, yo Don't forget to select your time and date by clicking on the Time & Date entry. [![Time and date](/attachment/doc/time-and-date.png)](/attachment/doc/time-and-date.png) + + ### Installation destination Under the System section, you must choose the installation destination. Select the storage device on which you would like to install Qubes OS. diff --git a/user/downloading-installing-upgrading/upgrade/2.md b/user/downloading-installing-upgrading/upgrade/2.md index e46f637cb..43ebbd3ff 100644 --- a/user/downloading-installing-upgrading/upgrade/2.md +++ b/user/downloading-installing-upgrading/upgrade/2.md @@ -34,7 +34,7 @@ Note that dom0 in R2 is based on Fedora 20, in contrast to Fedora 18 in previous 1. Open terminal in Dom0. E.g. Start-\>System Settings-\>Konsole. -1. Install all the updates for Dom0: +2. Install all the updates for Dom0: ~~~ sudo qubes-dom0-update @@ -42,7 +42,7 @@ Note that dom0 in R2 is based on Fedora 20, in contrast to Fedora 18 in previous After this step you should have `qubes-release-2-5` in your Dom0. Important: if you happen to have `qubes-release-2-6*` then you should downgrade to `qubes-release-2-5`! 
The `qubes-release-2-6*` packages have been uploaded to the testing repos and were kept there for a few hours, until we realized they bring incorrect repo definitions and so we removed them and also have changed the update procedure a bit (simplifying it). -1. Upgrade dom0 to R2: +3. Upgrade dom0 to R2: Note: be sure that the VM used as an update-downloading-vm (by default it's the firewallvm based on the default template) has been updated to the latest Qubes packages, specifically `qubes-core-vm-2.1.33` or later. This doesn't imply that the VM must already be upgraded to fc20 -- for Dom0 upgrade we could still use an fc18-based VM (updatevm); it is only important to install the latest Qubes packages there. @@ -51,7 +51,7 @@ sudo qubes-dom0-update qubes-dom0-dist-upgrade sudo qubes-dom0-update ~~~ -1. If above step completed successfully you should have `qubes-release-2-9` or later. If not, repeat above step with additional `--clean` option. +4. If the above step completed successfully, you should have `qubes-release-2-9` or later. If not, repeat the above step with the additional `--clean` option. 4a. If you chose not to upgrade your fc18 templates, but instead to download our new fc20-based template you should now be able to do that by simply typing: @@ -59,6 +59,6 @@ sudo qubes-dom0-update sudo qubes-dom0-update qubes-template-fedora-20-x64 ~~~ -1. Reboot the system. +5. Reboot the system. Please note that if you use Anti Evil Maid, then it won't be able to unseal the passphrase this time, because the Xen, kernel, and initramfs binaries have changed. Once the system boots up again, you could reseal your Anti Evil Maid's passphrase to the new configuration. Please consult Anti Evil Maid documentation for explanation on how to do that. 
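The "`qubes-release-2-9` or later" check in step 4 compares RPM-style version strings; if you want to script it, GNU sort's version ordering can do the comparison. A small sketch (package strings taken from the steps above; `-V` is a GNU coreutils extension):

```shell
# Pick the newer of two qubes-release version strings; with version sort
# (-V) the newest string is printed last.
printf '%s\n' "qubes-release-2-5" "qubes-release-2-9" | sort -V | tail -n1
# prints: qubes-release-2-9
```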
diff --git a/user/downloading-installing-upgrading/upgrade/2b1.md b/user/downloading-installing-upgrading/upgrade/2b1.md index 0d564fb5b..8c1c0fbf6 100644 --- a/user/downloading-installing-upgrading/upgrade/2b1.md +++ b/user/downloading-installing-upgrading/upgrade/2b1.md @@ -13,7 +13,7 @@ title: Upgrading to R2B1 **Note: Qubes R2 Beta 1 is no longer supported! Please install or upgrade to a newer Qubes R2.** -**Note: This page is kept for historical reasons only! Do not follow the instructions below'''** +**Note: This page is kept for historical reasons only! Do not follow the instructions below** Existing users of Qubes R1 (but not R1 betas!) can upgrade their systems to the latest R2 beta release by following the procedure below. As usual, it is advisable to backup the system before proceeding with the upgrade @@ -51,7 +51,7 @@ By default, in Qubes R1, there is only one template, however users are free to c - via a legitimate RPM package previously installed (in our case it was the `qubes-upgrade-vm` RPM). Such an RPM must have been signed by one of the keys you decided to trust previously, by default this would be either via the Qubes R1 signing key, or Fedora 17 signing key. - via system compromise or via some illegal RPM package (e.g. Fedora released package pretending to bring new Firefox). In that case, however, your VM is already compromised, and it careful checking of the new R2 key would not change this situation to any better one. The game is lost for this VM anyway (and all VMs based on this template). -1. Shut down the VM. +4. Shut down the VM. 
Upgrade Dom0 ------------ diff --git a/user/downloading-installing-upgrading/upgrade/2b2.md b/user/downloading-installing-upgrading/upgrade/2b2.md index 290204390..727d6d74b 100644 --- a/user/downloading-installing-upgrading/upgrade/2b2.md +++ b/user/downloading-installing-upgrading/upgrade/2b2.md @@ -49,7 +49,7 @@ By default, in Qubes R1, there is only one template, however users are free to c - via a legitimate RPM package previously installed (in our case it was the `qubes-upgrade-vm` RPM). Such an RPM must have been signed by one of the keys you decided to trust previously, by default this would be either via the Qubes R1 signing key, or Fedora 17 signing key. - via system compromise or via some illegal RPM package (e.g. Fedora released package pretending to bring new Firefox). In that case, however, your VM is already compromised, and it careful checking of the new R2 key would not change this situation to any better one. The game is lost for this VM anyway (and all VMs based on this template). -1. Shut down the VM. +4. Shut down the VM. Installing new template ----------------------- diff --git a/user/downloading-installing-upgrading/upgrade/3_0.md b/user/downloading-installing-upgrading/upgrade/3_0.md index 24f477d46..0cb815b33 100644 --- a/user/downloading-installing-upgrading/upgrade/3_0.md +++ b/user/downloading-installing-upgrading/upgrade/3_0.md @@ -101,7 +101,7 @@ Be sure to do steps described in this section after *all* your template and stan 6. Reboot the system. - It may happen that the system hang during the reboot. Hard reset the system in such case, all the filesystems are unmounted at this stage. + - The system may hang during the reboot. If that happens, perform a hard reset; all the filesystems are already unmounted at this stage. Please note that if you use Anti Evil Maid, then it won't be able to unseal the passphrase this time, because the Xen, kernel, and initramfs binaries have changed. 
Once the system boots up again, you could reseal your Anti Evil Maid's passphrase to the new configuration. Please consult Anti Evil Maid documentation for explanation on how to do that. diff --git a/user/downloading-installing-upgrading/upgrade/3_1.md b/user/downloading-installing-upgrading/upgrade/3_1.md index 40aff3651..09b67956d 100644 --- a/user/downloading-installing-upgrading/upgrade/3_1.md +++ b/user/downloading-installing-upgrading/upgrade/3_1.md @@ -98,11 +98,11 @@ complete. 4. Reboot dom0. - The system may hang during the reboot. If that happens, do not panic. All - the filesystems will have already been unmounted at this stage, so you can - simply perform a hard reboot (e.g., hold the physical power button down - until the machine shuts off, wait a moment, then press it again to start it - back up). + - The system may hang during the reboot. If that happens, do not panic. All + the filesystems will have already been unmounted at this stage, so you can + simply perform a hard reboot (e.g., hold the physical power button down + until the machine shuts off, wait a moment, then press it again to start it + back up). Please note that if you use [Anti Evil Maid](/doc/anti-evil-maid), it won't be able to unseal the passphrase the first time the system boots after performing diff --git a/user/downloading-installing-upgrading/upgrade/3_2.md b/user/downloading-installing-upgrading/upgrade/3_2.md index ed1e9f9eb..53c87a898 100644 --- a/user/downloading-installing-upgrading/upgrade/3_2.md +++ b/user/downloading-installing-upgrading/upgrade/3_2.md @@ -29,13 +29,13 @@ by following the procedure below. sudo qubes-dom0-update --releasever=3.2 qubes-release ``` - If you made any manual changes to repository definitions, new definitions + - If you made any manual changes to repository definitions, new definitions will be installed as `/etc/yum.repos.d/qubes-dom0.repo.rpmnew` (you'll see a message about it during package installation). 
In such a case, you need to manually apply the changes to `/etc/yum.repos.d/qubes-dom0.repo` or simply replace it with the .rpmnew file. - If you are using Debian-based VM as UpdateVM (`sys-firewall` by default), + - If you are using a Debian-based VM as UpdateVM (`sys-firewall` by default), you need to download a few more packages manually, but **do not install them** yet: @@ -60,7 +60,7 @@ by following the procedure below. sudo qubes-dom0-update ``` - You may wish to disable the screensaver "Lock screen" feature for this step, as + - You may wish to disable the screensaver "Lock screen" feature for this step, as during the update XScreensaver may encounter an "Authentication failed" issue, requiring a hard reboot. Alternatively, you may simply move the mouse regularly. @@ -70,7 +70,7 @@ by following the procedure below. 6. Update configuration files. - Some of configuration files were saved with `.rpmnew` extension as the + - Some configuration files were saved with the `.rpmnew` extension as the actual files were modified. During upgrade, you'll see information about such cases, like: @@ -78,7 +78,7 @@ by following the procedure below. warning: /etc/salt/minion.d/f_defaults.conf created as /etc/salt/minion.d/f_defaults.conf.rpmnew ``` - This will happen for every configuration you have modified manually and for + - This will happen for every configuration you have modified manually and for a few that have been modified by Qubes scripts. 
If you are not sure what to do about them, below is a list of commands to deal with few common cases (either keep the old one, or replace with the new one): diff --git a/user/hardware/certified-hardware/certified-hardware.md b/user/hardware/certified-hardware/certified-hardware.md index e3465e622..83cc9ba66 100644 --- a/user/hardware/certified-hardware/certified-hardware.md +++ b/user/hardware/certified-hardware/certified-hardware.md @@ -32,7 +32,7 @@ The current Qubes-certified models are listed below in reverse chronological ord | [NovaCustom](https://novacustom.com/) | [V56 Series](https://novacustom.com/product/v56-series/) | [Certification details](/doc/certified-hardware/novacustom-v56-series/) | | [Nitrokey](https://www.nitrokey.com/) | [NitroPC Pro 2](https://shop.nitrokey.com/shop/nitropc-pro-2-523) | [Certification details](/doc/certified-hardware/nitropc-pro-2/) | | [Star Labs](https://starlabs.systems/) | [StarBook](https://starlabs.systems/pages/starbook) | [Certification details](/doc/certified-hardware/starlabs-starbook/) | -| [Nitrokey](https://www.nitrokey.com/) | [NitroPC Pro](https://shop.nitrokey.com/shop/product/nitropc-pro-523) | [Certification details](/doc/certified-hardware/nitropc-pro/) | +| [Nitrokey](https://www.nitrokey.com/) | [NitroPC Pro](https://web.archive.org/web/20231027112856/https://shop.nitrokey.com/shop/product/nitropc-pro-523) | [Certification details](/doc/certified-hardware/nitropc-pro/) | | [NovaCustom](https://novacustom.com/) | [NV41 Series](https://novacustom.com/product/nv41-series/) | [Certification details](/doc/certified-hardware/novacustom-nv41-series/) | | [3mdeb](https://3mdeb.com/) | [Dasharo FidelisGuard Z690](https://web.archive.org/web/20240917145232/https://shop.3mdeb.com/shop/open-source-hardware/dasharo-fidelisguard-z690-qubes-os-certified/) | [Certification details](/doc/certified-hardware/dasharo-fidelisguard-z690/) | | [Nitrokey](https://www.nitrokey.com/) | [NitroPad 
T430](https://shop.nitrokey.com/shop/product/nitropad-t430-119) | [Certification details](/doc/certified-hardware/nitropad-t430/) | diff --git a/user/hardware/certified-hardware/dasharo-fidelisguard-z690.md b/user/hardware/certified-hardware/dasharo-fidelisguard-z690.md index 0ded8346f..045523166 100644 --- a/user/hardware/certified-hardware/dasharo-fidelisguard-z690.md +++ b/user/hardware/certified-hardware/dasharo-fidelisguard-z690.md @@ -4,6 +4,7 @@ layout: doc permalink: /doc/certified-hardware/dasharo-fidelisguard-z690/ title: Dasharo FidelisGuard Z690 image: /attachment/posts/dasharo-fidelisguard-z690_2.jpg +ref: 350 --- The [Dasharo FidelisGuard Z690](https://web.archive.org/web/20240917145232/https://shop.3mdeb.com/shop/open-source-hardware/dasharo-fidelisguard-z690-qubes-os-certified/) is [officially certified](/doc/certified-hardware/) for Qubes OS Release 4. diff --git a/user/hardware/certified-hardware/insurgo-privacybeast-x230.md b/user/hardware/certified-hardware/insurgo-privacybeast-x230.md index 1826f7a3f..c86dff52f 100644 --- a/user/hardware/certified-hardware/insurgo-privacybeast-x230.md +++ b/user/hardware/certified-hardware/insurgo-privacybeast-x230.md @@ -4,6 +4,7 @@ layout: doc permalink: /doc/certified-hardware/insurgo-privacybeast-x230/ title: Insurgo PrivacyBeast X230 image: /attachment/site/insurgo-privacybeast-x230.png +ref: 351 ---