
kernel: enable Intel VMD driver for metal variants #3419

Conversation

markusboehme
Member

Issue number: n/a

Description of changes: Enabling the Intel Volume Management Device driver for metal variants lets Bottlerocket boot on hosts that have a root disk in a separate PCI domain.
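
For illustration, the whole change boils down to flipping a single Kconfig symbol in the kernel configuration used by the metal variants. A minimal sketch follows; the exact config file name and path in the Bottlerocket tree are assumptions here, not taken from this PR's diff:

    # Enable the Intel Volume Management Device controller driver so the
    # kernel can enumerate NVMe devices living behind a VMD endpoint.
    # (Hypothetical fragment of a per-variant kernel config file such as
    # packages/kernel-5.15/config-bottlerocket-metal.)
    CONFIG_VMD=y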

Testing done: Without this change, a Siemens SIMATIC IPC BX-39A with an NVMe drive failed to boot because the kernel could not discover the partitions needed to assemble the dm-verity root. With the change, the drive was found and the host booted.
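
As a hedged illustration of what this looks like on an affected host (the device addresses and domain number below are made up for the example), a VMD-attached NVMe drive shows up in a separate PCI domain once the driver is enabled:

    # Hypothetical check: with CONFIG_VMD=y, the VMD controller sits in the
    # regular domain 0000: and re-exposes its child NVMe device(s) in an
    # extra PCI domain (e.g. 10000:). Example output, not from this PR:
    lspci -D | grep -i 'Volume Management Device'
    #  0000:00:0e.0 RAID bus controller: Intel Corporation Volume Management Device NVMe RAID Controller
    lspci -D | grep -i 'Non-Volatile memory controller'
    #  10000:e1:00.0 Non-Volatile memory controller: ...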

diff-kernel-config:

config-aarch64-aws-dev-diff:              0 removed,   0 added,   0 changed
config-aarch64-aws-k8s-1.23-diff:         0 removed,   0 added,   0 changed
config-aarch64-aws-k8s-1.27-diff:         0 removed,   0 added,   0 changed
config-aarch64-metal-dev-diff:            0 removed,   0 added,   0 changed
config-x86_64-aws-dev-diff:               0 removed,   0 added,   0 changed
config-x86_64-aws-k8s-1.23-diff:          0 removed,   0 added,   0 changed
config-x86_64-aws-k8s-1.27-diff:          0 removed,   0 added,   0 changed
config-x86_64-metal-dev-diff:             0 removed,   0 added,   1 changed
config-x86_64-metal-k8s-1.23-diff:        0 removed,   0 added,   1 changed
config-x86_64-metal-k8s-1.27-diff:        0 removed,   0 added,   1 changed
[...removed variants without changes...]
==> /home/fedora/kernel-diff-vmd/config-x86_64-metal-dev-diff <==
 VMD n -> y

==> /home/fedora/kernel-diff-vmd/config-x86_64-metal-k8s-1.23-diff <==
 VMD n -> y

==> /home/fedora/kernel-diff-vmd/config-x86_64-metal-k8s-1.27-diff <==
 VMD n -> y

Terms of contribution:

By submitting this pull request, I agree that this contribution is dual-licensed under the terms of both the Apache License, version 2.0, and the MIT license.

@stmcginnis
Contributor

stmcginnis left a comment

Looks good to me. In case anyone else is curious:

> Intel® VMD is the new way to configure 11th Generation and greater Intel® Core™ Processor-based platforms for Intel® RST management of RAID and Intel® Optane™ memory volumes.

I wonder if we need to also include their RST driver, but based on the test results it looks like this at least addresses the current use case. So a definite improvement!

@foersleo
Contributor

foersleo commented Sep 6, 2023

> I wonder if we need to also include their RST driver, but based on the test results it looks like this at least addresses the current use case. So a definite improvement!

I am not quite sure whether Intel has released two different technologies under the name RST: Rapid Storage Technology (as in the linked document) and Rapid Start Technology (the INTEL_RST driver available in Linux since 5.15). At a glance they do not seem to be the same, with one being the firmware-based RAID solution also covered by VMD and the other being a hybrid sleep system for firmware-assisted, hibernation-like sleep (?).

Either way, the linked document claims that the Rapid Storage Technology one is not available for Linux. So it is probably something we should have a closer look at to confirm whether we can and should add it to Bottlerocket. Agreed that it is no blocker for this one change here.
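
For anyone who wants to check which of the two similarly named options a given kernel build enables, a quick sketch (the config file location varies by distribution and is an assumption here):

    # Look up both options in the running kernel's build config.
    # /boot/config-$(uname -r) is a common but not universal location.
    grep -E '^CONFIG_(VMD|INTEL_RST)' "/boot/config-$(uname -r)"
    # CONFIG_VMD=y       -> Intel Volume Management Device driver (this PR)
    # CONFIG_INTEL_RST=m -> Intel Rapid Start Technology (firmware-assisted sleep)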

@markusboehme markusboehme merged commit d6e592a into bottlerocket-os:develop Sep 6, 2023
48 checks passed