
Does k8s-rdma-shared-dev-plugin support network cards whose link_layer is Ethernet? #101

Open
Eisenhower opened this issue Mar 13, 2024 · 2 comments

Comments

@Eisenhower

On the host, I checked the card information with ibstatus, and the link_layer of every port is Ethernet:
[screenshot: ibstatus output on the host showing link_layer: Ethernet]

After using k8s-rdma-shared-dev-plugin to map the device into a pod, ibstatus inside the pod shows the following, which suggests the Ethernet card is not recognized correctly:
[screenshot: ibstatus output inside the pod]

Is this something I configured incorrectly, or does this plugin not support Ethernet-type cards?

@adrianchiris
Collaborator

adrianchiris commented Mar 13, 2024

I'm not sure that running ibstatus in a container is an indication of a correct configuration.

The rdma shared device plugin will mount the RDMA char devices into the container (under /dev/infiniband).
To use RoCE, you also need a network device associated with the same NIC made available in the container.
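
A minimal sketch of a pod requesting the shared RDMA resource so the plugin mounts /dev/infiniband into the container. The resource name depends on your plugin ConfigMap; `rdma/hca_shared_devices_a` and the image below are only placeholders:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: rdma-test-pod
spec:
  containers:
  - name: test
    image: mellanox/rping-test   # example image; use your own RDMA-capable image
    securityContext:
      capabilities:
        add: [ "IPC_LOCK" ]      # typically needed for RDMA memory registration
    resources:
      limits:
        rdma/hca_shared_devices_a: 1   # resource name comes from the plugin ConfigMap
```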

You can use multus + the macvlan CNI (using the uplink of that NIC as "master") to provide the container with an additional network interface, as in the sketch below.
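
A minimal sketch of such an attachment, assuming multus and the macvlan CNI plugin are installed and that the host uplink tied to the RDMA device is named `ens1f0` (the interface name, subnet, and attachment name are placeholders):

```yaml
apiVersion: "k8s.cni.cncf.io/v1"
kind: NetworkAttachmentDefinition
metadata:
  name: roce-macvlan
spec:
  config: '{
    "cniVersion": "0.3.1",
    "type": "macvlan",
    "master": "ens1f0",
    "mode": "bridge",
    "ipam": {
      "type": "host-local",
      "subnet": "192.168.100.0/24"
    }
  }'
```

The pod then references it with the annotation `k8s.v1.cni.cncf.io/networks: roce-macvlan`, so it gets a secondary interface backed by the same NIC as the mounted RDMA device.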

@Kyrie336

The RoCE-mode IB network card can work normally inside the container. The default GID value should come from
/sys/class/infiniband/mlx5_0/ports/1/gids/0, but the effective gid_index in the container does not start from 0.
