On the host, I checked the NIC information with ibstatus, and link_layer is Ethernet on all ports.
After mapping the devices into a pod with k8s-rdma-shared-dev-plugin, ibstatus inside the pod shows the output in the screenshot below, which suggests the Ethernet card is not recognized correctly.
Did I configure something incorrectly, or does this plugin not support Ethernet (RoCE) cards?
I'm not sure that running ibstatus in the container is an indication of a correct configuration.
The RDMA shared device plugin mounts the RDMA character devices into the container (under /dev/infiniband).
To use RoCE, you will also need a network device associated with the same NIC made available in the container.
You can use Multus with the macvlan CNI (using the device above as "master") to provide the container with an additional network interface.
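As a rough sketch, the plugin is typically driven by a ConfigMap that selects which host NICs to expose as a shareable resource; the resource name, interface name, and counts below are illustrative assumptions, not taken from this issue:

```yaml
# Hypothetical k8s-rdma-shared-dev-plugin ConfigMap (names are assumptions).
apiVersion: v1
kind: ConfigMap
metadata:
  name: rdma-devices
  namespace: kube-system
data:
  config.json: |
    {
      "configList": [
        {
          "resourceName": "rdma_shared_device_a",
          "rdmaHcaMax": 100,
          "selectors": {
            "ifNames": ["eth1"]
          }
        }
      ]
    }
```

A pod would then request the resource (e.g. `rdma/rdma_shared_device_a`) in its resource limits, which is what triggers the /dev/infiniband char devices to be mounted.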
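A minimal sketch of such a Multus attachment, assuming the RoCE NIC's host interface is named `eth1` and using an illustrative host-local IPAM range (both are assumptions for this example):

```yaml
# Hypothetical NetworkAttachmentDefinition; "eth1" and the subnet are assumed.
apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: roce-macvlan
spec:
  config: |
    {
      "cniVersion": "0.3.1",
      "type": "macvlan",
      "master": "eth1",
      "mode": "bridge",
      "ipam": {
        "type": "host-local",
        "subnet": "192.168.1.0/24"
      }
    }
```

The pod would reference it via the annotation `k8s.v1.cni.cncf.io/networks: roce-macvlan`, giving the container a second interface backed by the same physical NIC as the RDMA device.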
The RoCE-mode IB network card works normally inside the container. The default GID value should come from /sys/class/infiniband/mlx5_0/ports/1/gids/0, but the effective gid_index in the container does not start from 0.