cilium_host: hw csum failure #9482

Closed
borkmann opened this issue Oct 23, 2019 · 13 comments · Fixed by #16604

@borkmann
Member

borkmann commented Oct 23, 2019

Reported via slack: https://cilium.slack.com/archives/C1MATJ5U5/p1571823875167100

The OS: Ubuntu 18.04.2 LTS, kernel: 5.3.0-1004-azure

I'm running this on a self-hosted cluster in Azure.

This is what the configuration looks like:
helm template cilium \
    --namespace kube-system \
    --set global.nodePort.enabled=true \
    --set global.hostServices.enabled=true \
    --set global.tunnel=vxlan \
    --set global.tag=v1.6.3 \
> cilium.yaml
[  601.889533] cilium_host: hw csum failure
[  601.894231] skb len=36 headroom=128 headlen=36 tailroom=28
               mac=(114,14) net=(128,20) trans=148
               shinfo(txflags=0 nr_frags=0 gso(size=0 type=0 segs=0))
               csum(0x0 ip_summed=2 complete_sw=0 valid=1 level=0)
               hash(0x5e1df61c sw=0 l4=1) proto=0x0800 pkttype=0 iif=7
[  601.926422] dev name=cilium_host feat=0x0x000030884fdd59e9
[  601.935407] skb linear:   00000000: 45 00 00 24 b3 1a 00 00 3f 01 a5 fc 0b 10 04 ed
[  601.941761] skb linear:   00000010: 0b 10 07 b6 00 00 0a 00 78 73 fc 73 15 d0 23 08
[  601.948612] skb linear:   00000020: 41 5a 06 e6
[  601.952386] CPU: 0 PID: 1673 Comm: oneagentnetwork Not tainted 5.3.0-1004-azure #4-Ubuntu
[  601.952387] Hardware name: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS 090007  06/02/2017
[  601.952388] Call Trace:
[  601.952389]  <IRQ>
[  601.952394]  dump_stack+0x4d/0x6a
[  601.952397]  netdev_rx_csum_fault.part.0+0x41/0x45
[  601.952399]  netdev_rx_csum_fault.cold+0xb/0x10
[  601.952402]  __skb_checksum_complete+0xd9/0xf0
[  601.952405]  ? skb_send_sock_locked+0x270/0x270
[  601.952408]  ? reqsk_fastopen_remove+0x150/0x150
[  601.952412]  nf_ip_checksum+0xe4/0x110
[  601.952422]  nf_conntrack_icmpv4_error+0x13d/0x150 [nf_conntrack]
[  601.952425]  ? kfree_skbmem+0x4e/0x60
[  601.952428]  ? kfree_skb+0x3a/0xa0
[  601.952436]  nf_conntrack_in.cold+0x1d/0x83 [nf_conntrack]
[  601.952440]  ? tcf_classify+0x40/0x100
[  601.952446]  ipv4_conntrack_in+0x14/0x20 [nf_conntrack]
[  601.952449]  nf_hook_slow+0x45/0xb0
[  601.952453]  ip_rcv+0x90/0xd0
[  601.952456]  ? ip_rcv_finish_core.isra.0+0x3c0/0x3c0
[  601.952459]  __netif_receive_skb_one_core+0x87/0xa0
[  601.952461]  __netif_receive_skb+0x18/0x60
[  601.952463]  process_backlog+0x95/0x140
[  601.952467]  net_rx_action+0x12e/0x340
[  601.952470]  ? rcu_core+0x104/0x450
[  601.952473]  __do_softirq+0xdb/0x2ca
[  601.952477]  do_softirq_own_stack+0x2a/0x40
[  601.952478]  </IRQ>
[  601.952481]  do_softirq.part.0+0x30/0x40
[  601.952483]  __local_bh_enable_ip+0x50/0x60
[  601.952485]  _raw_spin_unlock_bh+0x1e/0x20
[  601.952489]  packet_getsockopt+0x255/0x410
[  601.952492]  ? apparmor_socket_getsockopt+0x29/0x50
[  601.952496]  __sys_getsockopt+0x8d/0x120
[  601.952499]  __x64_sys_getsockopt+0x25/0x30
[  601.952502]  do_syscall_64+0x68/0x1d0
[  601.952503]  ? prepare_exit_to_usermode+0x51/0xb0
[  601.952506]  entry_SYSCALL_64_after_hwframe+0x44/0xa9
[  601.952508] RIP: 0033:0x7efc4ada09aa
[  601.952510] Code: 48 8b 0d e1 84 2c 00 f7 d8 64 89 01 48 83 c8 ff c3 66 2e 0f 1f 84 00 00 00 00 00 0f 1f 44 00 00 49 89 ca b8 37 00 00 00 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 8b 0d ae 84 2c 00 f7 d8 64 89 01 48
[  601.952512] RSP: 002b:00007efc4856a568 EFLAGS: 00000246 ORIG_RAX: 0000000000000037
[  601.952514] RAX: ffffffffffffffda RBX: 000056428d4a9d20 RCX: 00007efc4ada09aa
[  601.952515] RDX: 0000000000000006 RSI: 0000000000000107 RDI: 0000000000000009
[  601.952516] RBP: 000056428d4a9ab0 R08: 00007efc4856a574 R09: 000056428d4a9d20
[  601.952517] R10: 00007efc4856a578 R11: 0000000000000246 R12: 000056428cb73b3c
[  601.952518] R13: 0000000000000053 R14: 00007efc4aa1be90 R15: 0000000000000053
@borkmann borkmann added kind/bug This is a bug in the Cilium logic. kind/community-report This was reported by a user in the Cilium community, eg via Slack. labels Oct 23, 2019
@borkmann borkmann self-assigned this Oct 23, 2019
@borkmann
Member Author

Could not reproduce on Azure so far. Tested with non-SRIOV and SRIOV networking. Need more info.

@borkmann borkmann added the need-more-info More information is required to further debug or fix the issue. label Oct 28, 2019
@cehoffman

cehoffman commented Nov 27, 2019

Since this is on Azure and related to hardware checksums, this may be of interest. coreos/tectonic-installer#1171 (comment)

Basically, it was discovered that some subset of VMs provisioned in Azure (across multiple datacenters) have defects in their hardware checksum implementation. CoreOS at the time developed a test procedure to recreate VMs enough times to finally hit a faulty instance and then iterate on a solution. They had a repository or gist for this, but I didn't find it just from the issue or the linked MR. The final solution was to disable hardware checksumming for all VMs in Azure. I run with that setting to this day.

Maybe also a useful insight about VMs in Azure and checksum offloading: flannel-io/flannel#790 (comment)
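
For reference, the workaround described in both of those threads boils down to turning checksum offloading off with ethtool. A minimal sketch, assuming the affected VM NIC is eth0 (the interface name is an assumption; adjust it for your environment, and note the setting does not persist across reboots):

# inspect the current offload settings
ethtool -k eth0 | grep -i checksum
# disable TX checksum offloading as a workaround, not a fix
ethtool -K eth0 tx off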

@borkmann
Member Author

borkmann commented Nov 28, 2019

@cehoffman Interesting, thanks a lot for the pointers! The dissection of the above linear data is:

11:43:58.538229 IP (tos 0x0, ttl 63, id 45850, offset 0, flags [none], proto ICMP (1), length 36)
    10.16.4.237 > 10.16.7.182: ICMP echo reply, id 30835, seq 64627, length 16
	0x0000:  4500 0024 b31a 0000 3f01 a5fc 0b10 04ed
	0x0010:  0b10 07b6 0000 0a00 7873 fc73 15d0 2308
	0x0020:  415a 06e6

So this must have come from a simple cilium health check.
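
As an aside, this kind of dissection can be reproduced from the dmesg hex lines with text2pcap (from Wireshark) and tcpdump. A rough sketch, assuming the "skb linear" bytes are saved to skb.txt with only the offset and hex bytes kept (text2pcap expects od -Ax -tx1 style input, so the colon after the offset may need stripping):

# -l 101 marks the capture as raw IP, since the dump has no Ethernet header
text2pcap -l 101 skb.txt skb.pcap
tcpdump -vn -r skb.pcap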

@borkmann
Member Author

borkmann commented Sep 4, 2020

Closing for now due to inactivity and lack of reproduction / more debugging data. If this is still an issue, we'll reopen and reinvestigate.

@borkmann borkmann closed this as completed Sep 4, 2020
@knfoo
Contributor

knfoo commented Mar 23, 2021

Hi,
I have run into something similar.
I am using Ubuntu 20.04.1 LTS (GNU/Linux 5.4.0-66-generic x86_64) on bare-metal hosts.

[Tue Mar 23 11:58:44 2021] cilium_host: hw csum failure
[Tue Mar 23 11:58:44 2021] cilium_host: hw csum failure
[Tue Mar 23 11:58:44 2021] skb len=36 headroom=128 headlen=36 tailroom=28
                           mac=(114,14) net=(128,20) trans=148                                                                                                                                                     
                           shinfo(txflags=0 nr_frags=0 gso(size=0 type=0 segs=0))                                                                                                                                  
                           csum(0x0 ip_summed=2 complete_sw=0 valid=1 level=0)                                                                                                                                     
                           hash(0xa037709f sw=0 l4=1) proto=0x0800 pkttype=0 iif=5                                                                                                                                 
[Tue Mar 23 11:58:44 2021] skb len=36 headroom=128 headlen=36 tailroom=28
                           mac=(114,14) net=(128,20) trans=148                                                                                                                                                     
                           shinfo(txflags=0 nr_frags=0 gso(size=0 type=0 segs=0))                                                                                                                                  
                           csum(0x0 ip_summed=2 complete_sw=0 valid=1 level=0)                                                                                                                                     
                           hash(0xf069c82d sw=0 l4=1) proto=0x0800 pkttype=0 iif=5                                                                                                                                 
[Tue Mar 23 11:58:44 2021] dev name=cilium_host feat=0x0x000030884fdd59e9
[Tue Mar 23 11:58:44 2021] skb linear:   00000000: 45 00 00 24 23 8c 00 00 3f 01 41 6b 0a 00 00 dd
[Tue Mar 23 11:58:45 2021] dev name=cilium_host feat=0x0x000030884fdd59e9
[Tue Mar 23 11:58:45 2021] skb linear:   00000010: 0a 00 02 06 00 00 cc c9 3a c3 22 c3 16 6e f6 d2
[Tue Mar 23 11:58:45 2021] skb linear:   00000020: 32 a4 95 ca
[Tue Mar 23 11:58:45 2021] skb linear:   00000000: 45 00 00 24 bc 15 00 00 3f 01 a8 95 0a 00 01 29
[Tue Mar 23 11:58:45 2021] CPU: 70 PID: 0 Comm: swapper/70 Not tainted 5.4.0-65-generic #73-Ubuntu
[Tue Mar 23 11:58:45 2021] Hardware name: Wiwynn  SV7220G3/Tioga-Pass Channel, BIOS TPC_P25C 09/28/2020
[Tue Mar 23 11:58:45 2021] skb linear:   00000010: 0a 00 02 06 00 00 89 4a 3a c3 22 c3 16 6e f6 d2
[Tue Mar 23 11:58:45 2021] Call Trace:
[Tue Mar 23 11:58:45 2021]  <IRQ>
[Tue Mar 23 11:58:45 2021]  dump_stack+0x6d/0x9a
[Tue Mar 23 11:58:45 2021]  netdev_rx_csum_fault.part.0+0x41/0x45
[Tue Mar 23 11:58:45 2021]  netdev_rx_csum_fault.cold+0xb/0x10
[Tue Mar 23 11:58:45 2021]  __skb_checksum_complete+0xd9/0xf0
[Tue Mar 23 11:58:45 2021]  ? skb_send_sock_locked+0x280/0x280
[Tue Mar 23 11:58:45 2021] skb linear:   00000020: 32 a4 d9 49
[Tue Mar 23 11:58:45 2021]  ? reqsk_fastopen_remove+0x150/0x150
[Tue Mar 23 11:58:45 2021]  nf_ip_checksum+0xe4/0x110
[Tue Mar 23 11:58:45 2021]  nf_conntrack_icmpv4_error+0x13d/0x150 [nf_conntrack]
[Tue Mar 23 11:58:45 2021]  nf_conntrack_in.cold+0x1d/0x83 [nf_conntrack]
[Tue Mar 23 11:58:45 2021]  ipv4_conntrack_in+0x14/0x20 [nf_conntrack]
[Tue Mar 23 11:58:45 2021]  nf_hook_slow+0x45/0xb0
[Tue Mar 23 11:58:45 2021]  ip_rcv+0x90/0xd0
[Tue Mar 23 11:58:45 2021]  ? ip_rcv_finish_core.isra.0+0x3c0/0x3c0
[Tue Mar 23 11:58:45 2021]  __netif_receive_skb_one_core+0x88/0xa0
[Tue Mar 23 11:58:45 2021]  __netif_receive_skb+0x18/0x60
[Tue Mar 23 11:58:45 2021]  process_backlog+0xa9/0x160
[Tue Mar 23 11:58:45 2021]  net_rx_action+0x13a/0x370
[Tue Mar 23 11:58:45 2021]  __do_softirq+0xe1/0x2d6
[Tue Mar 23 11:58:45 2021]  irq_exit+0xae/0xb0
[Tue Mar 23 11:58:45 2021]  do_IRQ+0x5a/0xf0
[Tue Mar 23 11:58:45 2021]  common_interrupt+0xf/0xf
[Tue Mar 23 11:58:45 2021]  </IRQ>
[Tue Mar 23 11:58:45 2021] RIP: 0010:cpuidle_enter_state+0xc5/0x450
[Tue Mar 23 11:58:45 2021] Code: ff e8 df b6 80 ff 80 7d c7 00 74 17 9c 58 0f 1f 44 00 00 f6 c4 02 0f 85 65 03 00 00 31 ff e8 e2 20 87 ff fb 66 0f 1f 44 00 00 <45> 85 ed 0f 88 8f 02 00 00 49 63 cd 4c 8b 7d d0 4c 2b 7d c8 48 8d
[Tue Mar 23 11:58:45 2021] RSP: 0018:ffff9fb4cc97fe38 EFLAGS: 00000246 ORIG_RAX: ffffffffffffffdc
[Tue Mar 23 11:58:45 2021] RAX: ffff8e6ce0d2adc0 RBX: ffffffff92f59f20 RCX: 000000000000001f
[Tue Mar 23 11:58:45 2021] RDX: 0000000000000000 RSI: 000000003d187b94 RDI: 0000000000000000
[Tue Mar 23 11:58:45 2021] RBP: ffff9fb4cc97fe78 R08: 00004ea4f4e2166a R09: 00000000000384f4
[Tue Mar 23 11:58:45 2021] R10: ffff8e6ce0d29ac0 R11: ffff8e6ce0d29aa0 R12: ffffbf9cc0f06100
[Tue Mar 23 11:58:45 2021] R13: 0000000000000003 R14: 0000000000000003 R15: ffffbf9cc0f06100
[Tue Mar 23 11:58:45 2021]  ? cpuidle_enter_state+0xa1/0x450
[Tue Mar 23 11:58:45 2021]  cpuidle_enter+0x2e/0x40
[Tue Mar 23 11:58:45 2021]  call_cpuidle+0x23/0x40
[Tue Mar 23 11:58:45 2021]  do_idle+0x1dd/0x270
[Tue Mar 23 11:58:45 2021]  cpu_startup_entry+0x20/0x30
[Tue Mar 23 11:58:45 2021]  start_secondary+0x167/0x1c0
[Tue Mar 23 11:58:45 2021]  secondary_startup_64+0xa4/0xb0
[Tue Mar 23 11:58:45 2021] CPU: 11 PID: 0 Comm: swapper/11 Not tainted 5.4.0-65-generic #73-Ubuntu
[Tue Mar 23 11:58:45 2021] Hardware name: Wiwynn  SV7220G3/Tioga-Pass Channel, BIOS TPC_P25C 09/28/2020
[Tue Mar 23 11:58:45 2021] Call Trace:
[Tue Mar 23 11:58:45 2021]  <IRQ>
[Tue Mar 23 11:58:45 2021]  dump_stack+0x6d/0x9a
[Tue Mar 23 11:58:45 2021]  netdev_rx_csum_fault.part.0+0x41/0x45
[Tue Mar 23 11:58:45 2021]  netdev_rx_csum_fault.cold+0xb/0x10
[Tue Mar 23 11:58:45 2021]  __skb_checksum_complete+0xd9/0xf0
[Tue Mar 23 11:58:45 2021]  ? skb_send_sock_locked+0x280/0x280
[Tue Mar 23 11:58:45 2021]  ? reqsk_fastopen_remove+0x150/0x150
[Tue Mar 23 11:58:45 2021]  nf_ip_checksum+0xe4/0x110
[Tue Mar 23 11:58:45 2021]  nf_conntrack_icmpv4_error+0x13d/0x150 [nf_conntrack]
[Tue Mar 23 11:58:45 2021]  nf_conntrack_in.cold+0x1d/0x83 [nf_conntrack]
[Tue Mar 23 11:58:45 2021]  ipv4_conntrack_in+0x14/0x20 [nf_conntrack]
[Tue Mar 23 11:58:45 2021]  nf_hook_slow+0x45/0xb0
[Tue Mar 23 11:58:45 2021]  ip_rcv+0x90/0xd0
[Tue Mar 23 11:58:45 2021]  ? ip_rcv_finish_core.isra.0+0x3c0/0x3c0
[Tue Mar 23 11:58:45 2021]  __netif_receive_skb_one_core+0x88/0xa0
[Tue Mar 23 11:58:45 2021]  __netif_receive_skb+0x18/0x60
[Tue Mar 23 11:58:45 2021]  process_backlog+0xa9/0x160
[Tue Mar 23 11:58:45 2021]  net_rx_action+0x13a/0x370
[Tue Mar 23 11:58:45 2021]  __do_softirq+0xe1/0x2d6
[Tue Mar 23 11:58:45 2021]  irq_exit+0xae/0xb0
[Tue Mar 23 11:58:45 2021]  do_IRQ+0x5a/0xf0
[Tue Mar 23 11:58:45 2021]  common_interrupt+0xf/0xf
[Tue Mar 23 11:58:45 2021]  </IRQ>
[Tue Mar 23 11:58:45 2021] RIP: 0010:cpuidle_enter_state+0xc5/0x450
[Tue Mar 23 11:58:45 2021] Code: ff e8 df b6 80 ff 80 7d c7 00 74 17 9c 58 0f 1f 44 00 00 f6 c4 02 0f 85 65 03 00 00 31 ff e8 e2 20 87 ff fb 66 0f 1f 44 00 00 <45> 85 ed 0f 88 8f 02 00 00 49 63 cd 4c 8b 7d d0 4c 2b 7d c8 48 8d
[Tue Mar 23 11:58:45 2021] RSP: 0018:ffff9fb4cc7a7e38 EFLAGS: 00000246 ORIG_RAX: ffffffffffffffdb
[Tue Mar 23 11:58:45 2021] RAX: ffff8e6ce04eadc0 RBX: ffffffff92f59f20 RCX: 000000000000001f
[Tue Mar 23 11:58:45 2021] RDX: 0000000000000000 RSI: 000000003d187b94 RDI: 0000000000000000
[Tue Mar 23 11:58:45 2021] RBP: ffff9fb4cc7a7e78 R08: 00004ea4f4eb4076 R09: 000000007fffffff
[Tue Mar 23 11:58:45 2021] R10: ffff8e6ce04e9ac0 R11: ffff8e6ce04e9aa0 R12: ffffbf9cc06c6100
[Tue Mar 23 11:58:45 2021] R13: 0000000000000002 R14: 0000000000000002 R15: ffffbf9cc06c6100
[Tue Mar 23 11:58:45 2021]  ? cpuidle_enter_state+0xa1/0x450
[Tue Mar 23 11:58:45 2021]  cpuidle_enter+0x2e/0x40
[Tue Mar 23 11:58:45 2021]  call_cpuidle+0x23/0x40
[Tue Mar 23 11:58:45 2021]  do_idle+0x1dd/0x270
[Tue Mar 23 11:58:45 2021]  cpu_startup_entry+0x20/0x30
[Tue Mar 23 11:58:45 2021]  start_secondary+0x167/0x1c0
[Tue Mar 23 11:58:45 2021]  secondary_startup_64+0xa4/0xb0

Disabling TX checksumming on the cilium_host device, as suggested in the flannel issue, seems to fix the problem.
Will disabling TX checksumming on that device cause any problems?
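
For reference, the flannel-style workaround amounts to roughly the following sketch, run on each node (cilium_host is created by the Cilium agent, so the setting would need to be reapplied if the device is recreated, and it does not persist across reboots):

ethtool -K cilium_host tx off
# verify
ethtool -k cilium_host | grep -i checksum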

@pchaigno pchaigno reopened this Mar 23, 2021
@pchaigno pchaigno added the needs/triage This issue requires triaging to establish severity and next steps. label Mar 23, 2021
@knfoo
Contributor

knfoo commented Mar 24, 2021

If you need me to provide more information, let me know.

@borkmann
Member Author

If you need me to provide more information, let me know.

@knfoo Do you have a sysdump (or, alternatively, could you post the full config you are using + kernel + Cilium version), and describe in more detail when the issue occurs, e.g. what path the packet is traversing, etc.? The more info we have on this, the better we can track down the issue. Thanks for your help!
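
For reference, a sysdump can be collected with the cilium CLI if it is available, or the key bits can be pulled from one of the agent pods by hand (the pod name below is a placeholder):

cilium sysdump

kubectl -n kube-system exec <cilium-pod> -- cilium version
kubectl -n kube-system exec <cilium-pod> -- cilium status --verbose
kubectl -n kube-system exec <cilium-pod> -- cilium config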

@knfoo
Contributor

knfoo commented Mar 25, 2021

@borkmann Sure, I will provide as much information as I can.

First, I am new to Cilium; I have been a Calico user for a long time.

I am using metal3.io + clusterctl to deploy my bare-metal k8s clusters. The integration uses kubeadm in the backend.

I am currently running k8s 1.20.2, Ubuntu 20.04, linux-image-5.4.0-77-generic 5.4.0-77.78.
I am running containerd 1.3.3 (with runc) as the container runtime.

I was following the https://docs.cilium.io/en/v1.9/gettingstarted/k8s-install-default/ guide to install Cilium, as I wanted to get something running quickly that I could play around with.
So I downloaded https://raw.githubusercontent.com/cilium/cilium/v1.9/install/kubernetes/quick-install.yaml, read through it to get a better understanding, and applied it without changing anything.

I did not do anything in particular to trigger the error; I am assuming that it is the health check that triggers it. The k8s cluster was idle and not running any workload when it happened.
The frequency seems to suggest the same:

dmesg -T| grep "hw csum failure"
[Thu Mar 25 10:06:09 2021] cilium_host: hw csum failure
[Thu Mar 25 10:06:09 2021] cilium_host: hw csum failure
[Thu Mar 25 10:08:09 2021] cilium_host: hw csum failure
[Thu Mar 25 10:08:09 2021] cilium_host: hw csum failure
[Thu Mar 25 10:11:09 2021] cilium_host: hw csum failure
[Thu Mar 25 10:11:09 2021] cilium_host: hw csum failure
[Thu Mar 25 10:15:09 2021] cilium_host: hw csum failure
[Thu Mar 25 10:15:09 2021] cilium_host: hw csum failure
[Thu Mar 25 10:19:09 2021] cilium_host: hw csum failure
[Thu Mar 25 10:19:09 2021] cilium_host: hw csum failure
[Thu Mar 25 10:21:09 2021] cilium_host: hw csum failure
[Thu Mar 25 10:21:09 2021] cilium_host: hw csum failure
[Thu Mar 25 10:22:09 2021] cilium_host: hw csum failure
[Thu Mar 25 10:22:09 2021] cilium_host: hw csum failure
[Thu Mar 25 10:24:09 2021] cilium_host: hw csum failure
[Thu Mar 25 10:24:09 2021] cilium_host: hw csum failure
[Thu Mar 25 10:26:09 2021] cilium_host: hw csum failure
[Thu Mar 25 10:26:09 2021] cilium_host: hw csum failure
[Thu Mar 25 10:32:09 2021] cilium_host: hw csum failure
[Thu Mar 25 10:32:09 2021] cilium_host: hw csum failure
[Thu Mar 25 10:34:09 2021] cilium_host: hw csum failure
[Thu Mar 25 10:34:09 2021] cilium_host: hw csum failure
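
If it helps to correlate, the node-to-node health probe status can be dumped from one of the agent pods (pod name is a placeholder), which lists the probe targets that generate this kind of periodic ICMP traffic:

kubectl -n kube-system exec <cilium-pod> -- cilium-health status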

@jcrowthe

My team began seeing this issue across our Azure fleet 30 days ago as well. Is there an update on this ticket?

@kkourt kkourt self-assigned this May 19, 2021
@errordeveloper
Contributor

@kkourt are you still planning to work on this?

@jhead-slg

I'll comment that this can also be seen on Equinix Metal c3.small.x86 machines that are based on the Supermicro X11SCM-F. It is NOT present on the c3.small.x86 machines based on ASRockRack E3C246D4I-NL.

Supermicro:

# lspci | grep Ethernet
01:00.0 Ethernet controller: Mellanox Technologies MT27710 Family [ConnectX-4 Lx]
01:00.1 Ethernet controller: Mellanox Technologies MT27710 Family [ConnectX-4 Lx]

# ethtool -i enp1s0f0
driver: mlx5_core
version: 5.0-0
firmware-version: 14.27.1016 (MT_2420110034)
expansion-rom-version:
bus-info: 0000:01:00.0
supports-statistics: yes
supports-test: yes
supports-eeprom-access: no
supports-register-dump: no
supports-priv-flags: yes

ASRockRack:

# lspci | grep Ethernet
01:00.0 Ethernet controller: Intel Corporation Ethernet Controller X710 for 10GbE SFP+ (rev 02)
01:00.1 Ethernet controller: Intel Corporation Ethernet Controller X710 for 10GbE SFP+ (rev 02)

# ethtool -i eno1
driver: i40e
version: 2.8.20-k
firmware-version: 6.01 0x80003fa1 1.1853.0
expansion-rom-version:
bus-info: 0000:01:00.0
supports-statistics: yes
supports-test: yes
supports-eeprom-access: yes
supports-register-dump: yes
supports-priv-flags: yes
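
For comparison, it may be worth dumping the checksum offload features of the two NICs as well (enp1s0f0 on the Supermicro/mlx5 box, eno1 on the ASRockRack/i40e box), to see whether the feature sets differ between the platform that hits the failure and the one that does not:

# ethtool -k enp1s0f0 | grep -i checksum
# ethtool -k eno1 | grep -i checksum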

@kkourt
Contributor

kkourt commented Jun 7, 2021

@kkourt are you still planning to work on this?

@errordeveloper it's on my list, but I have not managed to reproduce it yet.

@borkmann
Member Author

This issue should be fixed by #16604.
