Test case failure: kvm_test failure #106

Closed
newmanwang opened this issue Sep 22, 2018 · 4 comments
Labels: auto-closed, platform: kvm, stale-issue, type: bug

Comments

newmanwang (Contributor) commented on Sep 22, 2018

Sorry for the disturbance. I'm trying to run the gVisor test suite with bazel test, but it fails on the kvm_test case. The command I run is sketched below; the failing log follows:
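(A rough sketch of how I invoke the test, assuming a gVisor checkout with Bazel installed; the test.log path is just where Bazel normally writes it:)

# Check that the host exposes /dev/kvm before running the KVM platform tests.
ls -l /dev/kvm

# Run only the KVM platform test target.
bazel test //pkg/sentry/platform/kvm:kvm_test

# On failure, Bazel keeps the full output here.
cat bazel-testlogs/pkg/sentry/platform/kvm/kvm_test/test.log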

exec ${PAGER:-/usr/bin/less} "$0" || exit 1
Executing tests from //pkg/sentry/platform/kvm:kvm_test
I0922 07:29:08.417950      15 x:0] excluded: virtual [7ffc1eb3b000,7ffc1eb3e000)
I0922 07:29:08.418009      15 x:0] excluded: virtual [7ffc1eb3e000,7ffc1eb40000)
I0922 07:29:08.418069      15 x:0] region: virtual [15c00000,400e800000)
I0922 07:29:08.418077      15 x:0] region: virtual [400e800000,c000000000)
I0922 07:29:08.418083      15 x:0] region: virtual [c1d5621000,c3d5281000)
I0922 07:29:08.418090      15 x:0] region: virtual [c3d5281000,cbd4401000)
I0922 07:29:08.418095      15 x:0] region: virtual [cbd4401000,10bcd001000)
I0922 07:29:08.418100      15 x:0] region: virtual [10bcd001000,30b93001000)
I0922 07:29:08.418106      15 x:0] region: virtual [30b93001000,70b1f001000)
I0922 07:29:08.418111      15 x:0] region: virtual [70b1f001000,f0a37001000)
I0922 07:29:08.418116      15 x:0] region: virtual [f0a37001000,1f0867001000)
I0922 07:29:08.418121      15 x:0] region: virtual [1f0867001000,3f04c7000000)
I0922 07:29:08.418125      15 x:0] region: virtual [3f04c7000000,7efd86ffe000)
I0922 07:29:08.418130      15 x:0] region: virtual [7efd90d30000,7f7d82530000)
I0922 07:29:08.418135      15 x:0] region: virtual [7f7d82530000,7fbd7b130000)
I0922 07:29:08.418140      15 x:0] region: virtual [7fbd7b130000,7fdd77730000)
I0922 07:29:08.418145      15 x:0] region: virtual [7fdd77730000,7fed75a30000)
I0922 07:29:08.418150      15 x:0] region: virtual [7fed75a30000,7ff574bb0000)
I0922 07:29:08.418155      15 x:0] region: virtual [7ff574bb0000,7ff974470000)
I0922 07:29:08.418168      15 x:0] region: virtual [7ff974470000,7ffb740d0000)
I0922 07:29:08.418173      15 x:0] region: virtual [7ffc1eb3b000,7ffc1eb3e000)
I0922 07:29:08.418178      15 x:0] region: virtual [7ffc1eb3e000,7ffc1eb40000)
I0922 07:29:08.418184      15 x:0] region: virtual [7ffc1eb40000,7ffe1e7a0000)
I0922 07:29:08.418191      15 x:0] physicalRegion: virtual [1000,15c00000) => physical [100001000,115c00000)
I0922 07:29:08.418197      15 x:0] physicalRegion: virtual [c000000000,c1d5621000) => physical [180000000,355621000)
I0922 07:29:08.418203      15 x:0] physicalRegion: virtual [7efd86ffe000,7efd90d30000) => physical [386ffe000,390d30000)
I0922 07:29:08.418208      15 x:0] physicalRegion: virtual [7ffb740d0000,7ffc1eb3b000) => physical [3f40d0000,49eb3b000)
I0922 07:29:08.418214      15 x:0] physicalRegion: virtual [7ffe1e7a0000,7ffffffff000) => physical [51e7a0000,6fffff000)
fatal error: shutdown
goroutine 19 [running, locked to thread]:
runtime.throw(0x669424, 0x8)
        GOROOT/src/runtime/panic.go:608 +0x72 fp=0xc000009a20 sp=0xc0000099f0 pc=0x42c252
gvisor.googlesource.com/gvisor/pkg/sentry/platform/kvm.bluepillHandler(0xc000009ac0)
        pkg/sentry/platform/kvm/bluepill_unsafe.go:185 +0x122 fp=0xc000009ab0 sp=0xc000009a20 pc=0x5c4372
runtime: unexpected return pc for gvisor.googlesource.com/gvisor/pkg/sentry/platform/kvm.sighandler called from 0x7efd9021ca70
stack: frame={sp:0xc000009ab0, fp:0xc000009ac0} stack=[0xc00004c000,0xc00004d000)

gvisor.googlesource.com/gvisor/pkg/sentry/platform/kvm.sighandler(0x7, 0x0, 0xc000002000, 0x0, 0x8000, 0x20, 0x3, 0xc00002e6d0, 0x4, 0x12, ...)
        pkg/sentry/platform/kvm/bluepill_amd64.s:79 +0x24 fp=0xc000009ac0 sp=0xc000009ab0 pc=0x5d8064
created by testing.(*T).Run
        GOROOT/src/testing/testing.go:878 +0x353

goroutine 1 [chan receive]:
testing.(*T).Run(0xc000108100, 0x66b598, 0x11, 0x6769b0, 0x4823e6)
        GOROOT/src/testing/testing.go:879 +0x37a
testing.runTests.func1(0xc000108000)
        GOROOT/src/testing/testing.go:1119 +0x78
testing.tRunner(0xc000108000, 0xc0000b9d78)
        GOROOT/src/testing/testing.go:827 +0xbf
testing.runTests(0xc0000a62c0, 0xa385c0, 0xe, 0xe, 0x40c31f)
        GOROOT/src/testing/testing.go:1117 +0x2aa
testing.(*M).Run(0xc0000f2180, 0x0)
        GOROOT/src/testing/testing.go:1034 +0x165
main.main()
        bazel-out/k8-fastbuild/bin/pkg/sentry/platform/kvm/linux_amd64_stripped/kvm_test%/testmain.go:106 +0x202

goroutine 20 [runnable]:
gvisor.googlesource.com/gvisor/pkg/sentry/platform/filemem.(*FileMem).runReclaim(0xc0000cb200)
        pkg/sentry/platform/filemem/filemem.go:403
created by gvisor.googlesource.com/gvisor/pkg/sentry/platform/filemem.newFromFile
        pkg/sentry/platform/filemem/filemem.go:198 +0x16f

Information that might help:
docker info:
Containers: 127
Running: 0
Paused: 0
Stopped: 127
Images: 167
Server Version: 17.12.0-ce
Storage Driver: overlay2
Backing Filesystem: extfs
Supports d_type: true
Native Overlay Diff: true
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
Volume: local
Network: bridge host macvlan null overlay
Log: awslogs fluentd gcplogs gelf journald json-file logentries splunk syslog
Swarm: inactive
Runtimes: runc runsc
Default Runtime: runc
Init Binary: docker-init
containerd version: 89623f28b87a6004d4b785663257362d1658a729
runc version: b2567b37d7b75eb4cf325b77297b140ea686ce8f
init version: 949e6fa
Security Options:
seccomp
Profile: default
Kernel Version: 4.13.15-300.fc27.x86_64
Operating System: Linux
OSType: linux
Architecture: x86_64
CPUs: 4
Total Memory: 7.525GiB
Name: node1
ID: BAJ4:K7CG:MTKH:JP2K:VOH5:4EX2:CBUS:U3VC:RKZN:73G6:XTS2:JVNK
Docker Root Dir: /var/lib/docker
Debug Mode (client): false
Debug Mode (server): false
HTTP Proxy: http://127.0.0.1:7777/
No Proxy: docker-hub-local.ricequant.com:5000
Registry: https://index.docker.io/v1/
Labels:
Experimental: false
Insecure Registries:
docker-hub-local.ricequant.com:5000
10.233.0.0/18
127.0.0.0/8
Live Restore Enabled: true

Linux node1 4.13.15-300.fc27.x86_64 #1 SMP Tue Nov 21 21:10:22 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux

cpu flags (with vmx on):
fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx rdtscp lm constant_tsc arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx est tm2 ssse3 cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm cpuid_fault epb tpr_shadow vnmi flexpriority ept vpid fsgsbase smep erms xsaveopt dtherm ida arat pln pts

gVisor head commit: 4094480

prattmic (Member) commented:
Could you post one of the complete processor blocks from /proc/cpuinfo (for the processor model, etc.)?

Is this using nested virtualization, i.e., are you running bazel test inside a VM?
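(A quick way to check, just as a sketch; any VM-detection method is fine:)

# Prints "none" on bare metal, otherwise the hypervisor name.
systemd-detect-virt

# The "hypervisor" CPU flag is only set when running under a hypervisor; 0 means bare metal.
grep -c '\bhypervisor\b' /proc/cpuinfo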

KVM tracepoints may help diagnose this. To collect, run sudo perf record -a -e "kvm:*", then bazel test //pkg/sentry/platform/kvm:kvm_test while perf is running. Then kill perf and post the output of perf script.
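Roughly, the collection steps look like this (a sketch; run the recording and the test in separate shells, and the output file name is arbitrary):

# Shell 1: record all KVM tracepoints system-wide.
sudo perf record -a -e "kvm:*"

# Shell 2: run the failing test while perf is recording.
bazel test //pkg/sentry/platform/kvm:kvm_test

# Back in shell 1: stop perf with Ctrl-C, then dump the trace for posting.
sudo perf script > kvm_trace.txt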

newmanwang (Contributor, Author) commented on Sep 25, 2018

Thank you for your reply! I ran the test on bare metal, not in a VM.

/proc/cpuinfo:

processor	: 3
vendor_id	: GenuineIntel
cpu family	: 6
model		: 58
model name	: Intel(R) Core(TM) i5-3210M CPU @ 2.50GHz
stepping	: 9
microcode	: 0x1c
cpu MHz		: 2494.119
cache size	: 3072 KB
physical id	: 0
siblings	: 4
core id		: 1
cpu cores	: 2
apicid		: 3
initial apicid	: 3
fpu		: yes
fpu_exception	: yes
cpuid level	: 13
wp		: yes
flags		: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx rdtscp lm constant_tsc arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx est tm2 ssse3 cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm cpuid_fault epb tpr_shadow vnmi flexpriority ept vpid fsgsbase smep erms xsaveopt dtherm ida arat pln pts
bugs		:
bogomips	: 4988.23
clflush size	: 64
cache_alignment	: 64
address sizes	: 36 bits physical, 48 bits virtual

perf script output for bazel test //pkg/sentry/platform/kvm:kvm_test:

        kvm_test 10555 [001]  6672.171156:       kvm:kvm_update_master_clock: masterclock 0 hostclock tsc offsetmatched 0
        kvm_test 10555 [001]  6672.174087:          kvm:kvm_write_tsc_offset: vcpu=0 prev=0 next=18446741041187171649
        kvm_test 10555 [001]  6672.174090:                 kvm:kvm_track_tsc: vcpu_id 0 masterclock 0 offsetmatched 0 nr_online 1 hostclock tsc
        kvm_test 10555 [001]  6672.174140:          kvm:kvm_write_tsc_offset: vcpu=0 prev=18446741041187171649 next=18446744073709549185
        kvm_test 10555 [001]  6672.174140:                 kvm:kvm_track_tsc: vcpu_id 0 masterclock 0 offsetmatched 0 nr_online 1 hostclock tsc
        kvm_test 10555 [001]  6672.174155:       kvm:kvm_update_master_clock: masterclock 1 hostclock tsc offsetmatched 1
        kvm_test 10555 [001]  6672.174178:                       kvm:kvm_fpu: load
        kvm_test 10555 [001]  6672.174179:                     kvm:kvm_entry: vcpu 0
        kvm_test 10555 [001]  6672.174184:                      kvm:kvm_exit: reason EPT_VIOLATION rip 0x5bbfd0 info 81 0
        kvm_test 10555 [001]  6672.174185:                kvm:kvm_page_fault: address 180100000 error_code 81
        kvm_test 10555 [001]  6672.174193:                     kvm:kvm_entry: vcpu 0
        kvm_test 10555 [001]  6672.174195:                      kvm:kvm_exit: reason EPT_VIOLATION rip 0x5bbfd0 info 81 0
        kvm_test 10555 [001]  6672.174195:                kvm:kvm_page_fault: address 180101000 error_code 81
        kvm_test 10555 [001]  6672.174197:                     kvm:kvm_entry: vcpu 0
        kvm_test 10555 [001]  6672.174198:                      kvm:kvm_exit: reason EPT_VIOLATION rip 0x5bbfd0 info 81 0
        kvm_test 10555 [001]  6672.174199:                kvm:kvm_page_fault: address 180102010 error_code 81
        kvm_test 10555 [001]  6672.174200:                     kvm:kvm_entry: vcpu 0
        kvm_test 10555 [001]  6672.174201:                      kvm:kvm_exit: reason EPT_VIOLATION rip 0x5bbfd0 info 184 0
        kvm_test 10555 [001]  6672.174201:                kvm:kvm_page_fault: address 1005bbfd0 error_code 184
        kvm_test 10555 [001]  6672.174207:                     kvm:kvm_entry: vcpu 0
        kvm_test 10555 [001]  6672.174208:                      kvm:kvm_exit: reason EPT_VIOLATION rip 0x5bbfd0 info 81 0
        kvm_test 10555 [001]  6672.174208:                kvm:kvm_page_fault: address 180107800 error_code 81
        kvm_test 10555 [001]  6672.174210:                     kvm:kvm_entry: vcpu 0
        kvm_test 10555 [001]  6672.174211:                      kvm:kvm_exit: reason EPT_VIOLATION rip 0x5bbfd0 info 81 0
        kvm_test 10555 [001]  6672.174212:                kvm:kvm_page_fault: address 18010a800 error_code 81
        kvm_test 10555 [001]  6672.174213:             kvm:kvm_inj_exception: #PF (0x9)
        kvm_test 10555 [001]  6672.174214:                     kvm:kvm_entry: vcpu 0
        kvm_test 10555 [001]  6672.174216:                      kvm:kvm_exit: reason TRIPLE_FAULT rip 0x5bbfd0 info 0 0
        kvm_test 10555 [001]  6672.174217:            kvm:kvm_userspace_exit: reason KVM_EXIT_SHUTDOWN (8)
        kvm_test 10555 [001]  6672.174218:                       kvm:kvm_fpu: unload
        kvm_test 10555 [002]  6672.177866:         kvm:kvm_hv_stimer_cleanup: vcpu_id 0 timer 0
        kvm_test 10555 [002]  6672.177868:         kvm:kvm_hv_stimer_cleanup: vcpu_id 0 timer 1
        kvm_test 10555 [002]  6672.177869:         kvm:kvm_hv_stimer_cleanup: vcpu_id 0 timer 2
        kvm_test 10555 [002]  6672.177869:         kvm:kvm_hv_stimer_cleanup: vcpu_id 0 timer 3

fvoznika assigned fvoznika and prattmic and unassigned fvoznika on Jan 11, 2019
ianlewis added the type: bug and platform: kvm labels on Jan 17, 2019
prattmic assigned amscanne and unassigned prattmic on Apr 3, 2020
github-actions bot commented:

A friendly reminder that this issue has had no activity for 120 days.

github-actions bot added the stale-issue label on Sep 15, 2023
This issue has been closed due to lack of activity.
