
Bug: [SYCL] linker fails with undefined reference to symbol #9490

Closed
qnixsynapse opened this issue Sep 15, 2024 · 3 comments
Labels: bug-unconfirmed, high severity

Comments

qnixsynapse (Contributor) commented Sep 15, 2024

What happened?

With the latest master branch, the linker fails with "undefined reference to symbol '_ZNK4sycl3_V16device8get_infoINS0_3ext5intel4info6device9device_idEEENS0_6detail19is_device_info_descIT_E11return_typeEv'".
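For reference, the mangled name can be demangled with c++filt; it corresponds to sycl::_V1::device::get_info<sycl::_V1::ext::intel::info::device::device_id>() const, i.e. a function provided by the SYCL runtime library (libsycl):

# demangle the missing symbol with binutils' c++filt
echo "_ZNK4sycl3_V16device8get_infoINS0_3ext5intel4info6device9device_idEEENS0_6detail19is_device_info_descIT_E11return_typeEv" | c++filt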

Name and Version

Latest Master; commit hash: 822b632
intel-oneapi-basekit: 2024.1.0.596-3
Intel Compute Runtime: 24.31.30508.7-1

What operating system are you seeing the problem on?

Linux

Relevant log output

[100%] Linking CXX executable ../../bin/llama-server
cd /home/qnixsynapse/Public/Projects/llama.cpp/build/examples/server && /usr/bin/cmake -E cmake_link_script CMakeFiles/llama-server.dir/link.txt --verbose=1
/opt/intel/oneapi/compiler/2024.1/bin/icpx -O3 -DNDEBUG "CMakeFiles/llama-server.dir/server.cpp.o" -o ../../bin/llama-server  ../../common/libcommon.a ../../src/libllama.a ../../ggml/src/libggml.a /opt/intel/oneapi/compiler/2024.1/lib/libiomp5.so /usr/lib64/libpthread.a /opt/intel/oneapi/dnnl/2024.1/lib/libdnnl.so.3.4 -lOpenCL -lpthread /opt/intel/oneapi/tbb/2021.12/lib/intel64/gcc4.8/libtbb.so.12 -lOpenCL -lmkl_core -lpthread -lm -ldl -lmkl_sycl_blas -lmkl_intel_ilp64 -lmkl_tbb_thread -lm
/usr/bin/ld: ../../ggml/src/libggml.a(ggml-sycl.cpp.o): undefined reference to symbol '_ZNK4sycl3_V16device8get_infoINS0_3ext5intel4info6device9device_idEEENS0_6detail19is_device_info_descIT_E11return_typeEv'
/usr/bin/ld: /opt/intel/oneapi/compiler/2024.1/lib/libsycl.so.7: error adding symbols: DSO missing from command line
icpx: error: linker command failed with exit code 1 (use -v to see invocation)
make[3]: *** [examples/server/CMakeFiles/llama-server.dir/build.make:169: bin/llama-server] Error 1
make[3]: Leaving directory '/home/qnixsynapse/Public/Projects/llama.cpp/build'
make[2]: *** [CMakeFiles/Makefile2:3277: examples/server/CMakeFiles/llama-server.dir/all] Error 2
make[2]: Leaving directory '/home/qnixsynapse/Public/Projects/llama.cpp/build'
make[1]: *** [CMakeFiles/Makefile2:3284: examples/server/CMakeFiles/llama-server.dir/rule] Error 2
make[1]: Leaving directory '/home/qnixsynapse/Public/Projects/llama.cpp/build'
make: *** [Makefile:1297: llama-server] Error 2
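For context, ld's "DSO missing from command line" means libggml.a uses a symbol from libsycl.so.7, but that library is never passed on the link line. The proper fix went into the build system (see the next comment); a rough local workaround, assuming oneAPI 2024.1 under /opt/intel/oneapi and not the actual upstream change, would be to rerun the failing link step with SYCL linking enabled:

# -fsycl makes icpx add the SYCL runtime (libsycl) to the link line;
# objects and libraries are the same ones shown in the failing command above
/opt/intel/oneapi/compiler/2024.1/bin/icpx -fsycl -O3 -DNDEBUG CMakeFiles/llama-server.dir/server.cpp.o -o ../../bin/llama-server \
    ../../common/libcommon.a ../../src/libllama.a ../../ggml/src/libggml.a \
    ... (remaining libraries exactly as in the log above)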
qnixsynapse added the bug-unconfirmed and high severity labels on Sep 15, 2024
airMeng (Collaborator) commented Sep 16, 2024

Should be fixed in #9497, feel free to reopen.

airMeng closed this as completed Sep 16, 2024
qnixsynapse (Contributor, Author) commented
Thank you!

qnixsynapse (Contributor, Author) commented Sep 16, 2024

Builds correctly but crashes at runtime with this error: No kernel named _ZTSZL14scale_f32_syclPKfPffiPN4sycl3_V15queueEEUlNS3_7nd_itemILi3EEEE_ was found -46 (PI_ERROR_INVALID_KERNEL_NAME) Exception caught at file:ggml/src/ggml-sycl.cpp, line:2741

Edit: Looks like it can't find any kernels: TANH(type=f32,ne_a=[128,2,2,2],v=0): No kernel named _ZTSZL13tanh_f32_syclPKfPfiPN4sycl3_V15queueEEUlNS3_7nd_itemILi3EEEE_ was found -46 (PI_ERROR_INVALID_KERNEL_NAME)

Edit: Had to turn on SHARED_LIBS to fix it.
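For reference, a configuration along these lines matches the working setup described above. SHARED_LIBS here presumably refers to CMake's standard BUILD_SHARED_LIBS option, and the SYCL flag names (GGML_SYCL, icx/icpx) are the ones documented for SYCL builds around this time; they may differ in other revisions.

# assumes the oneAPI environment has been sourced first
source /opt/intel/oneapi/setvars.sh
cmake -B build -DGGML_SYCL=ON -DCMAKE_C_COMPILER=icx -DCMAKE_CXX_COMPILER=icpx -DBUILD_SHARED_LIBS=ON
cmake --build build --config Release -j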
