From 12f40a9e03d3ba6c44dbc43da16435fedecb650f Mon Sep 17 00:00:00 2001 From: Ouadie EL FAROUKI Date: Mon, 18 Mar 2024 11:10:02 +0000 Subject: [PATCH 01/16] Revisited Readme-sycl --- README-sycl.md | 448 ++++++++++++++++++++++++------------------------- 1 file changed, 215 insertions(+), 233 deletions(-) diff --git a/README-sycl.md b/README-sycl.md index 9359a94901677..9127796ad74ef 100644 --- a/README-sycl.md +++ b/README-sycl.md @@ -3,7 +3,7 @@ - [Background](#background) - [News](#news) - [OS](#os) -- [Intel GPU](#intel-gpu) +- [Supported Devices](#supported-devices) - [Docker](#docker) - [Linux](#linux) - [Windows](#windows) @@ -14,17 +14,25 @@ ## Background -SYCL is a higher-level programming model to improve programming productivity on various hardware accelerators—such as CPUs, GPUs, and FPGAs. It is a single-source embedded domain-specific language based on pure C++17. +**SYCL** is a high-level parallel programming model designed to improve developers productivity writing code across various hardware accelerators such as CPUs, GPUs, and FPGAs. It is a single-source language designed for heterogeneous computing and based on standard C++17. -oneAPI is a specification that is open and standards-based, supporting multiple architecture types including but not limited to GPU, CPU, and FPGA. The spec has both direct programming and API-based programming paradigms. +**oneAPI** is an open ecosystem and a standard-based specification, supporting multiple architectures including but not limited to intel CPUs, GPUs and FPGAs. The key components of the oneAPI ecosystem include : -Intel uses the SYCL as direct programming language to support CPU, GPUs and FPGAs. +- **DPCPP** *(Data Parallel C++)* : The primary oneAPI SYCL implementation, which includes the icpx/icx Compilers. +- **oneAPI Libraries** : A set of highly optimized libraries targeting multiple domains *(e.g. oneMKL - Math Kernel Library)*. 
+- **oneAPI LevelZero** : A high performance low level interface for fine-grained control over intel iGPUs and dGPUs. +- **Nvidia & AMD Plugins** : These are plugins extending oneAPI's DPCPP support to SYCL on Nvidia and AMD GPU targets. -To avoid to re-invent the wheel, this code refer other code paths in llama.cpp (like OpenBLAS, cuBLAS, CLBlast). We use a open-source tool [SYCLomatic](https://github.com/oneapi-src/SYCLomatic) (Commercial release [Intel® DPC++ Compatibility Tool](https://www.intel.com/content/www/us/en/developer/tools/oneapi/dpc-compatibility-tool.html)) migrate to SYCL. +### Llama.cpp + SYCL +To avoid re-inventing the wheel, this SYCL "backend" follows the same design found in other llama.cpp BLAS-based paths such as * OpenBLAS, cuBLAS, CLBlast etc..*. The oneAPI's [SYCLomatic](https://github.com/oneapi-src/SYCLomatic) open-source migration tool (Commercial release [Intel® DPC++ Compatibility Tool](https://www.intel.com/content/www/us/en/developer/tools/oneapi/dpc-compatibility-tool.html)) was used for this purpose. -The llama.cpp for SYCL is used to support Intel GPUs. +The llama.cpp for SYCL is used to support: +- Intel GPUs. +- Nvidia GPUs. -For Intel CPU, recommend to use llama.cpp for X86 (Intel MKL building). +*Upcoming support : AMD GPUs*. + +For **Intel CPUs**, it is recommend to use llama.cpp for [x86](README.md#intel-onemkl) approach. ## News @@ -49,9 +57,16 @@ For Intel CPU, recommend to use llama.cpp for X86 (Intel MKL building). |Windows|Support|Windows 11| -## Intel GPU +## Supported devices -### Verified +### intel GPUs + +The BLAS acceleration oneAPI Math Kernel Library which comes with the oneAPI base-toolkit natively supports intel GPUs. 
In order to make it "visible" while building/running llama.cpp, simply run the following :
```sh
source /opt/intel/oneapi/setvars.sh
```

- **Tested devices**

|Intel GPU| Status | Verified Model|
|-|-|-|
|Intel Data Center Max Series| Support| Max 1550|
|Intel Data Center Flex Series| Support| Flex 170|
|Intel Arc Series| Support| Arc 770, 730M|
|Intel built-in Arc GPU| Support| built-in Arc GPU in Meteor Lake|
|Intel iGPU| Support| iGPU in i5-1250P, i7-1260P, i7-1165G7|

-Note: If the EUs (Execution Unit) in iGPU is less than 80, the inference speed will be too slow to use.
-
-### Memory
+*Notes:*

-The memory is a limitation to run LLM on GPUs.
+- Device memory can be a limitation when running a large model on an Intel GPU. The loaded model size, *`llm_load_tensors: buffer size`*, is displayed in the log when running `./bin/main`.

-When run llama.cpp, there is print log to show the applied memory on GPU. You could know how much memory to be used in your case. Like `llm_load_tensors: buffer size = 3577.56 MiB`.
+- Please make sure the GPU shared memory from the host is large enough to account for the model's size. For example, *llama-2-7b.Q4_0* requires at least 8.0GB for integrated GPUs and 4.0GB for discrete GPUs.

-For iGPU, please make sure the shared memory from host memory is enough. For llama-2-7b.Q4_0, recommend the host memory is 8GB+.
+- If the iGPU has less than 80 EUs *(Execution Units)*, the inference speed will likely be too slow for practical use.

-For dGPU, please make sure the device memory is enough. For llama-2-7b.Q4_0, recommend the device memory is 4GB+.
+### Nvidia GPUs
+The BLAS acceleration on Nvidia GPUs through oneAPI can be obtained using the Nvidia plugins for oneAPI and the cuBLAS backend of the upstream oneMKL library. Details and instructions on how to set up the runtime and library can be found in [this section](#i-setup-environment).

-## Nvidia GPU

-### Verified
+- **Tested devices**

-|Intel GPU| Status | Verified Model|
+|Nvidia GPU| Status | Verified Model|
 |-|-|-|
-|Ampere Series| Support| A100|
+|Ampere Series| Support| A100, A4000|
+|Ampere Series *(Mobile)*| Support| RTX 40 Series|

-### oneMKL
+*Notes:*
+ - Support for Nvidia targets through oneAPI is currently limited to Linux platforms.

-The current oneMKL release does not contain the oneMKL cuBlas backend.
-As a result for Nvidia GPU's oneMKL must be built from source.
+ - Please make sure the native oneAPI MKL *(dedicated to Intel CPUs and GPUs)* is not "visible" at this stage, to properly set up and use the built-from-source oneMKL with the cuBLAS backend in llama.cpp for Nvidia GPUs.

-```
-git clone https://github.com/oneapi-src/oneMKL
-cd oneMKL
-mkdir build
-cd build
-cmake -G Ninja .. -DCMAKE_CXX_COMPILER=icpx -DCMAKE_C_COMPILER=icx -DENABLE_MKLGPU_BACKEND=OFF -DENABLE_MKLCPU_BACKEND=OFF -DENABLE_CUBLAS_BACKEND=ON
-ninja
-// Add paths as necessary
-```

 ## Docker
-
-Note:
-- Only docker on Linux is tested. Docker on WSL may not work.
-- You may need to install Intel GPU driver on the host machine (See the [Linux](#linux) section to know how to do that)
-
-### Build the image
-
-You can choose between **F16** and **F32** build. F16 is faster for long-prompt inference.
-
-
+The docker build option is currently limited to *Intel GPU* targets.
+### Build image
 ```sh
-# For F16:
-#docker build -t llama-cpp-sycl --build-arg="LLAMA_SYCL_F16=ON" -f .devops/main-intel.Dockerfile .
-
-# Or, for F32:
-docker build -t llama-cpp-sycl -f .devops/main-intel.Dockerfile .
-
-# Note: you can also use the ".devops/main-server.Dockerfile", which compiles the "server" example
+docker build -t llama-cpp-sycl --build-arg="LLAMA_SYCL_F16=[OFF|ON]" -f .devops/main-intel.Dockerfile .
```

-### Run
+*Note*: you can also use the `.devops/server-intel.Dockerfile`, which builds the *"server"* alternative.
+
+### Run container

 ```sh
-# Firstly, find all the DRI cards:
+# First, find all the DRI cards
 ls -la /dev/dri
-# Then, pick the card that you want to use.
-
-# For example with "/dev/dri/card1"
+# Then, pick the card that you want to use (e.g. /dev/dri/card1).
 docker run -it --rm -v "$(pwd):/app:Z" --device /dev/dri/renderD128:/dev/dri/renderD128 --device /dev/dri/card1:/dev/dri/card1 llama-cpp-sycl -m "/app/models/YOUR_MODEL_FILE" -p "Building a website can be done in 10 simple steps:" -n 400 -e -ngl 33
 ```

+*Notes:*
+- Docker has been tested successfully on native Linux. WSL support has not been verified yet.
+- You may need to install the Intel GPU driver on the **host** machine *(please refer to the [Linux configuration](#linux) for details)*.
+
 ## Linux

-### Setup Environment
+### I. Setup Environment
+
+1. **Install GPU drivers**

-1. Install Intel GPU driver.
+  - **Intel GPU**

-a. Please install Intel GPU driver by official guide: [Install GPU Drivers](https://dgpu-docs.intel.com/driver/installation.html).
+The Intel data center GPU driver installation guide and download page can be found here: [Get Intel dGPU Drivers](https://dgpu-docs.intel.com/driver/installation.html#ubuntu-install-steps).

-Note: for iGPU, please install the client GPU driver.
+*Note*: for client GPUs *(iGPU & Arc A-Series)*, please refer to the [client iGPU driver installation](https://dgpu-docs.intel.com/driver/client/overview.html).

-b. Add user to group: video, render.
+Once installed, please add the user(s) to the `video` and `render` groups:

```sh
-sudo usermod -aG render username
-sudo usermod -aG video username
+sudo usermod -aG render $USER
+sudo usermod -aG video $USER
```

-Note: re-login to enable it.
+*Note*: log out and log back in for the changes to take effect.

-c.
Check +Verify installation through `clinfo`: ```sh sudo apt install clinfo sudo clinfo -l ``` -Output (example): +Sample output: -``` +```sh Platform #0: Intel(R) OpenCL Graphics `-- Device #0: Intel(R) Arc(TM) A770 Graphics - Platform #0: Intel(R) OpenCL HD Graphics `-- Device #0: Intel(R) Iris(R) Xe Graphics [0x9a49] ``` -2. Install Intel® oneAPI Base toolkit. +- **Nvidia GPU** + +In order to target Nvidia GPUs through SYCL, please make sure the CUDA/CUBLAS native requirements *-found [here](README.md#cublas)-* are installed. +Installation can be verified by running the following : +```sh +nvidia-smi +``` +Please make sure at least one CUDA device is available, which can be displayed like this *(here an A100-40GB Nvidia GPU)* : +``` ++---------------------------------------------------------------------------------------+ +| NVIDIA-SMI 535.54.03 Driver Version: 535.54.03 CUDA Version: 12.2 | +|-----------------------------------------+----------------------+----------------------+ +| GPU Name Persistence-M | Bus-Id Disp.A | Volatile Uncorr. ECC | +| Fan Temp Perf Pwr:Usage/Cap | Memory-Usage | GPU-Util Compute M. | +| | | MIG M. | +|=========================================+======================+======================| +| 0 NVIDIA A100-PCIE-40GB On | 00000000:8D:00.0 Off | 0 | +| N/A 36C P0 57W / 250W | 4MiB / 40960MiB | 0% Default | +| | | Disabled | ++-----------------------------------------+----------------------+----------------------+ +``` + -a. Please follow the procedure in [Get the Intel® oneAPI Base Toolkit ](https://www.intel.com/content/www/us/en/developer/tools/oneapi/base-toolkit.html). +2. **Install Intel® oneAPI Base toolkit** -Recommend to install to default folder: **/opt/intel/oneapi**. +- **Base installation** -Following guide use the default folder as example. If you use other folder, please modify the following guide info with your folder. 
+The base toolkit can be obtained from the official [Intel® oneAPI Base Toolkit ](https://www.intel.com/content/www/us/en/developer/tools/oneapi/base-toolkit.html) page. -b. Check +Please follow the instructions for downloading and installing the Toolkit for Linux, and preferably keep the default installation values unchanged, notably the installation path *(`/opt/intel/oneapi` by default)*. + +Following guidelines/code snippets assume the default installation values. Otherwise, please make sure the necessary changes are reflected where applicable. + +Upon a successful installation, SYCL is enabled for the available intel devices, along with relevant libraries such as oneAPI MKL for intel GPUs. + +- **Bringing support to Nvidia GPUs** + +**oneAPI** : In order to enable SYCL support on Nvidia GPUs through oneAPI, please install the [Codeplay oneAPI Plugin for Nvidia GPUs](https://developer.codeplay.com/products/oneapi/nvidia/download). User should also make sure the plugin version matches the installed base toolkit one *(previous step)* for a seamless "oneAPI on Nvidia GPU" setup. + + +**oneMKL** : The current oneMKL releases *(shipped with the oneAPI base-toolkit)* does not contain the cuBLAS backend. A build from source of the upstream [oneMKL](https://github.com/oneapi-src/oneMKL) with the *cuBLAS* backend enabled is thus required to run it on Nvidia GPUs. ```sh -source /opt/intel/oneapi/setvars.sh +git clone https://github.com/oneapi-src/oneMKL +cd oneMKL +mkdir -p buildWithCublas && cd buildWithCublas +cmake ../ -DCMAKE_CXX_COMPILER=icpx -DCMAKE_C_COMPILER=icx -DENABLE_MKLGPU_BACKEND=OFF -DENABLE_MKLCPU_BACKEND=OFF -DENABLE_CUBLAS_BACKEND=ON -DTARGET_DOMAINS=blas +make +``` + +3. **Verify installation and environment** + +In order to check the available SYCL devices on the machine, please use the `sycl-ls` command. +```sh +source /opt/intel/oneapi/setvars.sh sycl-ls ``` -There should be one or more level-zero devices. 
Please confirm that at least one GPU is present, like **[ext_oneapi_level_zero:gpu:0]**. +- **Intel GPU** + +When targeting an intel GPU, the user should expect one or more level-zero devices among the available SYCL devices. Please make sure that at least one GPU is present, for instance [`ext_oneapi_level_zero:gpu:0`] in the sample output below : -Output (example): ``` [opencl:acc:0] Intel(R) FPGA Emulation Platform for OpenCL(TM), Intel(R) FPGA Emulation Device OpenCL 1.2 [2023.16.10.0.17_160000] [opencl:cpu:1] Intel(R) OpenCL, 13th Gen Intel(R) Core(TM) i7-13700K OpenCL 3.0 (Build 0) [2023.16.10.0.17_160000] [opencl:gpu:2] Intel(R) OpenCL Graphics, Intel(R) Arc(TM) A770 Graphics OpenCL 3.0 NEO [23.30.26918.50] [ext_oneapi_level_zero:gpu:0] Intel(R) Level-Zero, Intel(R) Arc(TM) A770 Graphics 1.3 [1.3.26918] - ``` -2. Build locally: +- **Nvidia GPU** -Note: -- You can choose between **F16** and **F32** build. F16 is faster for long-prompt inference. -- By default, it will build for all binary files. It will take more time. To reduce the time, we recommend to build for **example/main** only. +Similarly, user targetting Nvidia GPUs should expect at least one SYCL-CUDA device [`ext_oneapi_cuda:gpu`] as bellow : +``` +[opencl:acc:0] Intel(R) FPGA Emulation Platform for OpenCL(TM), Intel(R) FPGA Emulation Device OpenCL 1.2 [2023.16.12.0.12_195853.xmain-hotfix] +[opencl:cpu:1] Intel(R) OpenCL, Intel(R) Xeon(R) Gold 6326 CPU @ 2.90GHz OpenCL 3.0 (Build 0) [2023.16.12.0.12_195853.xmain-hotfix] +[ext_oneapi_cuda:gpu:0] NVIDIA CUDA BACKEND, NVIDIA A100-PCIE-40GB 8.0 [CUDA 12.2] +``` +### II. Build llama.cpp + +#### Intel GPU ```sh -mkdir -p build -cd build +# Export relevant ENV variables source /opt/intel/oneapi/setvars.sh -# For FP16: -#cmake .. -DLLAMA_SYCL=ON -DCMAKE_C_COMPILER=icx -DCMAKE_CXX_COMPILER=icpx -DLLAMA_SYCL_F16=ON - -# Or, for FP32: -cmake .. -DLLAMA_SYCL=ON -DCMAKE_C_COMPILER=icx -DCMAKE_CXX_COMPILER=icpx - -# For Nvidia GPUs -cmake .. 
-DLLAMA_SYCL=ON -DLLAMA_SYCL_TARGET=NVIDIA -DCMAKE_C_COMPILER=icx -DCMAKE_CXX_COMPILER=icpx - -# Build example/main only -#cmake --build . --config Release --target main - -# Or, build all binary -cmake --build . --config Release -v - -cd .. +# Build LLAMA with MKL BLAS acceleration for intel GPU +mkdir -p build && cd build +cmake .. -DLLAMA_SYCL=ON -DCMAKE_C_COMPILER=icx -DCMAKE_CXX_COMPILER=icpx -DLLAMA_SYCL_F16=[OFF|ON] ``` -or - +#### Nvidia GPU ```sh -./examples/sycl/build.sh +# Export relevant ENV variables +export LD_LIBRARY_PATH=/path/to/oneMKL/buildWithCublas/lib:$LD_LIBRARY_PATH +export LIBRARY_PATH=/path/to/oneMKL/buildWithCublas/lib:$LIBRARY_PATH +export CPLUS_INCLUDE_DIR=/path/to/oneMKL/buildWithCublas/include:$CPLUS_INCLUDE_DIR +export CPLUS_INCLUDE_DIR=/path/to/oneMKL/include:$CPLUS_INCLUDE_DIR + +# Build LLAMA with Nvidia BLAS acceleration through SYCL +mkdir -p build && cd build +cmake .. -DLLAMA_SYCL=ON -DLLAMA_SYCL_TARGET=NVIDIA -DCMAKE_C_COMPILER=icx -DCMAKE_CXX_COMPILER=icpx ``` -### Run +*Notes :* +- The **F32** build is enabled by default, but the **F16** yields better performance for long-prompt inference. -1. Put model file to folder **models** +### III. Run the inference -You could download [llama-2-7b.Q4_0.gguf](https://huggingface.co/TheBloke/Llama-2-7B-GGUF/blob/main/llama-2-7b.Q4_0.gguf) as example. +1. Retrieve and prepare model + +You can refer to the general [*Prepare and Quantize*](README#prepare-and-quantize) guide for model prepration, or simply download [llama-2-7b.Q4_0.gguf](https://huggingface.co/TheBloke/Llama-2-7B-GGUF/blob/main/llama-2-7b.Q4_0.gguf) model as example. 2. Enable oneAPI running environment -``` +```sh source /opt/intel/oneapi/setvars.sh ``` -3. List device ID +3. 
List devices information

-Run without parameter:
+Similar to the native `sycl-ls`, available SYCL devices can be queried as follows:

```sh
./build/bin/ls-sycl-device
-
-# or running the "main" executable and look at the output log:
-
-./build/bin/main
```
-
-Check the ID in startup log, like:
-
+An example of such a log on a system with 1 *Intel CPU* and 1 *Intel GPU* can look like the following:
```
found 4 SYCL devices:
 Device 0: Intel(R) Arc(TM) A770 Graphics, compute capability 1.3,
 max compute_units 24, max work group size 8192, max sub group size 64, global mem size 67065057280
 Device 3: Intel(R) Arc(TM) A770 Graphics, compute capability 3.0,
 max compute_units 512, max work group size 1024, max sub group size 32, global mem size 16225243136
-
```

|Attribute|Note|
|-|-|
-|compute capability 1.3|Level-zero running time, recommended |
-|compute capability 3.0|OpenCL running time, slower than level-zero in most cases|
+|compute capability 1.3|Level-zero driver/runtime, recommended|
+|compute capability 3.0|OpenCL driver/runtime, slower than level-zero in most cases|

-4. Set device ID and execute llama.cpp
+4. Launch inference

-Set device ID = 0 by **GGML_SYCL_DEVICE=0**
+For instance, to target the SYCL device with *ID*=0 *(from the log of the previous command)*, simply set `GGML_SYCL_DEVICE=0`.

```sh
GGML_SYCL_DEVICE=0 ./build/bin/main -m models/llama-2-7b.Q4_0.gguf -p "Building a website can be done in 10 simple steps:" -n 400 -e -ngl 33
```
-or run by script:
+
+Otherwise, you can run the script:

```sh
./examples/sycl/run_llama2.sh
```

-Note:
-
-- By default, mmap is used to read model file. In some cases, it leads to the hang issue. Recommend to use parameter **--no-mmap** to disable mmap() to skip this issue.
-
+*Notes:*

-5. Check the device ID in output
-
-Like:
-```
-Using device **0** (Intel(R) Arc(TM) A770 Graphics) as main device
-```
+- By default, `mmap` is used to read the model file.
In some cases, it causes runtime hang issues. Please disable it by passing `--no-mmap` to `./bin/main` if faced with the issue.

## Windows

-### Setup Environment
+### I. Setup Environment

-1. Install Intel GPU driver.
+1. Install GPU driver

-Please install Intel GPU driver by official guide: [Install GPU Drivers](https://www.intel.com/content/www/us/en/products/docs/discrete-gpus/arc/software/drivers.html).
+The Intel GPU driver installation guide and download page can be found here: [Get Intel GPU Drivers](https://www.intel.com/content/www/us/en/products/docs/discrete-gpus/arc/software/drivers.html).

-Note: **The driver is mandatory for compute function**.
+2. Install Visual Studio

-2. Install Visual Studio.
+If you already have a recent version of Microsoft Visual Studio, you can skip this step. Otherwise, please refer to the official download page for [Microsoft Visual Studio](https://visualstudio.microsoft.com/).

-Please install [Visual Studio](https://visualstudio.microsoft.com/) which impact oneAPI environment enabling in Windows.
+3. Install Intel® oneAPI Base toolkit

-3. Install Intel® oneAPI Base toolkit.
+The base toolkit can be obtained from the official [Intel® oneAPI Base Toolkit ](https://www.intel.com/content/www/us/en/developer/tools/oneapi/base-toolkit.html) page.

-a. Please follow the procedure in [Get the Intel® oneAPI Base Toolkit ](https://www.intel.com/content/www/us/en/developer/tools/oneapi/base-toolkit.html).
+Please follow the instructions for downloading and installing the Toolkit for Windows, and preferably keep the default installation values unchanged, notably the installation path *(`C:\Program Files (x86)\Intel\oneAPI` by default)*.

-Recommend to install to default folder: **C:\Program Files (x86)\Intel\oneAPI**.
+Following guidelines/code snippets assume the default installation values.
Otherwise, please make sure the necessary changes are reflected where applicable. b. Enable oneAPI running environment: -- In Search, input 'oneAPI'. - -Search & open "Intel oneAPI command prompt for Intel 64 for Visual Studio 2022" +- Type "oneAPI" in the search bar, then open the `Intel oneAPI command prompt for Intel 64 for Visual Studio 2022` App. -- In Run: - -In CMD: +- On the command prompt, enable the runtime environment with the following : ``` "C:\Program Files (x86)\Intel\oneAPI\setvars.bat" intel64 ``` -c. Check GPU +c. Verify installation -In oneAPI command line: +In the oneAPI command line, run the following to print the available SYCL devices : ``` sycl-ls ``` -There should be one or more level-zero devices. Please confirm that at least one GPU is present, like **[ext_oneapi_level_zero:gpu:0]**. +There should be one or more *level-zero* GPU devices displayed as **[ext_oneapi_level_zero:gpu]**. Below is example of such output detecting an *intel Iris Xe* GPU as a Level-zero SYCL device : Output (example): ``` @@ -349,7 +371,7 @@ Output (example): [ext_oneapi_level_zero:gpu:0] Intel(R) Level-Zero, Intel(R) Iris(R) Xe Graphics 1.3 [1.3.28044] ``` -4. Install cmake & make +4. Install build tools a. Download & install cmake for Windows: https://cmake.org/download/ @@ -359,76 +381,53 @@ b. Download & install mingw-w64 make for Windows provided by w64devkit - Extract `w64devkit` on your pc. -- Add the **bin** folder path in the Windows system PATH environment, like `C:\xxx\w64devkit\bin\`. +- Add the **bin** folder path in the Windows system PATH environment (for e.g. `C:\xxx\w64devkit\bin\`). -### Build locally: +### II. Build llama.cpp -In oneAPI command line window: +On the oneAPI command line window, step into the llama.cpp main directory and run the following : ``` mkdir -p build cd build @call "C:\Program Files (x86)\Intel\oneAPI\setvars.bat" intel64 --force -:: for FP16 -:: faster for long-prompt inference -:: cmake -G "MinGW Makefiles" .. 
-DLLAMA_SYCL=ON -DCMAKE_C_COMPILER=icx -DCMAKE_CXX_COMPILER=icx -DCMAKE_BUILD_TYPE=Release -DLLAMA_SYCL_F16=ON - -:: for FP32 -cmake -G "MinGW Makefiles" .. -DLLAMA_SYCL=ON -DCMAKE_C_COMPILER=icx -DCMAKE_CXX_COMPILER=icx -DCMAKE_BUILD_TYPE=Release +cmake -G "MinGW Makefiles" .. -DLLAMA_SYCL=ON -DCMAKE_C_COMPILER=icx -DCMAKE_CXX_COMPILER=icx -DCMAKE_BUILD_TYPE=Release -DLLAMA_SYCL_F16=ON - -:: build example/main only -:: make main - -:: build all binary -make -j -cd .. +make ``` -or - -``` +Otherwise, run the `win-build-sycl.bat` wrapper which encapsulates the former instructions : +```sh .\examples\sycl\win-build-sycl.bat ``` -Note: +*Notes :* -- By default, it will build for all binary files. It will take more time. To reduce the time, we recommend to build for **example/main** only. +- By default, calling `make` will build all target binary files. In case of a minimal experimental setup, the user can build the inference executable only through `make main`. -### Run +### III. Run the inference -1. Put model file to folder **models** +1. Retrieve and prepare model -You could download [llama-2-7b.Q4_0.gguf](https://huggingface.co/TheBloke/Llama-2-7B-GGUF/blob/main/llama-2-7b.Q4_0.gguf) as example. +You can refer to the general [*Prepare and Quantize*](README#prepare-and-quantize) guide for model prepration, or simply download [llama-2-7b.Q4_0.gguf](https://huggingface.co/TheBloke/Llama-2-7B-GGUF/blob/main/llama-2-7b.Q4_0.gguf) model as example. 2. Enable oneAPI running environment -- In Search, input 'oneAPI'. - -Search & open "Intel oneAPI command prompt for Intel 64 for Visual Studio 2022" - -- In Run: - -In CMD: +On the oneAPI command line window, run the following and step into the llama.cpp directory : ``` "C:\Program Files (x86)\Intel\oneAPI\setvars.bat" intel64 ``` -3. List device ID +3. 
List devices information -Run without parameter: +Similar to the native `sycl-ls`, available SYCL devices can be queried as follow : ``` build\bin\ls-sycl-device.exe - -or - -build\bin\main.exe ``` -Check the ID in startup log, like: - +The output of this command in a system with 1 *intel CPU* and 1 *intel GPU* would look like the following : ``` found 4 SYCL devices: Device 0: Intel(R) Arc(TM) A770 Graphics, compute capability 1.3, @@ -439,7 +438,6 @@ found 4 SYCL devices: max compute_units 24, max work group size 8192, max sub group size 64, global mem size 67065057280 Device 3: Intel(R) Arc(TM) A770 Graphics, compute capability 3.0, max compute_units 512, max work group size 1024, max sub group size 32, global mem size 16225243136 - ``` |Attribute|Note| @@ -447,15 +445,15 @@ found 4 SYCL devices: |compute capability 1.3|Level-zero running time, recommended | |compute capability 3.0|OpenCL running time, slower than level-zero in most cases| -4. Set device ID and execute llama.cpp +4. Launch inference -Set device ID = 0 by **set GGML_SYCL_DEVICE=0** +Set device ID=0 with `set GGML_SYCL_DEVICE=0` to target the Level-zero intel GPU and run the main : ``` set GGML_SYCL_DEVICE=0 build\bin\main.exe -m models\llama-2-7b.Q4_0.gguf -p "Building a website can be done in 10 simple steps:\nStep 1:" -n 400 -e -ngl 33 -s 0 ``` -or run by script: +Otherwise, run the following wrapper script: ``` .\examples\sycl\win-run-llama2.bat @@ -463,29 +461,22 @@ or run by script: Note: -- By default, mmap is used to read model file. In some cases, it leads to the hang issue. Recommend to use parameter **--no-mmap** to disable mmap() to skip this issue. - +- By default, `mmap` is used to read model file. In some cases, it causes runtime hang issues. Please disable it by passing `--no-mmap` to the `main.exe` if faced with the issue. -5. 
Check the device ID in output - -Like: -``` -Using device **0** (Intel(R) Arc(TM) A770 Graphics) as main device -``` -## Environment Variable +## Environment Variables #### Build |Name|Value|Function| |-|-|-| -|LLAMA_SYCL|ON (mandatory)|Enable build with SYCL code path.
For FP32/FP16, LLAMA_SYCL=ON is mandatory.| -|LLAMA_SYCL_F16|ON (optional)|Enable FP16 build with SYCL code path. Faster for long-prompt inference.
For FP32, not set it.| -|CMAKE_C_COMPILER|icx|Use icx compiler for SYCL code path| -|CMAKE_CXX_COMPILER|icpx (Linux), icx (Windows)|use icpx/icx for SYCL code path| - -#### Running +|LLAMA_SYCL|ON (mandatory)|Enable build with SYCL code path.| +|LLAMA_SYCL_TARGET | INTEL *(default)* \| NVIDIA|Set the SYCL target device type.| +|LLAMA_SYCL_F16|OFF *(default)* \|ON *(optional)*|Enable FP16 build with SYCL code path.| +|CMAKE_C_COMPILER|icx|Set *icx* compiler for SYCL code path.| +|CMAKE_CXX_COMPILER|icpx *(Linux)*, icx *(Windows)*|Set `icpx/icx` compiler for SYCL code path.| +#### Runtime |Name|Value|Function| |-|-|-| @@ -493,49 +484,40 @@ Using device **0** (Intel(R) Arc(TM) A770 Graphics) as main device |GGML_SYCL_DEBUG|0 (default) or 1|Enable log function by macro: GGML_SYCL_DEBUG| |ZES_ENABLE_SYSMAN| 0 (default) or 1|Support to get free memory of GPU by sycl::aspect::ext_intel_free_memory.
Recommended to use when --split-mode = layer|

-## Known Issue
-
-- Hang during startup
+## Known Issues

-  llama.cpp use mmap as default way to read model file and copy to GPU. In some system, memcpy will be abnormal and block.
+- Hanging during startup

-  Solution: add **--no-mmap** or **--mmap 0**.
+  llama.cpp uses *mmap* as the default mode for reading the model file and copying it to the GPU. In some systems, `memcpy` might behave abnormally and therefore hang.

-- Split-mode: [row] is not supported
+  - **Solution**: add the `--no-mmap` or `--mmap 0` flag to the `main` executable.

-  It's on developing.
+- `Split-mode:[row]` is not supported.

 ## Q&A

 - Error: `error while loading shared libraries: libsycl.so.7: cannot open shared object file: No such file or directory`.

-  Miss to enable oneAPI running environment.
-
-  Install oneAPI base toolkit and enable it by: `source /opt/intel/oneapi/setvars.sh`.
+  - Potential cause: the oneAPI installation is unavailable or its environment variables are not set.
+  - Solution: install the *oneAPI base toolkit* and enable its environment through: `source /opt/intel/oneapi/setvars.sh`.

-- In Windows, no result, not error.
+- General compiler error :

-  Miss to enable oneAPI running environment.
+  - Remove build folder or try a clean-build.

-- Meet compile error.
+- I can **not** see `[ext_oneapi_level_zero:gpu]` after installing the GPU driver on Linux.

-  Remove folder **build** and try again.
+  Please double-check with `sudo sycl-ls`.

-- I can **not** see **[ext_oneapi_level_zero:gpu:0]** afer install GPU driver in Linux.
-
-  Please run **sudo sycl-ls**.
-
-  If you see it in result, please add video/render group to your ID:
+  If it's present in the list, please add video/render group to your user then **logout/login** or restart your system :

   ```
-  sudo usermod -aG render username
-  sudo usermod -aG video username
+  sudo usermod -aG render $USER
+  sudo usermod -aG video $USER
   ```
-  Then **relogin**.
-
-  If you do not see it, please check the installation GPU steps again.
+ Otherwise, please double-check the installation GPU steps. ## Todo -- Support multiple cards. +- Add support to multiple cards. From 5ba438a5e787d28e8bbd2572d0da4f5788a30600 Mon Sep 17 00:00:00 2001 From: Ouadie EL FAROUKI Date: Mon, 18 Mar 2024 14:35:31 +0000 Subject: [PATCH 02/16] Update README-sycl.md MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit Co-authored-by: Alberto Cabrera Pérez --- README-sycl.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/README-sycl.md b/README-sycl.md index 9127796ad74ef..1ff3f31395f1e 100644 --- a/README-sycl.md +++ b/README-sycl.md @@ -26,7 +26,7 @@ ### Llama.cpp + SYCL To avoid re-inventing the wheel, this SYCL "backend" follows the same design found in other llama.cpp BLAS-based paths such as * OpenBLAS, cuBLAS, CLBlast etc..*. The oneAPI's [SYCLomatic](https://github.com/oneapi-src/SYCLomatic) open-source migration tool (Commercial release [Intel® DPC++ Compatibility Tool](https://www.intel.com/content/www/us/en/developer/tools/oneapi/dpc-compatibility-tool.html)) was used for this purpose. -The llama.cpp for SYCL is used to support: +The llama.cpp SYCL backend supports: - Intel GPUs. - Nvidia GPUs. From 5857f345a9d23d17fbba77f33e2fe36fcfaaa639 Mon Sep 17 00:00:00 2001 From: Ouadie EL FAROUKI Date: Mon, 18 Mar 2024 14:37:57 +0000 Subject: [PATCH 03/16] Update README-sycl.md 2 MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit Co-authored-by: Alberto Cabrera Pérez --- README-sycl.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/README-sycl.md b/README-sycl.md index 1ff3f31395f1e..f678166e492c4 100644 --- a/README-sycl.md +++ b/README-sycl.md @@ -32,7 +32,7 @@ The llama.cpp SYCL backend supports: *Upcoming support : AMD GPUs*. -For **Intel CPUs**, it is recommend to use llama.cpp for [x86](README.md#intel-onemkl) approach. 
+For **Intel CPUs**, it is recommended to use the llama.cpp for [x86](README.md#intel-onemkl) approach.

 ## News

From 04302726a27aa1d0e3c7bc00672a1109c202d066 Mon Sep 17 00:00:00 2001
From: Ouadie EL FAROUKI
Date: Mon, 18 Mar 2024 14:38:07 +0000
Subject: [PATCH 04/16] Update README-sycl.md 3
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

Co-authored-by: Alberto Cabrera Pérez
---
 README-sycl.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/README-sycl.md b/README-sycl.md
index f678166e492c4..92b7ee119b46f 100644
--- a/README-sycl.md
+++ b/README-sycl.md
@@ -59,7 +59,7 @@ For **Intel CPUs**, it is recommended to use the llama.cpp for [x86](README.md#i

 ## Supported devices

-### intel GPUs
+### Intel GPUs

From 1d51a6f0091c5bc35628387e37ad2a4b37644e2a Mon Sep 17 00:00:00 2001
From: Ouadie EL FAROUKI
Date: Mon, 18 Mar 2024 14:38:29 +0000
Subject: [PATCH 05/16] Update README-sycl.md 4
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

Co-authored-by: Alberto Cabrera Pérez
---
 README-sycl.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/README-sycl.md b/README-sycl.md
index 92b7ee119b46f..6ff7f012e1856 100644
--- a/README-sycl.md
+++ b/README-sycl.md
@@ -61,7 +61,7 @@ For **Intel CPUs**, it is recommended to use the llama.cpp for [x86](README.md#i

 ### Intel GPUs

-The BLAS acceleration oneAPI Math Kernel Library which comes with the oneAPI base-toolkit natively supports intel GPUs. In order to make it "visible" while building/running llama.cpp, simply run the following :
+The BLAS acceleration oneAPI Math Kernel Library which comes with the oneAPI base-toolkit natively supports intel GPUs.
In order to make it "visible" while building/running llama.cpp, simply run the following: ```sh source /opt/intel/oneapi/setvars.sh ``` From 0fc1a04b1af77a3ce23f3e074341794aefdd3ae5 Mon Sep 17 00:00:00 2001 From: Ouadie EL FAROUKI Date: Mon, 18 Mar 2024 14:38:46 +0000 Subject: [PATCH 06/16] Update README-sycl.md 5 MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit Co-authored-by: Alberto Cabrera Pérez --- README-sycl.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/README-sycl.md b/README-sycl.md index 6ff7f012e1856..05e730e293781 100644 --- a/README-sycl.md +++ b/README-sycl.md @@ -461,7 +461,7 @@ Otherwise, run the following wrapper script: Note: -- By default, `mmap` is used to read model file. In some cases, it causes runtime hang issues. Please disable it by passing `--no-mmap` to the `main.exe` if faced with the issue. +- By default, `mmap` is used to read the model file. In some cases, it causes runtime hang issues. Please disable it by passing `--no-mmap` to the `main.exe` if faced with the issue. ## Environment Variables From 042f5769bc24b8593ef190c1342f18dd29d75dba Mon Sep 17 00:00:00 2001 From: Ouadie EL FAROUKI Date: Mon, 18 Mar 2024 14:40:34 +0000 Subject: [PATCH 07/16] Update README-sycl.md MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit Co-authored-by: Alberto Cabrera Pérez --- README-sycl.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/README-sycl.md b/README-sycl.md index 05e730e293781..64b5da168b453 100644 --- a/README-sycl.md +++ b/README-sycl.md @@ -501,7 +501,7 @@ Note: - Potential cause : Unavailable oneAPI installation or invisible ENV variables. - Solution : Install *oneAPI base toolkit* and enable its ENV through: `source /opt/intel/oneapi/setvars.sh`. -- General compiler error : +- General compiler error: - Remove build folder or try a clean-build. 
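Editorial note between patches 07 and 08: the two fixes above (the `--no-mmap` workaround and the "install the base toolkit and enable its ENV" troubleshooting entry) both revolve around the oneAPI environment being active. As a sketch only — the helper name `oneapi_env_ready` is invented for illustration, and the path assumes the default `/opt/intel/oneapi` install the README uses throughout — a pre-build sanity check could look like:

```shell
# Sketch: fail early when the oneAPI environment is not enabled.
# Assumes the default installation path (/opt/intel/oneapi).
oneapi_env_ready() {
    # setvars.sh exports ONEAPI_ROOT and puts the icpx compiler on PATH.
    [ -n "${ONEAPI_ROOT:-}" ] && command -v icpx >/dev/null 2>&1
}

if ! oneapi_env_ready; then
    if [ -f /opt/intel/oneapi/setvars.sh ]; then
        . /opt/intel/oneapi/setvars.sh
    else
        echo "oneAPI base toolkit not found; install it before building" >&2
    fi
fi
```

Running this before `cmake`/`make` avoids the "clean-build after fixing the environment" loop mentioned in the troubleshooting hunk above.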
From d046fefa7530fa25b0276f54ef9733453609f533 Mon Sep 17 00:00:00 2001 From: Ouadie EL FAROUKI Date: Mon, 18 Mar 2024 14:44:14 +0000 Subject: [PATCH 08/16] Update README-sycl.md 6 MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit Co-authored-by: Alberto Cabrera Pérez --- README-sycl.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/README-sycl.md b/README-sycl.md index 64b5da168b453..9073d34a461d4 100644 --- a/README-sycl.md +++ b/README-sycl.md @@ -509,7 +509,7 @@ Note: Please double-check with `sudo sycl-ls`. - If it's present in the list, please add video/render group to your user then **logout/login** or restart your system : + If it's present in the list, please add video/render group to your user then **logout/login** or restart your system: ``` sudo usermod -aG render From 42b34b704f72b531be40e3f7fe700fd565ed7ca2 Mon Sep 17 00:00:00 2001 From: Ouadie EL FAROUKI Date: Mon, 18 Mar 2024 14:44:33 +0000 Subject: [PATCH 09/16] Update README-sycl.md MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit Co-authored-by: Alberto Cabrera Pérez --- README-sycl.md | 8 ++++---- 1 file changed, 4 insertions(+), 4 deletions(-) diff --git a/README-sycl.md b/README-sycl.md index 9073d34a461d4..cd3a9c3509b4e 100644 --- a/README-sycl.md +++ b/README-sycl.md @@ -18,10 +18,10 @@ **oneAPI** is an open ecosystem and a standard-based specification, supporting multiple architectures including but not limited to intel CPUs, GPUs and FPGAs. The key components of the oneAPI ecosystem include : -- **DPCPP** *(Data Parallel C++)* : The primary oneAPI SYCL implementation, which includes the icpx/icx Compilers. -- **oneAPI Libraries** : A set of highly optimized libraries targeting multiple domains *(e.g. oneMKL - Math Kernel Library)*. -- **oneAPI LevelZero** : A high performance low level interface for fine-grained control over intel iGPUs and dGPUs. 
-- **Nvidia & AMD Plugins** : These are plugins extending oneAPI's DPCPP support to SYCL on Nvidia and AMD GPU targets. +- **DPCPP** *(Data Parallel C++)*: The primary oneAPI SYCL implementation, which includes the icpx/icx Compilers. +- **oneAPI Libraries**: A set of highly optimized libraries targeting multiple domains *(e.g. oneMKL - Math Kernel Library)*. +- **oneAPI LevelZero**: A high performance low level interface for fine-grained control over intel iGPUs and dGPUs. +- **Nvidia & AMD Plugins**: These are plugins extending oneAPI's DPCPP support to SYCL on Nvidia and AMD GPU targets. ### Llama.cpp + SYCL To avoid re-inventing the wheel, this SYCL "backend" follows the same design found in other llama.cpp BLAS-based paths such as * OpenBLAS, cuBLAS, CLBlast etc..*. The oneAPI's [SYCLomatic](https://github.com/oneapi-src/SYCLomatic) open-source migration tool (Commercial release [Intel® DPC++ Compatibility Tool](https://www.intel.com/content/www/us/en/developer/tools/oneapi/dpc-compatibility-tool.html)) was used for this purpose. From 3a4adb69aa0d74778c0c2f92fe25c47dc3412917 Mon Sep 17 00:00:00 2001 From: Ouadie EL FAROUKI Date: Mon, 18 Mar 2024 14:44:58 +0000 Subject: [PATCH 10/16] Update README-sycl.md Co-authored-by: AidanBeltonS <87009434+AidanBeltonS@users.noreply.github.com> --- README-sycl.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/README-sycl.md b/README-sycl.md index cd3a9c3509b4e..7879923e5aba3 100644 --- a/README-sycl.md +++ b/README-sycl.md @@ -197,7 +197,7 @@ Following guidelines/code snippets assume the default installation values. Other Upon a successful installation, SYCL is enabled for the available intel devices, along with relevant libraries such as oneAPI MKL for intel GPUs. 
-- **Bringing support to Nvidia GPUs** +- **Adding support to Nvidia GPUs** **oneAPI** : In order to enable SYCL support on Nvidia GPUs through oneAPI, please install the [Codeplay oneAPI Plugin for Nvidia GPUs](https://developer.codeplay.com/products/oneapi/nvidia/download). User should also make sure the plugin version matches the installed base toolkit one *(previous step)* for a seamless "oneAPI on Nvidia GPU" setup. From 99e80a14597edc7efc57c4fa7edd5bb3dcad8022 Mon Sep 17 00:00:00 2001 From: Ouadie EL FAROUKI Date: Mon, 18 Mar 2024 14:46:22 +0000 Subject: [PATCH 11/16] Update README-sycl.md Co-authored-by: AidanBeltonS <87009434+AidanBeltonS@users.noreply.github.com> --- README-sycl.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/README-sycl.md b/README-sycl.md index 7879923e5aba3..13aa2ce7e5776 100644 --- a/README-sycl.md +++ b/README-sycl.md @@ -24,7 +24,7 @@ - **Nvidia & AMD Plugins**: These are plugins extending oneAPI's DPCPP support to SYCL on Nvidia and AMD GPU targets. ### Llama.cpp + SYCL -To avoid re-inventing the wheel, this SYCL "backend" follows the same design found in other llama.cpp BLAS-based paths such as * OpenBLAS, cuBLAS, CLBlast etc..*. The oneAPI's [SYCLomatic](https://github.com/oneapi-src/SYCLomatic) open-source migration tool (Commercial release [Intel® DPC++ Compatibility Tool](https://www.intel.com/content/www/us/en/developer/tools/oneapi/dpc-compatibility-tool.html)) was used for this purpose. +This SYCL "backend" follows the same design found in other llama.cpp BLAS-based paths such as * OpenBLAS, cuBLAS, CLBlast etc..*. The oneAPI's [SYCLomatic](https://github.com/oneapi-src/SYCLomatic) open-source migration tool (Commercial release [Intel® DPC++ Compatibility Tool](https://www.intel.com/content/www/us/en/developer/tools/oneapi/dpc-compatibility-tool.html)) was used for this purpose. The llama.cpp SYCL backend supports: - Intel GPUs. 
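Editorial aside on the Nvidia-plugin hunks above: once the Codeplay plugin is installed, the CUDA backend should appear in `sycl-ls` as an `ext_oneapi_cuda:gpu` entry, as the README's sample outputs later show. A hedged sketch of an automated check follows — the helper name is invented, and it is run here against an embedded sample listing (device strings are illustrative) rather than a live `sycl-ls`, which needs the full setup:

```shell
# Sketch: scan a `sycl-ls`-style listing for the CUDA backend entry.
# On a real system, pipe actual `sycl-ls` output in instead of the here-doc.
has_sycl_cuda_device() {
    grep -q '\[ext_oneapi_cuda:gpu'
}

# Sample listing modeled on the README's `sycl-ls` output (illustrative only).
has_sycl_cuda_device <<'EOF' && echo "SYCL-CUDA device visible"
[opencl:acc:0] Intel(R) FPGA Emulation Platform for OpenCL(TM), Intel(R) FPGA Emulation Device
[ext_oneapi_cuda:gpu:0] NVIDIA CUDA BACKEND, NVIDIA A100-PCIE-40GB
EOF
```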
From 8d7200c7a3c59595261bcd132ad119ffce895bf2 Mon Sep 17 00:00:00 2001 From: Ouadie EL FAROUKI Date: Mon, 18 Mar 2024 14:47:14 +0000 Subject: [PATCH 12/16] Update README-sycl.md Co-authored-by: AidanBeltonS <87009434+AidanBeltonS@users.noreply.github.com> --- README-sycl.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/README-sycl.md b/README-sycl.md index 13aa2ce7e5776..3996063cf6c96 100644 --- a/README-sycl.md +++ b/README-sycl.md @@ -199,7 +199,7 @@ Upon a successful installation, SYCL is enabled for the available intel devices, - **Adding support to Nvidia GPUs** -**oneAPI** : In order to enable SYCL support on Nvidia GPUs through oneAPI, please install the [Codeplay oneAPI Plugin for Nvidia GPUs](https://developer.codeplay.com/products/oneapi/nvidia/download). User should also make sure the plugin version matches the installed base toolkit one *(previous step)* for a seamless "oneAPI on Nvidia GPU" setup. +**oneAPI** : In order to enable SYCL support on Nvidia GPUs, please install the [Codeplay oneAPI Plugin for Nvidia GPUs](https://developer.codeplay.com/products/oneapi/nvidia/download). User should also make sure the plugin version matches the installed base toolkit one *(previous step)* for a seamless "oneAPI on Nvidia GPU" setup. **oneMKL** : The current oneMKL releases *(shipped with the oneAPI base-toolkit)* does not contain the cuBLAS backend. A build from source of the upstream [oneMKL](https://github.com/oneapi-src/oneMKL) with the *cuBLAS* backend enabled is thus required to run it on Nvidia GPUs. 
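Editorial aside on the oneMKL hunk above: the patch states a from-source oneMKL build with the cuBLAS backend is required, but the hunk truncates before the build commands. For readers following along, the shape of that build is sketched below; the CMake option names are taken from the upstream oneMKL project and may differ between releases, so treat this as a starting point rather than a verbatim recipe:

```shell
# Sketch: build upstream oneMKL with only the cuBLAS BLAS backend enabled.
# Verify the option names against the oneMKL release you check out.
git clone https://github.com/oneapi-src/oneMKL
cd oneMKL
mkdir -p buildWithCublas && cd buildWithCublas
cmake ../ -DCMAKE_CXX_COMPILER=icpx -DCMAKE_C_COMPILER=icx \
    -DENABLE_MKLGPU_BACKEND=OFF -DENABLE_MKLCPU_BACKEND=OFF \
    -DENABLE_CUBLAS_BACKEND=ON -DTARGET_DOMAINS=blas
make
```

Disabling the MKL CPU/GPU backends here is what keeps the native oneAPI MKL "invisible", matching the note elsewhere in this series about not mixing it with the cuBLAS-backed build.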
From 799dd4a2408704d9dc80558640d6938871b47a32 Mon Sep 17 00:00:00 2001 From: Ouadie EL FAROUKI Date: Mon, 18 Mar 2024 14:52:07 +0000 Subject: [PATCH 13/16] Update README-sycl.md Co-authored-by: AidanBeltonS <87009434+AidanBeltonS@users.noreply.github.com> --- README-sycl.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/README-sycl.md b/README-sycl.md index 3996063cf6c96..f088d935c191c 100644 --- a/README-sycl.md +++ b/README-sycl.md @@ -121,7 +121,7 @@ docker run -it --rm -v "$(pwd):/app:Z" --device /dev/dri/renderD128:/dev/dri/ren ``` *Notes :* -- Docker have been tested succefully on native Linux. WSL support has not been verified yet. +- Docker has been tested successfully on native Linux. WSL support has not been verified yet. - You may need to install Intel GPU driver on the **host** machine *(Please refer to the [Linux configuration](#linux) for details)*. ## Linux From 4919efab0b1ee8e64412745ffdcc63dd74494287 Mon Sep 17 00:00:00 2001 From: Ouadie EL FAROUKI Date: Mon, 18 Mar 2024 15:05:04 +0000 Subject: [PATCH 14/16] Update README-sycl.md MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit Co-authored-by: Alberto Cabrera Pérez --- README-sycl.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/README-sycl.md b/README-sycl.md index f088d935c191c..b77d7c6f40176 100644 --- a/README-sycl.md +++ b/README-sycl.md @@ -120,7 +120,7 @@ ls -la /dev/dri docker run -it --rm -v "$(pwd):/app:Z" --device /dev/dri/renderD128:/dev/dri/renderD128 --device /dev/dri/card1:/dev/dri/card1 llama-cpp-sycl -m "/app/models/YOUR_MODEL_FILE" -p "Building a website can be done in 10 simple steps:" -n 400 -e -ngl 33 ``` -*Notes :* +*Notes:* - Docker has been tested successfully on native Linux. WSL support has not been verified yet. - You may need to install Intel GPU driver on the **host** machine *(Please refer to the [Linux configuration](#linux) for details)*. 
From 3b26118b83e4321302c1f5953f15f63ad3b47839 Mon Sep 17 00:00:00 2001 From: Ouadie EL FAROUKI Date: Mon, 18 Mar 2024 15:08:28 +0000 Subject: [PATCH 15/16] Apply suggestions from code review MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit Co-authored-by: Alberto Cabrera Pérez --- README-sycl.md | 36 ++++++++++++++++++------------------ 1 file changed, 18 insertions(+), 18 deletions(-) diff --git a/README-sycl.md b/README-sycl.md index b77d7c6f40176..ca0f18cf899fe 100644 --- a/README-sycl.md +++ b/README-sycl.md @@ -132,18 +132,18 @@ docker run -it --rm -v "$(pwd):/app:Z" --device /dev/dri/renderD128:/dev/dri/ren - **Intel GPU** -Intel data center GPUs drivers installation guide and download page can be found here : [Get intel dGPU Drivers](https://dgpu-docs.intel.com/driver/installation.html#ubuntu-install-steps). +Intel data center GPUs drivers installation guide and download page can be found here: [Get intel dGPU Drivers](https://dgpu-docs.intel.com/driver/installation.html#ubuntu-install-steps). *Note* : for client GPUs *(iGPU & Arc A-Series)*, please refer to the [client iGPU driver installation](https://dgpu-docs.intel.com/driver/client/overview.html). -Once installed, please add user(s) to group: `video`, `render`. +Once installed, add the user(s) to the `video` and `render` groups. ```sh sudo usermod -aG render sudo usermod -aG video ``` -*Note* : logout/re-login for the changes to take effect. +*Note*: logout/re-login for the changes to take effect. Verify installation through `clinfo`: @@ -165,11 +165,11 @@ Platform #0: Intel(R) OpenCL HD Graphics - **Nvidia GPU** In order to target Nvidia GPUs through SYCL, please make sure the CUDA/CUBLAS native requirements *-found [here](README.md#cublas)-* are installed. 
-Installation can be verified by running the following : +Installation can be verified by running the following: ```sh nvidia-smi ``` -Please make sure at least one CUDA device is available, which can be displayed like this *(here an A100-40GB Nvidia GPU)* : +Please make sure at least one CUDA device is available, which can be displayed like this *(here an A100-40GB Nvidia GPU)*: ``` +---------------------------------------------------------------------------------------+ | NVIDIA-SMI 535.54.03 Driver Version: 535.54.03 CUDA Version: 12.2 | @@ -189,7 +189,7 @@ Please make sure at least one CUDA device is available, which can be displayed l - **Base installation** -The base toolkit can be obtained from the official [Intel® oneAPI Base Toolkit ](https://www.intel.com/content/www/us/en/developer/tools/oneapi/base-toolkit.html) page. +The base toolkit can be obtained from the official [Intel® oneAPI Base Toolkit](https://www.intel.com/content/www/us/en/developer/tools/oneapi/base-toolkit.html) page. Please follow the instructions for downloading and installing the Toolkit for Linux, and preferably keep the default installation values unchanged, notably the installation path *(`/opt/intel/oneapi` by default)*. @@ -202,7 +202,7 @@ Upon a successful installation, SYCL is enabled for the available intel devices, **oneAPI** : In order to enable SYCL support on Nvidia GPUs, please install the [Codeplay oneAPI Plugin for Nvidia GPUs](https://developer.codeplay.com/products/oneapi/nvidia/download). User should also make sure the plugin version matches the installed base toolkit one *(previous step)* for a seamless "oneAPI on Nvidia GPU" setup. -**oneMKL** : The current oneMKL releases *(shipped with the oneAPI base-toolkit)* does not contain the cuBLAS backend. A build from source of the upstream [oneMKL](https://github.com/oneapi-src/oneMKL) with the *cuBLAS* backend enabled is thus required to run it on Nvidia GPUs. 
+**oneMKL** : The current oneMKL releases *(shipped with the oneAPI base-toolkit)* do not contain the cuBLAS backend. A build from source of the upstream [oneMKL](https://github.com/oneapi-src/oneMKL) with the *cuBLAS* backend enabled is thus required to run it on Nvidia GPUs. ```sh git clone https://github.com/oneapi-src/oneMKL @@ -322,7 +322,7 @@ Otherwise, you can run the script : *Notes :* -- By default, `mmap` is used to read model file. In some cases, it causes runtime hang issues. Please disable it by passing `--no-mmap` to the `/bin/main` if faced with the issue. +- By default, `mmap` is used to read the model file. In some cases, it causes runtime hang issues. Please disable it by passing `--no-mmap` to the `/bin/main` if faced with the issue. ## Windows @@ -334,11 +334,11 @@ Intel GPU drivers instructions guide and download page can be found here : [Get 2. Install Visual Studio -If you already have a recent version of Microsoft Visual Studio, you can skip this tep. Otherwise, please refer to the official download page for [Microsoft Visual Studio](https://visualstudio.microsoft.com/). +If you already have a recent version of Microsoft Visual Studio, you can skip this step. Otherwise, please refer to the official download page for [Microsoft Visual Studio](https://visualstudio.microsoft.com/). 3. Install Intel® oneAPI Base toolkit -The base toolkit can be obtained from the official [Intel® oneAPI Base Toolkit ](https://www.intel.com/content/www/us/en/developer/tools/oneapi/base-toolkit.html) page. +The base toolkit can be obtained from the official [Intel® oneAPI Base Toolkit](https://www.intel.com/content/www/us/en/developer/tools/oneapi/base-toolkit.html) page. Please follow the instructions for downloading and installing the Toolkit for Windows, and preferably keep the default installation values unchanged, notably the installation path *(`C:\Program Files (x86)\Intel\oneAPI` by default)*. @@ -348,20 +348,20 @@ b. 
 Enable oneAPI running environment:

 - Type "oneAPI" in the search bar, then open the `Intel oneAPI command prompt for Intel 64 for Visual Studio 2022` App.

-- On the command prompt, enable the runtime environment with the following :
+- On the command prompt, enable the runtime environment with the following:

 ```
 "C:\Program Files (x86)\Intel\oneAPI\setvars.bat" intel64
 ```

 c. Verify installation

-In the oneAPI command line, run the following to print the available SYCL devices :
+In the oneAPI command line, run the following to print the available SYCL devices:

 ```
 sycl-ls
 ```

-There should be one or more *level-zero* GPU devices displayed as **[ext_oneapi_level_zero:gpu]**. Below is example of such output detecting an *intel Iris Xe* GPU as a Level-zero SYCL device :
+There should be one or more *level-zero* GPU devices displayed as **[ext_oneapi_level_zero:gpu]**. Below is an example of such output detecting an *intel Iris Xe* GPU as a Level-zero SYCL device:

 Output (example):
 ```
@@ -385,7 +385,7 @@ b. Download & install mingw-w64 make for Windows provided by w64devkit

 ### II. Build llama.cpp

-On the oneAPI command line window, step into the llama.cpp main directory and run the following :
+On the oneAPI command line window, step into the llama.cpp main directory and run the following:

 ```
 mkdir -p build
@@ -414,20 +414,20 @@ You can refer to the general [*Prepare and Quantize*](README#prepare-and-quantiz

 2. Enable oneAPI running environment

-On the oneAPI command line window, run the following and step into the llama.cpp directory :
+On the oneAPI command line window, run the following and step into the llama.cpp directory:

 ```
 "C:\Program Files (x86)\Intel\oneAPI\setvars.bat" intel64
 ```

 3.
 List devices information

-Similar to the native `sycl-ls`, available SYCL devices can be queried as follow :
+Similar to the native `sycl-ls`, available SYCL devices can be queried as follows:

 ```
 build\bin\ls-sycl-device.exe
 ```

-The output of this command in a system with 1 *intel CPU* and 1 *intel GPU* would look like the following :
+The output of this command in a system with 1 *intel CPU* and 1 *intel GPU* would look like the following:
 ```
 found 4 SYCL devices:
 Device 0: Intel(R) Arc(TM) A770 Graphics, compute capability 1.3,
@@ -447,7 +445,7 @@ found 4 SYCL devices:

 4. Launch inference

-Set device ID=0 with `set GGML_SYCL_DEVICE=0` to target the Level-zero intel GPU and run the main :
+Set device ID=0 with `set GGML_SYCL_DEVICE=0` to target the Level-zero intel GPU and run the main:

 ```
 set GGML_SYCL_DEVICE=0

From d8ee559e1b4a84c74cba14cfafdbc431bd827085 Mon Sep 17 00:00:00 2001
From: OuadiElfarouki
Date: Mon, 18 Mar 2024 15:14:11 +0000
Subject: [PATCH 16/16] additional NIT fixes
---
 README-sycl.md | 50 ++++++++++++++++++++++++--------------------------
 1 file changed, 24 insertions(+), 26 deletions(-)

diff --git a/README-sycl.md b/README-sycl.md
index ca0f18cf899fe..42f40103f59e0 100644
--- a/README-sycl.md
+++ b/README-sycl.md
@@ -16,7 +16,7 @@
 **SYCL** is a high-level parallel programming model designed to improve developers productivity writing code across various hardware accelerators such as CPUs, GPUs, and FPGAs. It is a single-source language designed for heterogeneous computing and based on standard C++17.

-**oneAPI** is an open ecosystem and a standard-based specification, supporting multiple architectures including but not limited to intel CPUs, GPUs and FPGAs.
The key components of the oneAPI ecosystem include:

 - **DPCPP** *(Data Parallel C++)*: The primary oneAPI SYCL implementation, which includes the icpx/icx Compilers.
 - **oneAPI Libraries**: A set of highly optimized libraries targeting multiple domains *(e.g. oneMKL - Math Kernel Library)*.
@@ -30,9 +30,9 @@ The llama.cpp SYCL backend supports:
 - Intel GPUs.
 - Nvidia GPUs.

-*Upcoming support: AMD GPUs*.
+*Upcoming support: AMD GPUs*.

-For **Intel CPUs**, it is recommended to the use llama.cpp for [x86](README.md#intel-onemkl) approach.
+When targeting **Intel CPUs**, it is recommended to use the llama.cpp [x86](README.md#intel-onemkl) approach.

 ## News
@@ -61,7 +61,7 @@ For **Intel CPUs**, it is recommended to the use llama.cpp for [x86](README.md#i

 ### Intel GPUs

-The BLAS acceleration oneAPI Math Kernel Library which comes with the oneAPI base-toolkit natively supports intel GPUs. In order to make it "visible" while building/running llama.cpp, simply run the following:
+The oneAPI Math Kernel Library, which the oneAPI base-toolkit includes, supports intel GPUs. In order to make it "visible", simply run the following:
 ```sh
 source /opt/intel/oneapi/setvars.sh
 ```
@@ -76,9 +76,9 @@ source /opt/intel/oneapi/setvars.sh
 |Intel built-in Arc GPU| Support| built-in Arc GPU in Meteor Lake|
 |Intel iGPU| Support| iGPU in i5-1250P, i7-1260P, i7-1165G7|

-*Notes :*
+*Notes:*

-- Device memory can be a limitation when running a large model on an intel GPU. The loaded model size, *`llm_load_tensors : buffer_size`*, is displayed in the log when running `./bin/main`
+- Device memory can be a limitation when running a large model on an intel GPU. The loaded model size, *`llm_load_tensors: buffer_size`*, is displayed in the log when running `./bin/main`

 - Please make sure the GPU shared memory from the host is large enough to account for the model's size. For e.g. the *llama-2-7b.Q4_0* requires at least 8.0GB for integrated GPUs and 4.0GB for discrete GPUs.
@@ -87,8 +87,6 @@ source /opt/intel/oneapi/setvars.sh ### Nvidia GPUs The BLAS acceleration on Nvidia GPUs through oneAPI can be obtained using the Nvidia plugins for oneAPI and the cuBLAS backend of the upstream oneMKL library. Details and instructions on how to setup the runtime and library can be found in [this section](#i-setup-environment) -Math Kernel Library which comes with the oneAPI base-toolkit natively supports intel GPUs. In order to make it "visible" while building/running llama.cpp, simply run the following : - - **Tested devices** |Nvidia GPU| Status | Verified Model| @@ -96,7 +94,7 @@ Math Kernel Library which comes with the oneAPI base-toolkit natively supports i |Ampere Series| Support| A100, A4000| |Ampere Series *(Mobile)*| Support| RTX 40 Series -*Notes :* +*Notes:* - Support for Nvidia targets through oneAPI is currently limited to Linux platforms. - Please make sure the native oneAPI MKL *(dedicated to intel CPUs and GPUs)* is not "visible" at this stage to properly setup and use the built-from-source oneMKL with cuBLAS backend in llama.cpp for Nvidia GPUs. @@ -109,7 +107,7 @@ The docker build option is currently limited to *intel GPU* targets. docker build -t llama-cpp-sycl --build-arg="LLAMA_SYCL_F16=[OFF|ON]" -f .devops/main-intel.Dockerfile . ``` -*Note* : you can also use the `.devops/server-intel.Dockerfile`, which builds the *"server"* alternative. +*Note*: you can also use the `.devops/server-intel.Dockerfile`, which builds the *"server"* alternative. ### Run container @@ -134,7 +132,7 @@ docker run -it --rm -v "$(pwd):/app:Z" --device /dev/dri/renderD128:/dev/dri/ren Intel data center GPUs drivers installation guide and download page can be found here: [Get intel dGPU Drivers](https://dgpu-docs.intel.com/driver/installation.html#ubuntu-install-steps). -*Note* : for client GPUs *(iGPU & Arc A-Series)*, please refer to the [client iGPU driver installation](https://dgpu-docs.intel.com/driver/client/overview.html). 
+*Note*: for client GPUs *(iGPU & Arc A-Series)*, please refer to the [client iGPU driver installation](https://dgpu-docs.intel.com/driver/client/overview.html). Once installed, add the user(s) to the `video` and `render` groups. @@ -199,10 +197,10 @@ Upon a successful installation, SYCL is enabled for the available intel devices, - **Adding support to Nvidia GPUs** -**oneAPI** : In order to enable SYCL support on Nvidia GPUs, please install the [Codeplay oneAPI Plugin for Nvidia GPUs](https://developer.codeplay.com/products/oneapi/nvidia/download). User should also make sure the plugin version matches the installed base toolkit one *(previous step)* for a seamless "oneAPI on Nvidia GPU" setup. +**oneAPI**: In order to enable SYCL support on Nvidia GPUs, please install the [Codeplay oneAPI Plugin for Nvidia GPUs](https://developer.codeplay.com/products/oneapi/nvidia/download). User should also make sure the plugin version matches the installed base toolkit one *(previous step)* for a seamless "oneAPI on Nvidia GPU" setup. -**oneMKL** : The current oneMKL releases *(shipped with the oneAPI base-toolkit)* do not contain the cuBLAS backend. A build from source of the upstream [oneMKL](https://github.com/oneapi-src/oneMKL) with the *cuBLAS* backend enabled is thus required to run it on Nvidia GPUs. +**oneMKL**: The current oneMKL releases *(shipped with the oneAPI base-toolkit)* do not contain the cuBLAS backend. A build from source of the upstream [oneMKL](https://github.com/oneapi-src/oneMKL) with the *cuBLAS* backend enabled is thus required to run it on Nvidia GPUs. ```sh git clone https://github.com/oneapi-src/oneMKL @@ -223,7 +221,7 @@ sycl-ls - **Intel GPU** -When targeting an intel GPU, the user should expect one or more level-zero devices among the available SYCL devices. 
Please make sure that at least one GPU is present, for instance [`ext_oneapi_level_zero:gpu:0`] in the sample output below:

 ```
 [opencl:acc:0] Intel(R) FPGA Emulation Platform for OpenCL(TM), Intel(R) FPGA Emulation Device OpenCL 1.2 [2023.16.10.0.17_160000]
@@ -234,7 +232,7 @@ When targeting an intel GPU, the user should expect one or more level-zero devic

 - **Nvidia GPU**

-Similarly, users targeting Nvidia GPUs should expect at least one SYCL-CUDA device [`ext_oneapi_cuda:gpu`] as below:
 ```
 [opencl:acc:0] Intel(R) FPGA Emulation Platform for OpenCL(TM), Intel(R) FPGA Emulation Device OpenCL 1.2 [2023.16.12.0.12_195853.xmain-hotfix]
 [opencl:cpu:1] Intel(R) OpenCL, Intel(R) Xeon(R) Gold 6326 CPU @ 2.90GHz OpenCL 3.0 (Build 0) [2023.16.12.0.12_195853.xmain-hotfix]
@@ -266,7 +264,7 @@ mkdir -p build && cd build
 cmake .. -DLLAMA_SYCL=ON -DLLAMA_SYCL_TARGET=NVIDIA -DCMAKE_C_COMPILER=icx -DCMAKE_CXX_COMPILER=icpx
 ```

-*Notes :*
+*Notes:*
 - The **F32** build is enabled by default, but the **F16** yields better performance for long-prompt inference.

 ### III. Run the inference
@@ -283,12 +281,12 @@ source /opt/intel/oneapi/setvars.sh

 3.
 List devices information

-Similar to the native `sycl-ls`, available SYCL devices can be queried as follow :
+Similar to the native `sycl-ls`, available SYCL devices can be queried as follows:

 ```sh
 ./build/bin/ls-sycl-device
 ```

-A example of such log in a system with 1 *intel CPU* and 1 *intel GPU* can look like the following :
+An example of such log in a system with 1 *intel CPU* and 1 *intel GPU* can look like the following:
 ```
 found 4 SYCL devices:
 Device 0: Intel(R) Arc(TM) A770 Graphics, compute capability 1.3,
@@ -314,13 +312,13 @@ For instance, in order to target the SYCL device with *ID*=0 *(log from previous
 GGML_SYCL_DEVICE=0 ./build/bin/main -m models/llama-2-7b.Q4_0.gguf -p "Building a website can be done in 10 simple steps:" -n 400 -e -ngl 33
 ```

-Otherwise, you can run the script :
+Otherwise, you can run the script:

 ```sh
 ./examples/sycl/run_llama2.sh
 ```

-*Notes :*
+*Notes:*

 - By default, `mmap` is used to read the model file. In some cases, it causes runtime hang issues. Please disable it by passing `--no-mmap` to the `/bin/main` if faced with the issue.

@@ -330,7 +328,7 @@

 1. Install GPU driver

-Intel GPU drivers instructions guide and download page can be found here : [Get intel GPU Drivers](https://www.intel.com/content/www/us/en/products/docs/discrete-gpus/arc/software/drivers.html).
+The Intel GPU driver installation guide and download page can be found here: [Get intel GPU Drivers](https://www.intel.com/content/www/us/en/products/docs/discrete-gpus/arc/software/drivers.html).

 2. Install Visual Studio
@@ -397,12 +395,12 @@ cmake -G "MinGW Makefiles" ..
-DLLAMA_SYCL=ON -DCMAKE_C_COMPILER=icx -DCMAKE_CX
 make
 ```

-Otherwise, run the `win-build-sycl.bat` wrapper which encapsulates the former instructions :
+Otherwise, run the `win-build-sycl.bat` wrapper which encapsulates the former instructions:

 ```sh
 .\examples\sycl\win-build-sycl.bat
 ```

-*Notes :*
+*Notes:*

 - By default, calling `make` will build all target binary files. In case of a minimal experimental setup, the user can build the inference executable only through `make main`.

@@ -490,7 +488,7 @@ Note:

   llama.cpp uses *mmap* as the default mode for reading the model file and copying it to the GPU. In some systems, `memcpy` might behave abnormally and therefore hang.

-  - **Solution** : add `--no-mmap` or `--mmap 0` flag to the `main` executable.
+  - **Solution**: add `--no-mmap` or `--mmap 0` flag to the `main` executable.

 - `Split-mode:[row]` is not supported.

@@ -498,8 +496,8 @@ Note:

 - Error: `error while loading shared libraries: libsycl.so.7: cannot open shared object file: No such file or directory`.

-  - Potential cause : Unavailable oneAPI installation or invisible ENV variables.
-  - Solution : Install *oneAPI base toolkit* and enable its ENV through: `source /opt/intel/oneapi/setvars.sh`.
+  - Potential cause: Unavailable oneAPI installation or unset ENV variables.
+  - Solution: Install *oneAPI base toolkit* and enable its ENV through: `source /opt/intel/oneapi/setvars.sh`.

 - General compiler error: