Add Fabric Manager Support #3873

Merged 1 commit into bottlerocket-os:develop on Apr 13, 2024

Conversation

@monirul (Contributor) commented Apr 5, 2024

Issue number: #3278
Closes #3278

Description of changes:
Bottlerocket currently lacks support for Fabric Manager, which is required to use the GPUs in p4 and p5 instance types. This pull request adds Fabric Manager support, improving Bottlerocket's ability to manage GPU resources and enabling customers to use Bottlerocket as a container host OS on p4 and p5 instances.
Fabric Manager support is added to the kmod-5.15 and kmod-6.1 kernel module packages, so the k8s-1.24+ variants will pick up the changes. However, kmod-5.10 uses the 470 legacy driver, which is not compatible with the newer GPUs in p4 and p5 instances. The Fabric Manager changes have therefore not been applied to kmod-5.10, so the k8s-1.23 variant will not include Fabric Manager support.

The change is based on bcressey@75eab67.
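
For reference, a quick way to confirm that the fabric manager came up on a p4/p5 node before scheduling GPU workloads is sketched below. This is only an illustrative snippet, not part of the change: the service name matches the nvidia-fabricmanager.service unit added in this PR, the `nvidia-smi` query is the one used in the test notes further down, and it assumes a root shell on the host (for example via the admin container).

```sh
# Verify the fabric manager service started cleanly (unit name as packaged in this PR).
systemctl is-active nvidia-fabricmanager

# Look at recent service logs for startup errors.
journalctl -u nvidia-fabricmanager --no-pager | tail -n 20

# Query the GPU fabric state reported by the driver (same query used in the test notes below).
nvidia-smi -q | grep -A 2 'Fabric'
```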

Testing done:
Testing Summary:

  • K8S-1.29 (kernel-6.1) & K8S-1.27 (kernel-5.15):
    • The p5, p3, g5, and g4dn instances operated as expected.
    • The p2 instance failed because the NVIDIA 535 driver does not support the NVIDIA Tesla K80 GPU. Irrespective of the Fabric Manager changes, p2 instances will not work on the K8S-1.24 through K8S-1.29 variants, since those variants use the 535 driver.
| # | K8S version | Instance type | Successfully booted? | NVIDIA smoke test | Comments |
|---|-------------|---------------|----------------------|-------------------|----------|
| 1 | 1.29 | p5 | Yes | Success | -- |
| 2 | 1.29 | p4 | Not tested | N/A | Not able to test due to resource constraints |
| 3 | 1.29 | p3 | Yes | Success | Warning logged during boot: `request to query NVSwitch device information from NVSwitch driver failed with error: WARNING Nothing to do [NV_WARN_NOTHING_TO_DO]` |
| 4 | 1.29 | p2 | No | N/A | Error: `could not insert 'nvidia_modeset': No such device`. Reason: the NVIDIA Tesla K80 is supported only by the NVIDIA 470.xx legacy drivers; this Bottlerocket variant uses the 535 driver. |
| 5 | 1.29 | g5 | Yes | Success | Same NVSwitch warning during boot as row 3 |
| 6 | 1.29 | g4dn | Yes | Success | Same NVSwitch warning during boot as row 3 |
| 7 | 1.27 | p5 | Yes | Success | -- |
| 8 | 1.27 | p4 | Not tested | N/A | Not able to test due to resource constraints |
| 9 | 1.27 | p3 | Yes | Success | Same NVSwitch warning during boot as row 3 |
| 10 | 1.27 | p2 | No | N/A | Same `nvidia_modeset` error and reason as row 4 |
| 11 | 1.27 | g5 | Yes | Success | Same NVSwitch warning during boot as row 3 |
| 12 | 1.27 | g4dn | Yes | Success | Same NVSwitch warning during boot as row 3 |

Terms of contribution:

By submitting this pull request, I agree that this contribution is dual-licensed under the terms of both the Apache License, version 2.0, and the MIT license.

@monirul changed the title from "Add Fabric Manager Support to Bottlerocket" to "Add Fabric Manager Support" on Apr 8, 2024
Resolved review threads on: packages/kmod-6.1-nvidia/nvidia-tmpfiles.conf.in, packages/kmod-6.1-nvidia/nvidia-fabricmanager.service, packages/kmod-6.1-nvidia/nvidia-fabricmanager.cfg, and packages/kmod-6.1-nvidia/kmod-6.1-nvidia.spec.
@monirul force-pushed the fabric-manager-changes branch 2 times, most recently from 5a837f8 to 2e576e2, on April 13, 2024 00:08
@monirul merged commit 32e6479 into bottlerocket-os:develop on Apr 13, 2024
35 checks passed
@monirul deleted the fabric-manager-changes branch on April 13, 2024 22:00
@ginglis13 mentioned this pull request on May 3, 2024
@monirul (Contributor, Author) commented May 16, 2024

I have tested the changes with p4d instances. Here are the test details:

  • K8S version: 1.29
  • Instance Type: p4d.24xlarge
  • Successfully Booted?: Yes.
  • Status of the Fabric manager: Running
  • Run Nvidia Smoke Test: Success
  • Comment: The `nvidia-smi -q | grep -A 2 'Fabric'` command shows `State: N/A`, `Status: N/A` even though the fabric manager is running successfully. The NVIDIA Fabric Manager log did not print anything suspicious and the pods are running as expected, so this could be an issue in the nvidia-smi command itself (see the quick check sketched below).
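
For reference, the checks behind the notes above as a short snippet. The `nvidia-smi` query is the one quoted in the comment, and the `systemctl` command is the one whose output is pasted below; the `--no-pager` flag is only added here for non-interactive use.

```sh
# Fabric state query referenced in the comment above.
# Observed in this test: "State: N/A" and "Status: N/A" even though the service was running.
nvidia-smi -q | grep -A 2 'Fabric'

# Cross-check that the fabric manager service itself is healthy
# (the output of this command is shown below).
systemctl status nvidia-fabricmanager --no-pager
```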

Output of systemctl status nvidia-fabricmanager

nvidia-fabricmanager.service - NVIDIA fabric manager service
     Loaded: loaded (/x86_64-bottlerocket-linux-gnu/sys-root/usr/lib/systemd/system/nvidia-fabricmanager.service; enabled; preset: enabled)
     Active: active (running) since Thu 2024-05-16 23:22:02 UTC; 7s ago
   Main PID: 128549 (nv-fabricmanage)
      Tasks: 18 (limit: 629145)
     Memory: 11.8M
        CPU: 1.065s
     CGroup: /system.slice/nvidia-fabricmanager.service
             └─128549 /usr/libexec/nvidia/tesla/bin/nv-fabricmanager -c /etc/nvidia/fabricmanager.cfg

Output of the NVIDIA smoke test:
=========================================
  Running sample UnifiedMemoryPerf
=========================================

GPU Device 0: "Ampere" with compute capability 8.0

Running ........................................................

Overall Time For matrixMultiplyPerf 

Printing Average of 20 measurements in (ms)
Size_KB  UMhint UMhntAs  UMeasy   0Copy MemCopy CpAsync CpHpglk CpPglAs
4         0.309   0.320   0.389   0.022   0.050   0.035   0.056   0.048
16        0.338   0.314   0.671   0.040   0.074   0.055   0.080   0.061
64        0.438   0.440   0.900   0.104   0.163   0.146   0.127   0.114
256       0.872   0.850   1.734   0.538   0.504   0.465   0.360   0.342
1024      2.423   2.255   3.276   3.580   1.878   1.763   1.338   1.325
4096      8.307   7.825  11.491  21.906   6.653   6.572   5.275   5.246
16384    34.095  33.061  47.203 183.046  33.185  33.145  27.340  27.275

NOTE: The CUDA Samples are not meant for performance measurements. Results may vary when GPU Boost is enabled.

=========================================
  Running sample deviceQuery
=========================================

./deviceQuery Starting...

 CUDA Device Query (Runtime API) version (CUDART static linking)

Detected 8 CUDA Capable device(s)

Device 0: "NVIDIA A100-SXM4-40GB"
  CUDA Driver Version / Runtime Version          12.2 / 11.4
  CUDA Capability Major/Minor version number:    8.0
  Total amount of global memory:                 40339 MBytes (42298834944 bytes)
  (108) Multiprocessors, (064) CUDA Cores/MP:    6912 CUDA Cores
  GPU Max Clock rate:                            1410 MHz (1.41 GHz)
  Memory Clock rate:                             1215 Mhz
  Memory Bus Width:                              5120-bit
  L2 Cache Size:                                 41943040 bytes
  Maximum Texture Dimension Size (x,y,z)         1D=(131072), 2D=(131072, 65536), 3D=(16384, 16384, 16384)
  Maximum Layered 1D Texture Size, (num) layers  1D=(32768), 2048 layers
  Maximum Layered 2D Texture Size, (num) layers  2D=(32768, 32768), 2048 layers
  Total amount of constant memory:               65536 bytes
  Total amount of shared memory per block:       49152 bytes
  Total shared memory per multiprocessor:        167936 bytes
  Total number of registers available per block: 65536
  Warp size:                                     32
  Maximum number of threads per multiprocessor:  2048
  Maximum number of threads per block:           1024
  Max dimension size of a thread block (x,y,z): (1024, 1024, 64)
  Max dimension size of a grid size    (x,y,z): (2147483647, 65535, 65535)
  Maximum memory pitch:                          2147483647 bytes
  Texture alignment:                             512 bytes
  Concurrent copy and kernel execution:          Yes with 3 copy engine(s)
  Run time limit on kernels:                     No
  Integrated GPU sharing Host Memory:            No
  Support host page-locked memory mapping:       Yes
  Alignment requirement for Surfaces:            Yes
  Device has ECC support:                        Enabled
  Device supports Unified Addressing (UVA):      Yes
  Device supports Managed Memory:                Yes
  Device supports Compute Preemption:            Yes
  Supports Cooperative Kernel Launch:            Yes
  Supports MultiDevice Co-op Kernel Launch:      Yes
  Device PCI Domain ID / Bus ID / location ID:   0 / 16 / 28
  Compute Mode:
     < Default (multiple host threads can use ::cudaSetDevice() with device simultaneously) >

Device 1: "NVIDIA A100-SXM4-40GB"
  CUDA Driver Version / Runtime Version          12.2 / 11.4
  CUDA Capability Major/Minor version number:    8.0
  Total amount of global memory:                 40339 MBytes (42298834944 bytes)
  (108) Multiprocessors, (064) CUDA Cores/MP:    6912 CUDA Cores
  GPU Max Clock rate:                            1410 MHz (1.41 GHz)
  Memory Clock rate:                             1215 Mhz
  Memory Bus Width:                              5120-bit
  L2 Cache Size:                                 41943040 bytes
  Maximum Texture Dimension Size (x,y,z)         1D=(131072), 2D=(131072, 65536), 3D=(16384, 16384, 16384)
  Maximum Layered 1D Texture Size, (num) layers  1D=(32768), 2048 layers
  Maximum Layered 2D Texture Size, (num) layers  2D=(32768, 32768), 2048 layers
  Total amount of constant memory:               65536 bytes
  Total amount of shared memory per block:       49152 bytes
  Total shared memory per multiprocessor:        167936 bytes
  Total number of registers available per block: 65536
  Warp size:                                     32
  Maximum number of threads per multiprocessor:  2048
  Maximum number of threads per block:           1024
  Max dimension size of a thread block (x,y,z): (1024, 1024, 64)
  Max dimension size of a grid size    (x,y,z): (2147483647, 65535, 65535)
  Maximum memory pitch:                          2147483647 bytes
  Texture alignment:                             512 bytes
  Concurrent copy and kernel execution:          Yes with 3 copy engine(s)
  Run time limit on kernels:                     No
  Integrated GPU sharing Host Memory:            No
  Support host page-locked memory mapping:       Yes
  Alignment requirement for Surfaces:            Yes
  Device has ECC support:                        Enabled
  Device supports Unified Addressing (UVA):      Yes
  Device supports Managed Memory:                Yes
  Device supports Compute Preemption:            Yes
  Supports Cooperative Kernel Launch:            Yes
  Supports MultiDevice Co-op Kernel Launch:      Yes
  Device PCI Domain ID / Bus ID / location ID:   0 / 16 / 29
  Compute Mode:
     < Default (multiple host threads can use ::cudaSetDevice() with device simultaneously) >

Device 2: "NVIDIA A100-SXM4-40GB"
  CUDA Driver Version / Runtime Version          12.2 / 11.4
  CUDA Capability Major/Minor version number:    8.0
  Total amount of global memory:                 40339 MBytes (42298834944 bytes)
  (108) Multiprocessors, (064) CUDA Cores/MP:    6912 CUDA Cores
  GPU Max Clock rate:                            1410 MHz (1.41 GHz)
  Memory Clock rate:                             1215 Mhz
  Memory Bus Width:                              5120-bit
  L2 Cache Size:                                 41943040 bytes
  Maximum Texture Dimension Size (x,y,z)         1D=(131072), 2D=(131072, 65536), 3D=(16384, 16384, 16384)
  Maximum Layered 1D Texture Size, (num) layers  1D=(32768), 2048 layers
  Maximum Layered 2D Texture Size, (num) layers  2D=(32768, 32768), 2048 layers
  Total amount of constant memory:               65536 bytes
  Total amount of shared memory per block:       49152 bytes
  Total shared memory per multiprocessor:        167936 bytes
  Total number of registers available per block: 65536
  Warp size:                                     32
  Maximum number of threads per multiprocessor:  2048
  Maximum number of threads per block:           1024
  Max dimension size of a thread block (x,y,z): (1024, 1024, 64)
  Max dimension size of a grid size    (x,y,z): (2147483647, 65535, 65535)
  Maximum memory pitch:                          2147483647 bytes
  Texture alignment:                             512 bytes
  Concurrent copy and kernel execution:          Yes with 3 copy engine(s)
  Run time limit on kernels:                     No
  Integrated GPU sharing Host Memory:            No
  Support host page-locked memory mapping:       Yes
  Alignment requirement for Surfaces:            Yes
  Device has ECC support:                        Enabled
  Device supports Unified Addressing (UVA):      Yes
  Device supports Managed Memory:                Yes
  Device supports Compute Preemption:            Yes
  Supports Cooperative Kernel Launch:            Yes
  Supports MultiDevice Co-op Kernel Launch:      Yes
  Device PCI Domain ID / Bus ID / location ID:   0 / 32 / 28
  Compute Mode:
     < Default (multiple host threads can use ::cudaSetDevice() with device simultaneously) >

Device 3: "NVIDIA A100-SXM4-40GB"
  CUDA Driver Version / Runtime Version          12.2 / 11.4
  CUDA Capability Major/Minor version number:    8.0
  Total amount of global memory:                 40339 MBytes (42298834944 bytes)
  (108) Multiprocessors, (064) CUDA Cores/MP:    6912 CUDA Cores
  GPU Max Clock rate:                            1410 MHz (1.41 GHz)
  Memory Clock rate:                             1215 Mhz
  Memory Bus Width:                              5120-bit
  L2 Cache Size:                                 41943040 bytes
  Maximum Texture Dimension Size (x,y,z)         1D=(131072), 2D=(131072, 65536), 3D=(16384, 16384, 16384)
  Maximum Layered 1D Texture Size, (num) layers  1D=(32768), 2048 layers
  Maximum Layered 2D Texture Size, (num) layers  2D=(32768, 32768), 2048 layers
  Total amount of constant memory:               65536 bytes
  Total amount of shared memory per block:       49152 bytes
  Total shared memory per multiprocessor:        167936 bytes
  Total number of registers available per block: 65536
  Warp size:                                     32
  Maximum number of threads per multiprocessor:  2048
  Maximum number of threads per block:           1024
  Max dimension size of a thread block (x,y,z): (1024, 1024, 64)
  Max dimension size of a grid size    (x,y,z): (2147483647, 65535, 65535)
  Maximum memory pitch:                          2147483647 bytes
  Texture alignment:                             512 bytes
  Concurrent copy and kernel execution:          Yes with 3 copy engine(s)
  Run time limit on kernels:                     No
  Integrated GPU sharing Host Memory:            No
  Support host page-locked memory mapping:       Yes
  Alignment requirement for Surfaces:            Yes
  Device has ECC support:                        Enabled
  Device supports Unified Addressing (UVA):      Yes
  Device supports Managed Memory:                Yes
  Device supports Compute Preemption:            Yes
  Supports Cooperative Kernel Launch:            Yes
  Supports MultiDevice Co-op Kernel Launch:      Yes
  Device PCI Domain ID / Bus ID / location ID:   0 / 32 / 29
  Compute Mode:
     < Default (multiple host threads can use ::cudaSetDevice() with device simultaneously) >

Device 4: "NVIDIA A100-SXM4-40GB"
  CUDA Driver Version / Runtime Version          12.2 / 11.4
  CUDA Capability Major/Minor version number:    8.0
  Total amount of global memory:                 40339 MBytes (42298834944 bytes)
  (108) Multiprocessors, (064) CUDA Cores/MP:    6912 CUDA Cores
  GPU Max Clock rate:                            1410 MHz (1.41 GHz)
  Memory Clock rate:                             1215 Mhz
  Memory Bus Width:                              5120-bit
  L2 Cache Size:                                 41943040 bytes
  Maximum Texture Dimension Size (x,y,z)         1D=(131072), 2D=(131072, 65536), 3D=(16384, 16384, 16384)
  Maximum Layered 1D Texture Size, (num) layers  1D=(32768), 2048 layers
  Maximum Layered 2D Texture Size, (num) layers  2D=(32768, 32768), 2048 layers
  Total amount of constant memory:               65536 bytes
  Total amount of shared memory per block:       49152 bytes
  Total shared memory per multiprocessor:        167936 bytes
  Total number of registers available per block: 65536
  Warp size:                                     32
  Maximum number of threads per multiprocessor:  2048
  Maximum number of threads per block:           1024
  Max dimension size of a thread block (x,y,z): (1024, 1024, 64)
  Max dimension size of a grid size    (x,y,z): (2147483647, 65535, 65535)
  Maximum memory pitch:                          2147483647 bytes
  Texture alignment:                             512 bytes
  Concurrent copy and kernel execution:          Yes with 3 copy engine(s)
  Run time limit on kernels:                     No
  Integrated GPU sharing Host Memory:            No
  Support host page-locked memory mapping:       Yes
  Alignment requirement for Surfaces:            Yes
  Device has ECC support:                        Enabled
  Device supports Unified Addressing (UVA):      Yes
  Device supports Managed Memory:                Yes
  Device supports Compute Preemption:            Yes
  Supports Cooperative Kernel Launch:            Yes
  Supports MultiDevice Co-op Kernel Launch:      Yes
  Device PCI Domain ID / Bus ID / location ID:   0 / 144 / 28
  Compute Mode:
     < Default (multiple host threads can use ::cudaSetDevice() with device simultaneously) >

Device 5: "NVIDIA A100-SXM4-40GB"
  CUDA Driver Version / Runtime Version          12.2 / 11.4
  CUDA Capability Major/Minor version number:    8.0
  Total amount of global memory:                 40339 MBytes (42298834944 bytes)
  (108) Multiprocessors, (064) CUDA Cores/MP:    6912 CUDA Cores
  GPU Max Clock rate:                            1410 MHz (1.41 GHz)
  Memory Clock rate:                             1215 Mhz
  Memory Bus Width:                              5120-bit
  L2 Cache Size:                                 41943040 bytes
  Maximum Texture Dimension Size (x,y,z)         1D=(131072), 2D=(131072, 65536), 3D=(16384, 16384, 16384)
  Maximum Layered 1D Texture Size, (num) layers  1D=(32768), 2048 layers
  Maximum Layered 2D Texture Size, (num) layers  2D=(32768, 32768), 2048 layers
  Total amount of constant memory:               65536 bytes
  Total amount of shared memory per block:       49152 bytes
  Total shared memory per multiprocessor:        167936 bytes
  Total number of registers available per block: 65536
  Warp size:                                     32
  Maximum number of threads per multiprocessor:  2048
  Maximum number of threads per block:           1024
  Max dimension size of a thread block (x,y,z): (1024, 1024, 64)
  Max dimension size of a grid size    (x,y,z): (2147483647, 65535, 65535)
  Maximum memory pitch:                          2147483647 bytes
  Texture alignment:                             512 bytes
  Concurrent copy and kernel execution:          Yes with 3 copy engine(s)
  Run time limit on kernels:                     No
  Integrated GPU sharing Host Memory:            No
  Support host page-locked memory mapping:       Yes
  Alignment requirement for Surfaces:            Yes
  Device has ECC support:                        Enabled
  Device supports Unified Addressing (UVA):      Yes
  Device supports Managed Memory:                Yes
  Device supports Compute Preemption:            Yes
  Supports Cooperative Kernel Launch:            Yes
  Supports MultiDevice Co-op Kernel Launch:      Yes
  Device PCI Domain ID / Bus ID / location ID:   0 / 144 / 29
  Compute Mode:
     < Default (multiple host threads can use ::cudaSetDevice() with device simultaneously) >

Device 6: "NVIDIA A100-SXM4-40GB"
  CUDA Driver Version / Runtime Version          12.2 / 11.4
  CUDA Capability Major/Minor version number:    8.0
  Total amount of global memory:                 40339 MBytes (42298834944 bytes)
  (108) Multiprocessors, (064) CUDA Cores/MP:    6912 CUDA Cores
  GPU Max Clock rate:                            1410 MHz (1.41 GHz)
  Memory Clock rate:                             1215 Mhz
  Memory Bus Width:                              5120-bit
  L2 Cache Size:                                 41943040 bytes
  Maximum Texture Dimension Size (x,y,z)         1D=(131072), 2D=(131072, 65536), 3D=(16384, 16384, 16384)
  Maximum Layered 1D Texture Size, (num) layers  1D=(32768), 2048 layers
  Maximum Layered 2D Texture Size, (num) layers  2D=(32768, 32768), 2048 layers
  Total amount of constant memory:               65536 bytes
  Total amount of shared memory per block:       49152 bytes
  Total shared memory per multiprocessor:        167936 bytes
  Total number of registers available per block: 65536
  Warp size:                                     32
  Maximum number of threads per multiprocessor:  2048
  Maximum number of threads per block:           1024
  Max dimension size of a thread block (x,y,z): (1024, 1024, 64)
  Max dimension size of a grid size    (x,y,z): (2147483647, 65535, 65535)
  Maximum memory pitch:                          2147483647 bytes
  Texture alignment:                             512 bytes
  Concurrent copy and kernel execution:          Yes with 3 copy engine(s)
  Run time limit on kernels:                     No
  Integrated GPU sharing Host Memory:            No
  Support host page-locked memory mapping:       Yes
  Alignment requirement for Surfaces:            Yes
  Device has ECC support:                        Enabled
  Device supports Unified Addressing (UVA):      Yes
  Device supports Managed Memory:                Yes
  Device supports Compute Preemption:            Yes
  Supports Cooperative Kernel Launch:            Yes
  Supports MultiDevice Co-op Kernel Launch:      Yes
  Device PCI Domain ID / Bus ID / location ID:   0 / 160 / 28
  Compute Mode:
     < Default (multiple host threads can use ::cudaSetDevice() with device simultaneously) >

Device 7: "NVIDIA A100-SXM4-40GB"
  CUDA Driver Version / Runtime Version          12.2 / 11.4
  CUDA Capability Major/Minor version number:    8.0
  Total amount of global memory:                 40339 MBytes (42298834944 bytes)
  (108) Multiprocessors, (064) CUDA Cores/MP:    6912 CUDA Cores
  GPU Max Clock rate:                            1410 MHz (1.41 GHz)
  Memory Clock rate:                             1215 Mhz
  Memory Bus Width:                              5120-bit
  L2 Cache Size:                                 41943040 bytes
  Maximum Texture Dimension Size (x,y,z)         1D=(131072), 2D=(131072, 65536), 3D=(16384, 16384, 16384)
  Maximum Layered 1D Texture Size, (num) layers  1D=(32768), 2048 layers
  Maximum Layered 2D Texture Size, (num) layers  2D=(32768, 32768), 2048 layers
  Total amount of constant memory:               65536 bytes
  Total amount of shared memory per block:       49152 bytes
  Total shared memory per multiprocessor:        167936 bytes
  Total number of registers available per block: 65536
  Warp size:                                     32
  Maximum number of threads per multiprocessor:  2048
  Maximum number of threads per block:           1024
  Max dimension size of a thread block (x,y,z): (1024, 1024, 64)
  Max dimension size of a grid size    (x,y,z): (2147483647, 65535, 65535)
  Maximum memory pitch:                          2147483647 bytes
  Texture alignment:                             512 bytes
  Concurrent copy and kernel execution:          Yes with 3 copy engine(s)
  Run time limit on kernels:                     No
  Integrated GPU sharing Host Memory:            No
  Support host page-locked memory mapping:       Yes
  Alignment requirement for Surfaces:            Yes
  Device has ECC support:                        Enabled
  Device supports Unified Addressing (UVA):      Yes
  Device supports Managed Memory:                Yes
  Device supports Compute Preemption:            Yes
  Supports Cooperative Kernel Launch:            Yes
  Supports MultiDevice Co-op Kernel Launch:      Yes
  Device PCI Domain ID / Bus ID / location ID:   0 / 160 / 29
  Compute Mode:
     < Default (multiple host threads can use ::cudaSetDevice() with device simultaneously) >
> Peer access from NVIDIA A100-SXM4-40GB (GPU0) -> NVIDIA A100-SXM4-40GB (GPU1) : Yes
> Peer access from NVIDIA A100-SXM4-40GB (GPU0) -> NVIDIA A100-SXM4-40GB (GPU2) : Yes
> Peer access from NVIDIA A100-SXM4-40GB (GPU0) -> NVIDIA A100-SXM4-40GB (GPU3) : Yes
> Peer access from NVIDIA A100-SXM4-40GB (GPU0) -> NVIDIA A100-SXM4-40GB (GPU4) : Yes
> Peer access from NVIDIA A100-SXM4-40GB (GPU0) -> NVIDIA A100-SXM4-40GB (GPU5) : Yes
> Peer access from NVIDIA A100-SXM4-40GB (GPU0) -> NVIDIA A100-SXM4-40GB (GPU6) : Yes
> Peer access from NVIDIA A100-SXM4-40GB (GPU0) -> NVIDIA A100-SXM4-40GB (GPU7) : Yes
> Peer access from NVIDIA A100-SXM4-40GB (GPU1) -> NVIDIA A100-SXM4-40GB (GPU0) : Yes
> Peer access from NVIDIA A100-SXM4-40GB (GPU1) -> NVIDIA A100-SXM4-40GB (GPU2) : Yes
> Peer access from NVIDIA A100-SXM4-40GB (GPU1) -> NVIDIA A100-SXM4-40GB (GPU3) : Yes
> Peer access from NVIDIA A100-SXM4-40GB (GPU1) -> NVIDIA A100-SXM4-40GB (GPU4) : Yes
> Peer access from NVIDIA A100-SXM4-40GB (GPU1) -> NVIDIA A100-SXM4-40GB (GPU5) : Yes
> Peer access from NVIDIA A100-SXM4-40GB (GPU1) -> NVIDIA A100-SXM4-40GB (GPU6) : Yes
> Peer access from NVIDIA A100-SXM4-40GB (GPU1) -> NVIDIA A100-SXM4-40GB (GPU7) : Yes
> Peer access from NVIDIA A100-SXM4-40GB (GPU2) -> NVIDIA A100-SXM4-40GB (GPU0) : Yes
> Peer access from NVIDIA A100-SXM4-40GB (GPU2) -> NVIDIA A100-SXM4-40GB (GPU1) : Yes
> Peer access from NVIDIA A100-SXM4-40GB (GPU2) -> NVIDIA A100-SXM4-40GB (GPU3) : Yes
> Peer access from NVIDIA A100-SXM4-40GB (GPU2) -> NVIDIA A100-SXM4-40GB (GPU4) : Yes
> Peer access from NVIDIA A100-SXM4-40GB (GPU2) -> NVIDIA A100-SXM4-40GB (GPU5) : Yes
> Peer access from NVIDIA A100-SXM4-40GB (GPU2) -> NVIDIA A100-SXM4-40GB (GPU6) : Yes
> Peer access from NVIDIA A100-SXM4-40GB (GPU2) -> NVIDIA A100-SXM4-40GB (GPU7) : Yes
> Peer access from NVIDIA A100-SXM4-40GB (GPU3) -> NVIDIA A100-SXM4-40GB (GPU0) : Yes
> Peer access from NVIDIA A100-SXM4-40GB (GPU3) -> NVIDIA A100-SXM4-40GB (GPU1) : Yes
> Peer access from NVIDIA A100-SXM4-40GB (GPU3) -> NVIDIA A100-SXM4-40GB (GPU2) : Yes
> Peer access from NVIDIA A100-SXM4-40GB (GPU3) -> NVIDIA A100-SXM4-40GB (GPU4) : Yes
> Peer access from NVIDIA A100-SXM4-40GB (GPU3) -> NVIDIA A100-SXM4-40GB (GPU5) : Yes
> Peer access from NVIDIA A100-SXM4-40GB (GPU3) -> NVIDIA A100-SXM4-40GB (GPU6) : Yes
> Peer access from NVIDIA A100-SXM4-40GB (GPU3) -> NVIDIA A100-SXM4-40GB (GPU7) : Yes
> Peer access from NVIDIA A100-SXM4-40GB (GPU4) -> NVIDIA A100-SXM4-40GB (GPU0) : Yes
> Peer access from NVIDIA A100-SXM4-40GB (GPU4) -> NVIDIA A100-SXM4-40GB (GPU1) : Yes
> Peer access from NVIDIA A100-SXM4-40GB (GPU4) -> NVIDIA A100-SXM4-40GB (GPU2) : Yes
> Peer access from NVIDIA A100-SXM4-40GB (GPU4) -> NVIDIA A100-SXM4-40GB (GPU3) : Yes
> Peer access from NVIDIA A100-SXM4-40GB (GPU4) -> NVIDIA A100-SXM4-40GB (GPU5) : Yes
> Peer access from NVIDIA A100-SXM4-40GB (GPU4) -> NVIDIA A100-SXM4-40GB (GPU6) : Yes
> Peer access from NVIDIA A100-SXM4-40GB (GPU4) -> NVIDIA A100-SXM4-40GB (GPU7) : Yes
> Peer access from NVIDIA A100-SXM4-40GB (GPU5) -> NVIDIA A100-SXM4-40GB (GPU0) : Yes
> Peer access from NVIDIA A100-SXM4-40GB (GPU5) -> NVIDIA A100-SXM4-40GB (GPU1) : Yes
> Peer access from NVIDIA A100-SXM4-40GB (GPU5) -> NVIDIA A100-SXM4-40GB (GPU2) : Yes
> Peer access from NVIDIA A100-SXM4-40GB (GPU5) -> NVIDIA A100-SXM4-40GB (GPU3) : Yes
> Peer access from NVIDIA A100-SXM4-40GB (GPU5) -> NVIDIA A100-SXM4-40GB (GPU4) : Yes
> Peer access from NVIDIA A100-SXM4-40GB (GPU5) -> NVIDIA A100-SXM4-40GB (GPU6) : Yes
> Peer access from NVIDIA A100-SXM4-40GB (GPU5) -> NVIDIA A100-SXM4-40GB (GPU7) : Yes
> Peer access from NVIDIA A100-SXM4-40GB (GPU6) -> NVIDIA A100-SXM4-40GB (GPU0) : Yes
> Peer access from NVIDIA A100-SXM4-40GB (GPU6) -> NVIDIA A100-SXM4-40GB (GPU1) : Yes
> Peer access from NVIDIA A100-SXM4-40GB (GPU6) -> NVIDIA A100-SXM4-40GB (GPU2) : Yes
> Peer access from NVIDIA A100-SXM4-40GB (GPU6) -> NVIDIA A100-SXM4-40GB (GPU3) : Yes
> Peer access from NVIDIA A100-SXM4-40GB (GPU6) -> NVIDIA A100-SXM4-40GB (GPU4) : Yes
> Peer access from NVIDIA A100-SXM4-40GB (GPU6) -> NVIDIA A100-SXM4-40GB (GPU5) : Yes
> Peer access from NVIDIA A100-SXM4-40GB (GPU6) -> NVIDIA A100-SXM4-40GB (GPU7) : Yes
> Peer access from NVIDIA A100-SXM4-40GB (GPU7) -> NVIDIA A100-SXM4-40GB (GPU0) : Yes
> Peer access from NVIDIA A100-SXM4-40GB (GPU7) -> NVIDIA A100-SXM4-40GB (GPU1) : Yes
> Peer access from NVIDIA A100-SXM4-40GB (GPU7) -> NVIDIA A100-SXM4-40GB (GPU2) : Yes
> Peer access from NVIDIA A100-SXM4-40GB (GPU7) -> NVIDIA A100-SXM4-40GB (GPU3) : Yes
> Peer access from NVIDIA A100-SXM4-40GB (GPU7) -> NVIDIA A100-SXM4-40GB (GPU4) : Yes
> Peer access from NVIDIA A100-SXM4-40GB (GPU7) -> NVIDIA A100-SXM4-40GB (GPU5) : Yes
> Peer access from NVIDIA A100-SXM4-40GB (GPU7) -> NVIDIA A100-SXM4-40GB (GPU6) : Yes

deviceQuery, CUDA Driver = CUDART, CUDA Driver Version = 12.2, CUDA Runtime Version = 11.4, NumDevs = 8
Result = PASS

=========================================
  Running sample globalToShmemAsyncCopy
=========================================

[globalToShmemAsyncCopy] - Starting...
GPU Device 0: "Ampere" with compute capability 8.0

MatrixA(1280,1280), MatrixB(1280,1280)
Running kernel = 0 - AsyncCopyMultiStageLargeChunk
Computing result using CUDA Kernel...
done
Performance= 3356.72 GFlop/s, Time= 1.250 msec, Size= 4194304000 Ops, WorkgroupSize= 256 threads/block
Checking computed result for correctness: Result = PASS

NOTE: The CUDA Samples are not meant for performance measurements. Results may vary when GPU Boost is enabled.

=========================================
  Running sample immaTensorCoreGemm
=========================================

Initializing...
GPU Device 0: "Ampere" with compute capability 8.0

M: 4096 (16 x 256)
N: 4096 (16 x 256)
K: 4096 (16 x 256)
Preparing data for GPU...
Required shared memory size: 64 Kb
Computing... using high performance kernel compute_gemm_imma 
Time: 0.988160 ms
TOPS: 139.09

=========================================
  Running sample reductionMultiBlockCG
=========================================

reductionMultiBlockCG Starting...

GPU Device 0: "Ampere" with compute capability 8.0

33554432 elements
numThreads: 1024
numBlocks: 216

Launching SinglePass Multi Block Cooperative Groups kernel
Average time: 0.152780 ms
Bandwidth:    878.503023 GB/s

GPU result = 1.992401838303
CPU result = 1.992401361465

=========================================
  Running sample shfl_scan
=========================================

Starting shfl_scan
GPU Device 0: "Ampere" with compute capability 8.0

> Detected Compute SM 8.0 hardware with 108 multi-processors
Starting shfl_scan
GPU Device 0: "Ampere" with compute capability 8.0

> Detected Compute SM 8.0 hardware with 108 multi-processors
Computing Simple Sum test
---------------------------------------------------
Initialize test data [1, 1, 1...]
Scan summation for 65536 elements, 256 partial sums
Partial summing 256 elements with 1 blocks of size 256
Test Sum: 65536
Time (ms): 0.034176
65536 elements scanned in 0.034176 ms -> 1917.603027 MegaElements/s
CPU verify result diff (GPUvsCPU) = 0
CPU sum (naive) took 0.026300 ms

Computing Integral Image Test on size 1920 x 1080 synthetic data
---------------------------------------------------
Method: Fast  Time (GPU Timer): 0.014336 ms Diff = 0
Method: Vertical Scan  Time (GPU Timer): 0.105376 ms 
CheckSum: 2073600, (expect 1920x1080=2073600)

=========================================
  Running sample simpleAWBarrier
=========================================

./simpleAWBarrier starting...
GPU Device 0: "Ampere" with compute capability 8.0

Launching normVecByDotProductAWBarrier kernel with numBlocks = 216 blockSize = 576
Result = PASSED
./simpleAWBarrier completed, returned OK

=========================================
  Running sample simpleAtomicIntrinsics
=========================================

simpleAtomicIntrinsics starting...
GPU Device 0: "Ampere" with compute capability 8.0

Processing time: 163.516998 (ms)
simpleAtomicIntrinsics completed, returned OK

=========================================
  Running sample simpleVoteIntrinsics
=========================================

[simpleVoteIntrinsics]
GPU Device 0: "Ampere" with compute capability 8.0

> GPU device has 108 Multi-Processors, SM 8.0 compute capabilities

[VOTE Kernel Test 1/3]
        Running <<Vote.Any>> kernel1 ...
        OK

[VOTE Kernel Test 2/3]
        Running <<Vote.All>> kernel2 ...
        OK

[VOTE Kernel Test 3/3]
        Running <<Vote.Any>> kernel3 ...
        OK
        Shutting down...

=========================================
  Running sample vectorAdd
=========================================

[Vector addition of 50000 elements]
Copy input data from the host memory to the CUDA device
CUDA kernel launch with 196 blocks of 256 threads
Copy output data from the CUDA device to the host memory
Test PASSED
Done

=========================================
  Running sample warpAggregatedAtomicsCG
=========================================

GPU Device 0: "Ampere" with compute capability 8.0

CPU max matches GPU max

Warp Aggregated Atomics PASSED 
