This repository has been archived by the owner on Mar 21, 2024. It is now read-only.

Conversation

@brycelelbach (Collaborator) commented on Feb 6, 2020

It messes up register allocation and increases register pressure, and we don't actually know at compile time how many blocks we will use (aside from single-tile kernels).

For an inclusive scan on 16-bit integers, the current launch bounds annotation tells the compiler that there are 128 threads per block and only 1 block per SM. On GV100, this means the compiler is free to use up to 255 registers without worrying about occupancy. A compiler change pushes the per-thread register usage from 39 to 57 registers. Even though the compiler could still schedule the kernel in 39 registers without spills, the __launch_bounds__ annotation tells it not to bother, because it believes only 4 warps are resident per SM.

However, the kernel is actually executed with many blocks per SM, causing occupancy-related performance issues. See #10 for some results.
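
To make the register budget concrete, here is a minimal sketch (hypothetical kernel name, not the Thrust kernel; the 65,536 registers per SM and the 255-register per-thread cap are GV100 figures) of what that annotation tells the compiler:

  // Minimal sketch, not the actual Thrust/CUB kernel.
  // __launch_bounds__(maxThreadsPerBlock, minBlocksPerMultiprocessor):
  // with 128 threads per block and a promise of only 1 resident block,
  // the per-thread budget is 65,536 regs / 128 threads = 512 registers,
  // clamped to the architectural cap of 255. Growing from 39 to 57
  // registers therefore looks "free" to the register allocator.
  __global__ void __launch_bounds__(128, 1)
  inclusive_scan_16bit_kernel(short* data)   // hypothetical name
  {
    // kernel body elided
  }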

The launch bounds come from scan.h in Thrust:

  template <int                      _BLOCK_THREADS,
            int                      _ITEMS_PER_THREAD = 1,
            cub::BlockLoadAlgorithm  _LOAD_ALGORITHM   = cub::BLOCK_LOAD_DIRECT,
            cub::CacheLoadModifier   _LOAD_MODIFIER    = cub::LOAD_DEFAULT,
            cub::BlockStoreAlgorithm _STORE_ALGORITHM  = cub::BLOCK_STORE_DIRECT,
            cub::BlockScanAlgorithm  _SCAN_ALGORITHM   = cub::BLOCK_SCAN_WARP_SCANS,
            int                      _MIN_BLOCKS       = 1>
  struct PtxPolicy

Empirically, setting _MIN_BLOCKS to 12 constrains the compiler to use only 40 registers, which recovers the performance (and occupancy).
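
For reference, the policy's _MIN_BLOCKS value ends up as the second argument of __launch_bounds__ on the kernel. A simplified sketch (hypothetical names; the real plumbing goes through Thrust's kernel agent launcher, which is not shown here) of that plumbing and of the resulting budget:

  // Simplified sketch with hypothetical names; only the shape matters:
  // the policy's MIN_BLOCKS becomes minBlocksPerMultiprocessor.
  template <class Policy, class... Args>
  __global__ void
  __launch_bounds__(Policy::BLOCK_THREADS, Policy::MIN_BLOCKS)
  kernel_agent(Args... args)
  {
    // agent body elided
  }

  // With BLOCK_THREADS = 128 and MIN_BLOCKS = 12 the compiler must keep
  // 12 blocks resident per SM:
  //   65,536 regs / (128 threads * 12 blocks) ~= 42 registers per thread,
  // which, rounded down to the (assumed) 8-register allocation granularity,
  // gives the 40 registers seen in the profile below.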

method=[ _ZN6thrust8cuda_cub4core13_kernel_agentINS0_6__scan9ScanAgentINS_6detail15normal_iteratorINS_10device_ptrIsEEEES9_NS_4plusIsEEisNS5_17integral_constantIbLb1EEEEES9_S9_SB_iN3cub13ScanTileStateIsLb1EEENS3_9DoNothingIsEEEEvT0_T1_T2_T3_T4_T5_ ] gputime=[ 378.016 ] cputime=[ 3.122 ] gridsize=[ 21846, 1, 1 ] threadblocksize=[ 128, 1, 1 ] regperthread=[ 40 ] occupancy=[ 0.750 ]

Note that the perf is actually slightly better than the baseline run without the compiler change.
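
As a sanity check on the profile line above (assuming GV100's limits of 65,536 registers and 64 warps per SM, and warp-granular register allocation in units of 256), the reported occupancy of 0.750 is exactly what 12 resident blocks of 4 warps would give:

  // Back-of-the-envelope check of the profiler line above (assumed GV100
  // limits: 65,536 registers and 64 warps per SM; registers allocated per
  // warp in units of 256).
  constexpr int regs_per_sm       = 65536;
  constexpr int max_warps_per_sm  = 64;
  constexpr int regs_per_thread   = 40;                      // regperthread=[ 40 ]
  constexpr int threads_per_block = 128;                     // threadblocksize=[ 128, 1, 1 ]
  constexpr int warps_per_block   = threads_per_block / 32;  // 4
  constexpr int regs_per_warp     = ((regs_per_thread * 32 + 255) / 256) * 256;  // 1280
  constexpr int warp_limit        = regs_per_sm / regs_per_warp;                 // 51
  constexpr int resident_blocks   = warp_limit / warps_per_block;                // 12
  constexpr int resident_warps    = resident_blocks * warps_per_block;           // 48
  // occupancy = resident_warps / max_warps_per_sm = 48 / 64 = 0.75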

Internal CI Job

Bug 2826490

Reviewed-by: Michał 'Griwes' Dominiak <[email protected]>

@griwes (Collaborator) left a comment

LGTM.

@brycelelbach brycelelbach merged commit d4b7985 into master Feb 7, 2020
@brycelelbach brycelelbach deleted the bug/nvidia/remove-min-blocks-from-launch-bounds-to-minimize-register-pressure/2826490 branch February 7, 2020 12:59