Hi, I noticed that PR #5602 has been merged to the main branch. Thank you!
If I were to compare the performance of Li2018 against Anderson2021, how should I configure machine_param=n_threads,cache_size,compute_cache_tradeoff to equalize the auto-scheduler environments?
Similarly, how should I configure the machine_param to compare the performance of Mullapudi2016 against Anderson2021?
To clarify: the current version of Mullapudi2016 can only generate CPU schedules. However, the original paper suggested a manual intervention step to generate GPU schedules. That is, to edit the generated *.schedule.h code, replacing all .vectorize(...).parallel(...) with gpu_threads(...).gpu_tiles(...) in accordance with the following rules. In the past, I found this highly effective for algorithms with fewer than ~5 Halide stages.
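For illustration, a hypothetical before/after of that manual edit. The stage and variable names are invented, not taken from any real generated schedule, and upstream Halide spells the GPU directives gpu_blocks()/gpu_threads()/gpu_tile():

```cpp
// Before: CPU schedule as Mullapudi2016 might emit it (illustrative).
output.compute_root()
      .tile(x, y, xo, yo, xi, yi, 64, 64)
      .parallel(yo)
      .vectorize(xi, 8);

// After: manual GPU rewrite per the paper's rule. The outer tile
// loops become the GPU block grid; the inner tile loops become the
// thread block dimensions.
output.compute_root()
      .tile(x, y, xo, yo, xi, yi, 16, 16)
      .gpu_blocks(xo, yo)
      .gpu_threads(xi, yi);
```

The 16x16 tile size is likewise illustrative; it would need to respect the thread-per-block limits discussed below.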
We configure the auto-scheduler to target the GPU by setting the PARALLELISM_THRESHOLD to 128, VECTOR_WIDTH to 32, and CACHE_SIZE to 48 KB. Additionally, we add two new parameters, TARGET_THREADS_PER_BLOCK and MAX_THREADS_PER_BLOCK, whose values are set to 128 and 2048 respectively. These parameters enable the auto-scheduler to avoid tiling configurations that generate too few or too many threads per GPU thread block. The inlining, tiling, and grouping processes are otherwise similar to the CPU case. Groups resulting from merging are mapped to CUDA kernels by designating the outer tile loops as GPU block grid dimensions and the inner tile loops as GPU thread block dimensions. All intermediate buffers within a group are allocated in GPU shared memory. (Source: http://graphics.cs.cmu.edu/projects/halidesched/)
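If one wanted to approximate the paper's GPU settings with the upstream Mullapudi2016 scheduler, a rough and partly hypothetical mapping to its named parameters (Halide >= 16) might look like the fragment below. Only the first two constants have plausible upstream equivalents; the vector width and thread-block limits have no counterpart in the CPU-only implementation:

```shell
autoscheduler=Mullapudi2016
autoscheduler.parallelism=128              # ~ PARALLELISM_THRESHOLD
autoscheduler.last_level_cache_size=49152  # ~ CACHE_SIZE = 48 KB
# No upstream equivalent for VECTOR_WIDTH, TARGET_THREADS_PER_BLOCK,
# or MAX_THREADS_PER_BLOCK.
```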
antonysigma changed the title from "How to evaluate/compare the GPU auto-scheduler performance?" to "How to evaluate/compare the GPU auto-scheduler performance btw Anderson2021 and Mullapudi2016?" on Apr 10, 2023.
> how do I configure machine_param=n_threads,cache_size,compute_cache_tradeoff accordingly to equalize the autoscheduler environment?

Found it. Since Halide 16.0, the machine-parameter command-line argument has become a named struct; previously, the params were specified as a comma-separated tuple.
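To sketch the difference, here are the two invocation styles side by side. The generator name and numeric values are illustrative, not taken from the thread; consult the generator docs for your Halide version before relying on either form:

```shell
# Halide <= 15: positional comma-separated tuple
#   (n_threads, cache_size, balance)
./my_gen.generator -g my_gen -o . target=host \
    auto_schedule=true machine_params=32,16777216,40

# Halide >= 16: named GeneratorParams on the selected autoscheduler
./my_gen.generator -g my_gen -o . target=host \
    autoscheduler=Mullapudi2016 \
    autoscheduler.parallelism=32 \
    autoscheduler.last_level_cache_size=16777216 \
    autoscheduler.balance=40
```

To equalize the environments across schedulers, the same parallelism and cache-size values would be passed to each autoscheduler's corresponding named parameters, where they exist.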