
Caliper How To


Scratch pad for Caliper + Thicket/Hatchet notes.

General Caliper Services

Runtime Report

CALI_CONFIG=runtime-report

Event Tracing

CALI_CONFIG=event-trace

Sample Based Profiling

CALI_CONFIG=sample-report
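
Each of these is passed as an environment variable when running an executable that already has Caliper annotations, e.g. (the binary name below is just a placeholder):

CALI_CONFIG=runtime-report ./my_annotated_app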

GPU Caliper Services

https://software.llnl.gov/Caliper/GPUProfiling.html

(TODO: list most useful config/services)

CMake options to enable CUDA GPU Support:

-DWITH_CUPTI=ON -DWITH_NVTX=ON -DCUDA_TOOLKIT_ROOT_DIR={path} -DCUPTI_PREFIX={path}
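
With CUPTI/NVTX enabled, the CUDA-side configs mirror the ROCm examples below (see the linked Caliper GPU profiling docs; the binary name here is a placeholder):

CALI_CONFIG=runtime-report,profile.cuda ./my_cuda_app
CALI_CONFIG=cuda-activity-report ./my_cuda_app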

CMake options to enable ROCm GPU Support:

-DWITH_ROCTRACER=ON -DWITH_ROCTX=ON -DROCM_PREFIX={path}

Example ROCm profiling runs:

CALI_CONFIG=runtime-report,profile.hip ./build-rzvernal/tests/ascent/t_ascent_gpu_data_source
CALI_CONFIG=runtime-report,rocm.gputime ./build-rzvernal/tests/ascent/t_ascent_gpu_data_source
CALI_CONFIG=rocm-activity-report,show_kernels ./build-rzvernal/tests/ascent/t_ascent_gpu_data_source

What is the best strategy to instrument our data flow networks?

Things we want to understand (see the annotation sketch below):

  • Timings
  • Aggregate host-to-device and device-to-host transfers
  • Aggregate kernel launch overhead
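
One way to get there (a minimal sketch only; the function and region names below are made up, not Ascent's actual filter API) is to wrap each phase of a filter's execution in Caliper regions so that runtime-report, profile.hip/profile.cuda, and the activity reports can aggregate transfer and launch costs per region:

#include <caliper/cali.h>

// Hypothetical filter entry point -- not Ascent's real API.
void execute_filter()
{
    CALI_CXX_MARK_FUNCTION;              // times the whole filter call

    CALI_MARK_BEGIN("host_to_device");   // placeholder region name
    // ... copy inputs to the device ...
    CALI_MARK_END("host_to_device");

    CALI_MARK_BEGIN("kernel_exec");      // placeholder region name
    // ... launch kernels ...
    CALI_MARK_END("kernel_exec");

    CALI_MARK_BEGIN("device_to_host");   // placeholder region name
    // ... copy results back ...
    CALI_MARK_END("device_to_host");
}

int main()
{
    execute_filter();
    return 0;
}

With regions like these in place, the GPU services should attribute memcpy and kernel activity to the enclosing region, which covers the three bullets above.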

MPI Caliper Services

CMake option to enable MPI Support:

-DWITH_MPI=ON

https://software.llnl.gov/Caliper/MPIProfiling.html

Overall MPI Report

CALI_CONFIG=mpi-report

Profile MPI calls inside Caliper-annotated regions

CALI_CONFIG=runtime-report,profile.mpi
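
For example, to fold MPI call times into the region report of an MPI run (the application name and rank count are placeholders):

CALI_CONFIG=runtime-report,profile.mpi mpirun -n 4 ./my_mpi_app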

(TODO: Are there MPI build constraints?)

Ascent Caliper Integration Tasks

Allow ascent_options.yaml entries to control Caliper services.

We support options for Caliper config, services, and output_file:

https://github.com/Alpine-DAV/ascent/blob/207d6fb7cc57352640abff9f0aee77b70842821b/src/libs/ascent/utils/ascent_annotations.cpp#L76

Caliper Visualization

Print Call Tree from Cali Record File

import hatchet as ht
gf = ht.GraphFrame.from_caliperreader("output.cali")
print(gf.tree())

Print Call Tree from Cali JSON File

import hatchet as ht
gf = ht.GraphFrame.from_caliper("cali_output.json")
print(gf.tree())

Simple Virtual Env Setup w/ Hatchet and Thicket Perf Tools

python3 -m venv caliper_viz
caliper_viz/bin/pip3 install llnl-hatchet llnl-thicket
caliper_viz/bin/python3 -c "import hatchet as ht; gf = ht.GraphFrame.from_caliperreader('output.cali'); print(gf.tree())"

Example to convert a cali record file to JSON:

# needs work, output isn't parsable by ht.GraphFrame.from_caliper
cali-query --json output.cali -o cali_output.json

Misc useful things

To create a trace in Chrome's trace-event format (viewable via chrome://tracing):

cali2traceevent trace.cali trace.json

Other handy runtime-report examples (adding region counts and report metadata):

env CALI_CONFIG=runtime-report,region.count,output=report.txt basic_example
CALI_CONFIG=runtime-report,print.metadata,output=report.txt mpirun -n 8 lulesh2.0 -i 4
adiak_collect_all()   // capture all entries

http://software.llnl.gov/Adiak/ApplicationAPI.html#_CPPv417adiak_collect_allv
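
A minimal usage sketch for the call above (C API, no MPI communicator passed; the layout is illustrative only):

#include <adiak.h>

int main()
{
    adiak_init(NULL);      // NULL: no MPI communicator in this sketch
    adiak_collect_all();   // record all built-in Adiak metadata entries
    // ... run the instrumented application ...
    adiak_fini();
    return 0;
}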

cali-query --help=services   # list all available services

Questions + Ideas:

  • Which Thicket / Hatchet standard plotting methods should we use?
  • We should create a tutorial example that compares an OpenMP run to a GPU run and shows how to compare the trees (rough sketch below)
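
One possible starting point for that comparison, using Hatchet's GraphFrame arithmetic (the .cali file names are placeholders for one record file from each run):

import hatchet as ht

# Load one Caliper record file from each run (names are hypothetical).
gf_omp = ht.GraphFrame.from_caliperreader("omp_run.cali")
gf_gpu = ht.GraphFrame.from_caliperreader("gpu_run.cali")

# Dividing GraphFrames unifies the call trees and yields per-region ratios,
# which reads as a speedup tree for regions common to both runs.
speedup = gf_omp / gf_gpu
print(speedup.tree())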