After running the tests, I noticed that the Transformer model takes an unusually long time to profile. Note that I only observed this on T4/V100 GPUs.
Here is the log for the transformer run:
stderr /opt/conda/lib/python3.8/site-packages/torch/cuda/memory.py:282: FutureWarning: torch.cuda.reset_max_memory_allocated now calls torch.cuda.reset_peak_memory_stats, which resets /all/ peak memory stats.
stderr warnings.warn(
stderr 2022-12-22 19:08 INFO Skyline interactive profiling session started! Listening on port 60120.
stderr 2022-12-22 19:08 INFO Project Root: /workspace/ml_tools_test/ml_tools_test/test_models/transformer
stderr 2022-12-22 19:08 INFO Entry Point: entry_point.py
Received message of length 205
new message. total: 1
total time is 2.7277989748399705
Received message of length 22585
new message. total: 2
total time is 46.452650535153225
Received message of length 78
new message. total: 3
total time is 9.933224212843925
Received message of length 104
new message. total: 4
total time is 707.5594325081911
For comparison, here is the log for the densenet run:
stderr 2022-12-22 19:05 INFO Skyline interactive profiling session started! Listening on port 60120.
stderr 2022-12-22 19:05 INFO Project Root: /workspace/ml_tools_test/ml_tools_test/test_models/densenet
stderr 2022-12-22 19:05 INFO Entry Point: entry_point.py
Received message of length 202
new message. total: 1
total time is 2.679952011909336
Received message of length 2152
new message. total: 2
total time is 27.73253683000803
Received message of length 78
new message. total: 3
total time is 8.593907973030582
Received message of length 104
new message. total: 4
total time is 24.75442389305681
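For reference, the "Received message of length ... / new message. total: ... / total time is ..." lines above look like per-request wall-clock measurements around the profiler's message handler. Below is a minimal sketch of how such timing output could be produced; the handle_message function and message loop are hypothetical stand-ins for illustration, not Skyline's actual code.

import time

def handle_message(raw_message: bytes) -> None:
    """Hypothetical placeholder for the per-request profiling work."""
    ...

def timed_message_loop(messages):
    # Mirrors the log format: report message size, running count, and elapsed time.
    total_handled = 0
    for raw in messages:
        print(f"Received message of length {len(raw)}")
        total_handled += 1
        print(f"new message. total: {total_handled}")

        start = time.perf_counter()
        handle_message(raw)
        elapsed = time.perf_counter() - start
        print(f"total time is {elapsed}")

Under that assumption, the fourth message (length 104) takes ~707 s for the transformer versus ~25 s for densenet, which is the step worth investigating.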