Merged
115 commits
0e42e4b
integrate deep gemm (#1265)
lalala-sh Oct 29, 2025
6dd30b8
add a tuned config and insert entries in untuned config (#1243)
hongxiayang Oct 29, 2025
c6f1644
Enable large batch size and optimization of non-Ragged batching (#1269)
valechen Oct 29, 2025
de20a5d
add few more fw ds f4 untuned and tuned shapes for using asm kernel (…
hongxiayang Oct 30, 2025
7828dc9
CI: Optimize autotuning pipeline and inital the docs (#1286)
gyohuangxin Oct 30, 2025
4ff8d42
topk per row kernel (#1262)
ukannika Oct 30, 2025
4d1edfe
fix aot (#1279)
fsx950223 Oct 30, 2025
1bcc436
Fix ATOM fp8 model quant fail issue in torch compile (#1299)
ZhangLirong-amd Oct 30, 2025
4559713
feat - pa_fwd support block map with stride in num_kv_heads_dim (#1301)
alibaba-miji Oct 30, 2025
e8ec7b2
Fix how to update accumulator for dot_scaled (#1297)
zhanglx13 Oct 30, 2025
2c0d49d
CI: Optimize autotuning pipeline docs (#1300)
gyohuangxin Oct 31, 2025
0ec3be3
Fix the lint issue (#1307)
gyohuangxin Oct 31, 2025
edfd22a
fix fwd perf calc error (#1305)
minmengdie Oct 31, 2025
dd665b0
add the asm kernel performance of fwd and bwd (#1270)
minmengdie Oct 31, 2025
16dd510
Fused TopK and Sigmoid kernel (#1251)
samremes Oct 31, 2025
1e4318e
Ar rms (#1290)
TennyWang1223 Oct 31, 2025
4f30288
Dsv32 cache (#1314)
junhaha666 Nov 1, 2025
99b6020
Fix displaying supported architectures (#1316)
HollowMan6 Nov 1, 2025
c806ccd
using standalone pybind (#1317)
valarLip Nov 2, 2025
317c3c2
Enable mha bwd hd192_hd128 (#1308)
slippedJim Nov 3, 2025
a83384e
CI: Add pre-check status check (#1252)
gyohuangxin Nov 3, 2025
52dda0d
[CK_TILE] fmha: Add backward pass support for padded inputs (#1212)
Jeff-Huang Nov 3, 2025
18cbcca
Mla splitkv enhance split alg inte (#1233)
valarLip Nov 3, 2025
7ea8873
Fix gemm tuner error mi350 (#1313)
yzhou103 Nov 3, 2025
fc04c84
CI: Skip triton setup in Aiter standard/multigpu tests and add retrie…
gyohuangxin Nov 4, 2025
989382b
Fix global variable torch_fp8 initialization caused issue (#1322)
huizhougit Nov 4, 2025
a0c8cee
Add transpose scale to the triton fused_rms_fp8_group_quant (#1291)
tjtanaa Nov 4, 2025
f245f52
[Triton] 355 wip Llama FP4 triton fusion + TP8 triton decode shape tu…
k50112113 Nov 4, 2025
965b563
[TRITON] Kernel naming: add reusable constexpr repr helper (#1260)
Boss2002n Nov 4, 2025
474c108
Merge tuned file (#1327)
yzhou103 Nov 5, 2025
71e1838
fix graph_breaks by return tensor for bool op (#1333)
ZhangLirong-amd Nov 5, 2025
078ad73
fix_bf16gemm_asm (#1329)
amd-ruitang3 Nov 5, 2025
74932b7
Improve Memory Usage in MLA (#1338)
ruanjm Nov 5, 2025
44ff7b4
fix tune error caused by merge tuned_file (#1342)
yzhou103 Nov 5, 2025
0209376
rm rocblas in tuner (#1337)
yzhou103 Nov 5, 2025
2a92909
[Triton] DS a16w8 GEMM and fused reduce_rms_fp8_group_quant (#1328)
k50112113 Nov 5, 2025
533db7d
Add block_m=16 for a8w8_ck_moe_blockscale (#1081)
huaiguxu Nov 6, 2025
10f0496
Add Fused RMSNorm + FP8 Per-tensor Static Quantization Triton Kernel …
farlukas Nov 6, 2025
87ed78d
[TRITON] GEMM kernels nomenclature changes (#1283)
Boss2002n Nov 6, 2025
6332466
Temporarily run aiter standard and multigpu tests on the TW cluster, …
gyohuangxin Nov 7, 2025
a1f000a
[Triton] Disable failing lean attention tests (#1357)
cagrikymk Nov 7, 2025
fc89837
update ck to fix fp4 gemm issue (#1361)
gino-lu Nov 7, 2025
ddf4337
add config (#1355)
valarLip Nov 7, 2025
8a3d15a
add how_v3_bf16_cvt control to the Python API (#1351)
minmengdie Nov 7, 2025
7d21a15
[fix]: car 6 rank coredump (#1335)
TennyWang1223 Nov 7, 2025
7f67e81
Wrapper_flash_attn_backward custom op to avoid functionalize fallback…
ZhangLirong-amd Nov 7, 2025
a160d59
[TRITON] GEMM kernels nomenclature changes (#1292)
Boss2002n Nov 7, 2025
e035caf
[TRITON] Initial implementations of sparse attention kernels (#1296)
cagrikymk Nov 7, 2025
8592369
[MI35X]cktile moe a16w4 support (#1341)
solinzby1 Nov 7, 2025
e96e55b
[TRITON] Batched GEMM kernels nomenclature changes (#1293)
Boss2002n Nov 7, 2025
81b6b7d
[TRITON] Instruction shape fix for Gluon gemm_a8w8_blockscale kernel …
eky-amd Nov 7, 2025
c8ab1c6
moe mxfp4 block_m = 64/128 (#1266)
xudoyuan Nov 8, 2025
f0740ec
bug fix (#1370)
valarLip Nov 8, 2025
82dadf0
[opus] enhance opus utility (#1324)
carlushuang Nov 9, 2025
730c74e
Fix issue in metadata v1.2 where batch size is too large (#1352)
ruanjm Nov 10, 2025
d952724
[GEMM][Config] add a8w8 block scale tuned config for deepseek-v3 (#1310)
gbyu-amd Nov 10, 2025
45aeb27
support all logit values (#1323)
ukannika Nov 10, 2025
2381ba7
CI: Skip triton in Aiter standard and multigpu tests (#1374)
gyohuangxin Nov 10, 2025
3666481
add the performance data bar chart in the readme (#1372)
minmengdie Nov 10, 2025
caa0b1b
force ds ptpc moe use 1 stage moe (#1373)
junhaha666 Nov 10, 2025
1cfe5c7
[TRITON] MHA PA optimizations (#1245)
cagrikymk Nov 10, 2025
219fc47
Enable fa multii target build on other arch (#1318)
slippedJim Nov 11, 2025
a43c525
[Triton] DS FP4 triton fusion (#1371)
k50112113 Nov 11, 2025
38ed554
[TRITON] Simplify and optimize triton_kernels moe code and move it in…
lburzawa Nov 11, 2025
f6c9053
Use torch.zeros_like instead of empty_like to prevent accruacy drop (…
hubertlu-tw Nov 12, 2025
15dc85c
CI: Temporarily using old vllm nightly image (#1389)
gyohuangxin Nov 12, 2025
1335970
Revert "[Triton] DS FP4 triton fusion (#1371)" (#1392)
valarLip Nov 12, 2025
d30cbb8
add a8w8 ptpc gemm config for dsv3 (#1382)
junhaha666 Nov 12, 2025
e347c25
Test the CI on both MI325 and MI355 (#1364)
gyohuangxin Nov 13, 2025
d392d09
[Triton] change BF16 GEMM config filename (#1398)
k50112113 Nov 13, 2025
0b10ddc
Support distributed_init_method and DP in init_distributed (#1353)
ZhangLirong-amd Nov 13, 2025
bb08ef7
FA V3(fp8) and paged Attention compressed (CI green) (#1065)
micmelesse Nov 13, 2025
ac5a49c
is_shuffled (#1377)
xudoyuan Nov 14, 2025
57bfcbb
Ar rms new interface (#1401)
TennyWang1223 Nov 14, 2025
e0c7ac7
minor fix for mi355 (#1408)
valarLip Nov 14, 2025
2725ef0
[Fix] Add sliding window feature for paged_attention_v1 (#1362)
luocheng25 Nov 14, 2025
1d7d547
fused_qk_rope_cat_and_cache_mla: Fix Triton compilation error and bat…
xudonlyu Nov 15, 2025
499ce79
max mla splits perbatch (#1390)
Zzz9990 Nov 17, 2025
4be9152
topk_per_row_opt (#1394)
valarLip Nov 17, 2025
20db0ab
Fix fused_rms_mxfp4_quant comment (#1369)
drewads Nov 17, 2025
a66d880
leanAttn softmax fix for spurious data mismatch test failures (#1396)
valechen Nov 17, 2025
19513d0
Add reduce_scatter api (#1413)
ZhangLirong-amd Nov 18, 2025
239f797
fix error in fmoe_tuner (#1405)
yzhou103 Nov 18, 2025
4805108
optimize thread divergence (#1421)
carlushuang Nov 18, 2025
e702ad4
[TRITON] complex number multiplication that supports 3D ROPE triton k…
LiuYinfeng01 Nov 18, 2025
06e74dc
Feat: pa_mqa_logits performance optimization & support kv_preshuffle …
sjfeng1999 Nov 18, 2025
ed3bcef
[Config] add tuned moe and gemm config for qwen3 235b (#1378)
gbyu-amd Nov 18, 2025
842b801
fix repeated unnecessary device check (#1221)
b8zhong Nov 18, 2025
acea958
remove lru func in fake (#1429)
ZhangLirong-amd Nov 19, 2025
969bd7e
Temporarily disable the test on mi355 (#1437)
gyohuangxin Nov 19, 2025
a5ebd65
Enable MI355 test on main branch
gyohuangxin Nov 19, 2025
196a915
CI: Aiter tests bug fix
gyohuangxin Nov 19, 2025
a395ccf
[M308] tune silu&act (#1404)
zufayu Nov 19, 2025
87a9abb
add deepseek ep moe tune config (#1431)
junhaha666 Nov 19, 2025
b48237c
[TRITON] Moe a8w4 tuning (#1410)
lburzawa Nov 19, 2025
98897d1
[TRITON] Apply config-aware naming (kernel_repr) to attention kernel…
Boss2002n Nov 19, 2025
971a86d
[fix]: prebuild gen so (#1412)
TennyWang1223 Nov 19, 2025
6f11d71
[TRITON] FP8 MQA optimizations (#1422)
cagrikymk Nov 20, 2025
b8e19c8
redirect asm_moe_tkw1 call to fused_moe in order to force kernel tuni…
antsaukk Nov 20, 2025
9c8ccbe
CI: Move some tests to MI355 due to the network issue of TW cluster (…
gyohuangxin Nov 20, 2025
b2d86ef
CI: Move Triton tests from TW cluster to internal cluster (#1451)
gyohuangxin Nov 20, 2025
f8d8709
tune a8w8_blockscale&bpreshuffle for tencent (#1444)
LJ-underdog Nov 20, 2025
9a30baf
[fix]: add ar switch (#1376)
TennyWang1223 Nov 21, 2025
94d83fd
update 3rdparty/composable_kernel to e31a7a4
zhuyuhua-v Nov 23, 2025
5edc3d4
remote maxsize for mla lru cache
Zzz9990 Nov 19, 2025
9322323
enable extenal num_kv_splits & num_kv_splits_ptr
Zzz9990 Nov 21, 2025
c062ce0
fix output/lse is nan when kseq=0
minmengdie Nov 19, 2025
48cb253
fix gfx950 128_128 fwd_v3
minmengdie Nov 20, 2025
092f6ef
update the k_seq=0 error in MI300 and MI308
minmengdie Nov 20, 2025
5705ad4
update the smoke test
minmengdie Nov 21, 2025
458c325
update the smoke test
minmengdie Nov 21, 2025
0d167eb
fix MI300 and MI308 err
minmengdie Nov 21, 2025
e60feb8
fix qseq >> kseq error MI300 and MI308
minmengdie Nov 21, 2025
d679df6
fix qseq >> kseq error in MI355
minmengdie Nov 21, 2025
82abc1c
fix the MI300 error
minmengdie Nov 21, 2025
37 changes: 37 additions & 0 deletions .github/scripts/build_aiter_triton.sh
@@ -0,0 +1,37 @@
#!/bin/bash

set -ex

echo
echo "==== ROCm Packages Installed ===="
dpkg -l | grep rocm || echo "No ROCm packages found."

echo
echo "==== Install dependencies and aiter ===="
pip install --upgrade pandas zmq einops numpy==1.26.2
pip uninstall -y aiter || true
pip install --upgrade "pybind11>=3.0.1"
pip install --upgrade "ninja>=1.11.1"
python3 setup.py develop

# Read BUILD_TRITON env var, default to 1. If 1, install Triton; if 0, skip installation.
BUILD_TRITON=${BUILD_TRITON:-1}

if [[ "$BUILD_TRITON" == "1" ]]; then
  echo
  echo "==== Install triton ===="
  pip uninstall -y triton || true
  git clone --depth=1 https://github.com/triton-lang/triton || true
  cd triton
  pip install -r python/requirements.txt
  pip install filecheck
  MAX_JOBS=64 pip --retries=5 install .
  cd ..
else
  echo
  echo "[SKIP] Triton installation skipped because BUILD_TRITON=$BUILD_TRITON"
fi

echo
echo "==== Show installed packages ===="
pip list
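The `BUILD_TRITON` gate above relies on Bash default-value parameter expansion: unset or empty means "build Triton", while an explicit `0` skips it. A minimal stand-alone sketch of that gating logic (variable name from the script above; the invocations are illustrative, not from this PR):

```shell
# Mirrors build_aiter_triton.sh: BUILD_TRITON defaults to 1 when unset.
unset BUILD_TRITON
echo "default:  ${BUILD_TRITON:-1}"   # -> default:  1

# An exported 0 survives the expansion and disables the Triton build,
# as the CI jobs in this PR do with `BUILD_TRITON=0 ./build_aiter_triton.sh`.
BUILD_TRITON=0
echo "explicit: ${BUILD_TRITON:-1}"   # -> explicit: 0
```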
27 changes: 0 additions & 27 deletions .github/scripts/build_triton.sh

This file was deleted.

34 changes: 34 additions & 0 deletions .github/scripts/check_signal.sh
@@ -0,0 +1,34 @@
#!/bin/bash

# This script attempts to download a pre-checks artifact from a GitHub workflow up to 5 times.
# If the artifact is found and the signal indicates success, the workflow continues.
# If the signal indicates failure, the workflow is skipped with details printed.
# If the artifact cannot be downloaded after all retries, the workflow exits with an error.

set -e

ARTIFACT_NAME="checks-signal-${GITHUB_SHA:-${1}}"
MAX_RETRIES=5

for i in $(seq 1 $MAX_RETRIES); do
  echo "Attempt $i: Downloading artifact..."
  if gh run download --name "$ARTIFACT_NAME"; then
    if [ -f checks_signal.txt ]; then
      echo "Artifact $ARTIFACT_NAME downloaded successfully."
      SIGNAL=$(head -n 1 checks_signal.txt)
      if [ "$SIGNAL" = "success" ]; then
        echo "Pre-checks passed, continuing workflow."
        exit 0
      else
        echo "Pre-checks failed, skipping workflow. Details:"
        tail -n +2 checks_signal.txt
        exit 78 # 78 = neutral/skip
      fi
    fi
  fi
  echo "Artifact not found, retrying in 30s..."
  sleep 30
done

echo "Failed to download pre-checks artifact after $MAX_RETRIES attempts. Exiting workflow."
exit 1
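The script above reads a small two-part signal file: line 1 is the verdict, the remaining lines are failure details. A sketch of that parsing on a hand-written sample file (file contents are an assumed example matching the format the pre-checks workflow writes):

```shell
# Sample signal file in the format produced by pre-checks.yaml:
# first line = verdict, following lines = per-job failure details.
printf 'failure\nFAILED: black (failure)\n' > checks_signal.txt

SIGNAL=$(head -n 1 checks_signal.txt)   # verdict line only
echo "verdict: $SIGNAL"                 # -> verdict: failure
tail -n +2 checks_signal.txt            # -> FAILED: black (failure)

rm checks_signal.txt
```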
20 changes: 17 additions & 3 deletions .github/scripts/op_tune.sh
@@ -18,17 +18,31 @@ testFailedFiles=()
declare -a tune_jobs=(
"ck_batched_gemm_a8w8:csrc/ck_batched_gemm_a8w8:op_tests/test_batched_gemm_a8w8.py:python3 csrc/ck_batched_gemm_a8w8/batched_gemm_a8w8_tune.py -i aiter/configs/a8w8_untuned_batched_gemm.csv -o aiter/configs/a8w8_tuned_batched_gemm.csv"
"ck_batched_gemm_bf16:csrc/ck_batched_gemm_bf16:op_tests/test_batched_gemm_bf16.py:python3 csrc/ck_batched_gemm_bf16/batched_gemm_bf16_tune.py -i aiter/configs/bf16_untuned_batched_gemm.csv -o aiter/configs/bf16_tuned_batched_gemm.csv"
# "csrc/ck_gemm_a4w4_blockscale:op_tests/test_gemm_a4w4_blockscale.py:python3 csrc/ck_gemm_a4w4_blockscale/gemm_a4w4_blockscale_tune.py -i aiter/configs/a4w4_blockscale_untuned_gemm.csv -o aiter/configs/a4w4_blockscale_tuned_gemm.csv"
"ck_gemm_a8w8:csrc/ck_gemm_a8w8:op_tests/test_gemm_a8w8.py:python3 csrc/ck_gemm_a8w8/gemm_a8w8_tune.py -i aiter/configs/a8w8_untuned_gemm.csv -o aiter/configs/a8w8_tuned_gemm.csv"
"ck_gemm_a8w8_blockscale:csrc/ck_gemm_a8w8_blockscale:op_tests/test_gemm_a8w8_blockscale.py:python3 csrc/ck_gemm_a8w8_blockscale/gemm_a8w8_blockscale_tune.py -i aiter/configs/a8w8_blockscale_untuned_gemm.csv -o aiter/configs/a8w8_blockscale_tuned_gemm.csv"
"ck_gemm_a8w8_blockscale_bpreshuffle:csrc/ck_gemm_a8w8_blockscale_bpreshuffle:op_tests/test_gemm_a8w8_blockscale.py:python3 csrc/ck_gemm_a8w8_blockscale_bpreshuffle/gemm_a8w8_blockscale_bpreshuffle_tune.py -i aiter/configs/a8w8_blockscale_bpreshuffle_untuned_gemm.csv -o aiter/configs/a8w8_blockscale_bpreshuffle_tuned_gemm.csv"
"ck_gemm_a8w8_bpreshuffle:csrc/ck_gemm_a8w8_bpreshuffle:op_tests/test_gemm_a8w8.py:python3 csrc/ck_gemm_a8w8_bpreshuffle/gemm_a8w8_bpreshuffle_tune.py -i aiter/configs/a8w8_bpreshuffle_untuned_gemm.csv -o aiter/configs/a8w8_bpreshuffle_tuned_gemm.csv"
#"ck_gemm_a4w4_blockscale:csrc/ck_gemm_a4w4_blockscale:op_tests/test_gemm_a4w4_blockscale.py:python3 csrc/ck_gemm_a4w4_blockscale/gemm_a4w4_blockscale_tune.py -i aiter/configs/a4w4_blockscale_untuned_gemm.csv -o aiter/configs/a4w4_blockscale_tuned_gemm.csv"
)

for job in "${tune_jobs[@]}"; do
IFS=':' read -r shape dir test_path tune_cmd <<< "$job"
if [ -n "$shape_filter" ] && [ "$shape" != "$shape_filter" ]; then
continue
# If shape_filter is not empty, check if the current shape exists in the filter list.
# shape_filter is a comma-separated list, e.g. "ck_gemm_a8w8,ck_batched_gemm_a8w8"
if [ -n "$shape_filter" ]; then
# Remove all whitespace from the shape_filter string
shape_filter_no_space="${shape_filter//[[:space:]]/}"
IFS=',' read -ra filter_shapes <<< "$shape_filter_no_space"
found_match=false
for filter_shape in "${filter_shapes[@]}"; do
if [[ "$shape" == "$filter_shape" ]]; then
found_match=true
break
fi
done
if [ "$found_match" = false ]; then
continue
fi
fi
echo "============================================================"
echo "🧪 Processing shape: $shape under directory: $dir"
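The new filter logic in op_tune.sh upgrades `shape_filter` from a single name to a comma-separated list, tolerating stray whitespace. A stand-alone sketch of the matching (variable names taken from the diff above; the sample filter string is illustrative):

```shell
# Comma-separated filter with stray spaces, as a user might pass it:
shape_filter="ck_gemm_a8w8, ck_batched_gemm_a8w8"
shape="ck_gemm_a8w8"

# Strip all whitespace, then split on commas into an array.
shape_filter_no_space="${shape_filter//[[:space:]]/}"
IFS=',' read -ra filter_shapes <<< "$shape_filter_no_space"

found_match=false
for filter_shape in "${filter_shapes[@]}"; do
  if [[ "$shape" == "$filter_shape" ]]; then
    found_match=true
    break
  fi
done
echo "$found_match"   # -> true
```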
27 changes: 20 additions & 7 deletions .github/workflows/aiter-test.yaml
Expand Up @@ -15,8 +15,21 @@ env:
DOCKER_IMAGE: "rocm/pytorch:latest"

jobs:
check-signal:
runs-on: ubuntu-latest
steps:
- name: Checkout code
uses: actions/checkout@v4

- name: Download and check signal artifact
run: ./.github/scripts/check_signal.sh
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
GITHUB_SHA: ${{ github.sha }}

define-runners:
runs-on: ubuntu-latest
needs: [check-signal]
outputs:
standard_runners: ${{ steps.machines.outputs.standard_runners }}
multigpu_runners: ${{ steps.machines.outputs.multigpu_runners }}
@@ -28,15 +41,15 @@ jobs:
set -euo pipefail
pr_title="${{ github.event.pull_request.title }}"
if [[ "${{ github.ref }}" == "refs/heads/main" ]]; then
echo "It's main branch, running tests on MI300 and MI35X..."
echo "It's main branch, running tests on MI325 and MI35X..."
echo 'standard_runners=["aiter-mi355-1gpu"]' >> "$GITHUB_OUTPUT"
echo 'multigpu_runners=["aiter-mi355-8gpu"]' >> "$GITHUB_OUTPUT"
elif echo "$pr_title" | grep -qi "mi35x"; then
echo "PR title contains 'MI35X', running tests on MI300 and MI35X..."
echo "PR title contains 'MI35X', running tests on MI325 and MI35X..."
echo 'standard_runners=["aiter-mi355-1gpu"]' >> "$GITHUB_OUTPUT"
echo 'multigpu_runners=["aiter-mi355-8gpu"]' >> "$GITHUB_OUTPUT"
else
echo "Not main branch and PR title does not contain mi35x, only running on MI300..."
echo "Not main branch and PR title does not contain mi35x, only running on MI325..."
echo 'standard_runners=["aiter-mi355-1gpu"]' >> "$GITHUB_OUTPUT"
echo 'multigpu_runners=["aiter-mi355-8gpu"]' >> "$GITHUB_OUTPUT"
fi
@@ -91,14 +104,14 @@ jobs:
--name aiter_test \
${{ env.DOCKER_IMAGE }}

- name: Setup-Triton
- name: Setup Aiter
run: |
set -ex
echo "Setting up Triton..."
echo "Setting up Aiter..."
docker exec \
-w /workspace \
aiter_test \
./.github/scripts/build_triton.sh
bash -c "BUILD_TRITON=0 ./.github/scripts/build_aiter_triton.sh"

- name: Tests
run: |
@@ -177,7 +190,7 @@ jobs:
docker exec \
-w /workspace \
aiter_test \
./.github/scripts/build_triton.sh
bash -c "BUILD_TRITON=0 ./.github/scripts/build_aiter_triton.sh"

- name: Tests
run: |
8 changes: 0 additions & 8 deletions .github/workflows/black.yaml

This file was deleted.

19 changes: 0 additions & 19 deletions .github/workflows/deps.yaml

This file was deleted.

11 changes: 4 additions & 7 deletions .github/workflows/operators-tuning.yaml
@@ -1,13 +1,10 @@
name: Operators Tuning

on:
pull_request:
paths:
- 'aiter/configs/*untuned*.csv'
workflow_dispatch:
inputs:
shapes:
description: 'Comma separated shape names to run (leave empty for all)'
description: 'Comma separated shape names to run, e.g. ck_batched_gemm_a8w8, ck_gemm_a8w8, ck_gemm_a8w8_blockscale, ck_gemm_a8w8_blockscale_bpreshuffle, ck_gemm_a8w8_bpreshuffle etc. (leave empty for all)'
required: false
default: ''
arguments:
@@ -57,14 +54,14 @@ jobs:
--name operators_tuning_test \
rocm/pytorch:latest

- name: Setup-Triton
- name: Setup Aiter and Triton
run: |
set -ex
echo "Setting up Triton..."
echo "Setting up Aiter and Triton..."
docker exec \
-w /workspace \
operators_tuning_test \
./.github/scripts/build_triton.sh
bash -c "BUILD_TRITON=0 ./.github/scripts/build_aiter_triton.sh"

- name: Show Computing Units
run: |
88 changes: 88 additions & 0 deletions .github/workflows/pre-checks.yaml
@@ -0,0 +1,88 @@
name: Checks

on:
  push:
    branches: [ "main" ]
  pull_request:
    branches: [ "main" ]

jobs:
  check-ck:
    name: Check Repository Dependency
    runs-on: ubuntu-latest
    steps:
      - name: Checkout code
        uses: actions/checkout@v4
        with:
          submodules: 'recursive'

      - name: Verify 3rdparty commits
        run: ./.github/scripts/check_deps.sh

  black:
    name: Check Code Style with Black
    runs-on: ubuntu-latest
    steps:
      - name: Checkout code
        uses: actions/checkout@v4
      - name: Run Black
        uses: psf/black@stable

  ruff:
    name: Check Code Style with Ruff
    runs-on: ubuntu-latest
    steps:
      - name: Checkout code
        uses: actions/checkout@v4
      - name: Set up Python environment
        uses: actions/setup-python@v2
        with:
          python-version: "3.12"
      - name: Install dependencies
        run: pip3 install ruff
      - name: Install reviewdog
        uses: reviewdog/action-setup@e04ffabe3898a0af8d0fb1af00c188831c4b5893
      - name: Run ruff with reviewdog
        env:
          REVIEWDOG_GITHUB_API_TOKEN: ${{ secrets.GITHUB_TOKEN }}
        run: |
          ruff check . -e | reviewdog -efm="%f:%l:%c: %m" -diff="git diff FETCH_HEAD" -reporter=github-pr-check -tee

  upload-success-artifact:
    name: Upload Success Signal
    runs-on: ubuntu-latest
    needs: [check-ck, black, ruff]
    steps:
      - name: Create success signal file
        run: echo "success" > checks_signal.txt

      - name: Upload success artifact
        uses: actions/upload-artifact@v4
        with:
          name: checks-signal-${{ github.sha }}
          path: checks_signal.txt

  upload-failure-artifact:
    name: Upload Failure Signal
    runs-on: ubuntu-latest
    needs: [check-ck, black, ruff]
    if: ${{ always() && (needs.check-ck.result != 'success' || needs.black.result != 'success' || needs.ruff.result != 'success') }}
    steps:
      - name: Create failure signal file with failed jobs
        run: |
          echo "failure" > checks_signal.txt
          if [ "${{ needs.check-ck.result }}" != "success" ]; then
            echo "FAILED: check-ck (${{ needs.check-ck.result }})" >> checks_signal.txt
          fi
          if [ "${{ needs.black.result }}" != "success" ]; then
            echo "FAILED: black (${{ needs.black.result }})" >> checks_signal.txt
          fi
          if [ "${{ needs.ruff.result }}" != "success" ]; then
            echo "FAILED: ruff (${{ needs.ruff.result }})" >> checks_signal.txt
          fi

      - name: Upload failure artifact
        uses: actions/upload-artifact@v4
        with:
          name: checks-signal-${{ github.sha }}
          path: checks_signal.txt
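Both sides of the handshake key the artifact on the commit SHA: pre-checks.yaml uploads `checks-signal-${{ github.sha }}`, and check_signal.sh reconstructs the same name, falling back to its first positional argument when `GITHUB_SHA` is unset. A sketch of that name construction (sample SHA values are illustrative):

```shell
# Name as built in check_signal.sh: env var first, then positional fallback.
GITHUB_SHA="0e42e4b"
ARTIFACT_NAME="checks-signal-${GITHUB_SHA:-${1}}"
echo "$ARTIFACT_NAME"   # -> checks-signal-0e42e4b

# With GITHUB_SHA unset, $1 supplies the SHA instead:
unset GITHUB_SHA
set -- "deadbee"
echo "checks-signal-${GITHUB_SHA:-${1}}"   # -> checks-signal-deadbee
```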
20 changes: 0 additions & 20 deletions .github/workflows/ruff.yaml

This file was deleted.
