Timing info in external test JSON reports #15023
Merged
Force-pushed from 01dfa10 to 7248746
r0qs reviewed Apr 15, 2024
Force-pushed from 7248746 to 6eceae6
Force-pushed from 6eceae6 to 019d45e
This was referenced Apr 15, 2024
r0qs approved these changes Apr 15, 2024
Author (Collaborator):
So, with this PR merged, this is the hacky way to quickly get a timing table from external test benchmarks:

**Script**

```sh
jq '[
    ["brink", ."brink" ."ir-optimize-evm+yul".compilation_time.user],
    ["colony", ."colony" ."ir-optimize-evm+yul".compilation_time.user],
    ["elementfi", ."elementfi" ."ir-optimize-evm+yul".compilation_time.user],
    ["ens", ."ens" ."ir-optimize-evm+yul".compilation_time.user],
    ["euler", ."euler" ."ir-optimize-evm+yul".compilation_time.user],
    ["gnosis", ."gnosis" ."ir-optimize-evm+yul".compilation_time.user],
    ["gp2", ."gp2" ."ir-optimize-evm+yul".compilation_time.user],
    ["perpetual-pools", ."perpetual-pools" ."ir-optimize-evm+yul".compilation_time.user],
    ["pool-together", ."pool-together" ."ir-optimize-evm+yul".compilation_time.user],
    ["uniswap", ."uniswap" ."ir-optimize-evm+yul".compilation_time.user],
    ["yield_liquidator", ."yield_liquidator" ."ir-optimize-evm+yul".compilation_time.user],
    ["zeppelin", ."zeppelin" ."ir-optimize-evm+yul".compilation_time.user]
]' all-benchmarks.json | python -c "$(cat <<EOF
import tabulate, json, sys
print(tabulate.tabulate(json.load(sys.stdin), tablefmt="github", headers=["Test", "Time"]))
EOF
)"
```

Could be improved to avoid having to list the keys by hand, and to round the numbers, but for now it's good enough.

EDIT: Version without hard-coded keys:

```sh
jq '[to_entries[] | [.key, .value."ir-optimize-evm+yul".compilation_time.user]]' all-benchmarks.json | python -c "$(cat <<EOF
import tabulate, json, sys
print(tabulate.tabulate(json.load(sys.stdin), tablefmt="github", headers=["Test", "Time"]))
EOF
)"
```

**Result**
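For a quick sanity check without jq or tabulate, the same `to_entries`-style iteration can be sketched in pure Python. The sample data below is made up and only mirrors the shape of `all-benchmarks.json`:

```python
import json

# Hypothetical miniature of all-benchmarks.json (real reports have more projects).
sample = json.loads("""
{
    "brink": {"ir-optimize-evm+yul": {"compilation_time": {"user": 12.34}}},
    "zeppelin": {"ir-optimize-evm+yul": {"compilation_time": {"user": 56.78}}}
}
""")

# One [project, time] row per top-level key, like jq's to_entries[].
rows = [
    [project, presets["ir-optimize-evm+yul"]["compilation_time"]["user"]]
    for project, presets in sample.items()
]
for project, seconds in rows:
    print(f"{project}\t{seconds}")
```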
Author (Collaborator):
Here are some more streamlined scripts, with automatic iteration over keys, rounding, and downloading of benchmark results. To use them, just fill in the branch name and paste them into a shell.

**Timing of a single branch**

```sh
branch="<BRANCH NAME HERE>"
preset=ir-optimize-evm+yul

function timing-table-script {
cat <<EOF
import tabulate, json, sys

def as_seconds(value):
    return (str(value) + " s") if value is not None else None

table = json.load(sys.stdin)
table = [[row[0], as_seconds(row[1])] + row[2:] for row in table]
headers = ["Project", "Time"]
alignment = ("left", "right")
print(tabulate.tabulate(table, tablefmt="pipe", headers=headers, colalign=alignment))
EOF
}

function tabulate-ext-timing {
    python -c "$(timing-table-script)" "$@"
}

function ext-timing-list {
    local preset="$1"
    jq "[to_entries[] | [.key, (.value.\"${preset}\".compilation_time.user | if . != null then round else . end)]]" "${@:2}"
}

scripts/externalTests/download_benchmarks.py --branch "$branch"
cat "all-benchmarks-${branch}"*.json | ext-timing-list "$preset" | tabulate-ext-timing
```
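The null handling in the jq filter (`if . != null then round else . end`) and the `as_seconds` formatting can be mirrored in plain Python. A minimal sketch; the function name `round_or_none` is mine, and note that Python's `round` uses banker's rounding for halfway values, so it can differ from jq's `round` on inputs like `0.5`:

```python
def round_or_none(value):
    # Benchmarks that failed have no timing; propagate None instead of raising.
    return round(value) if value is not None else None

def as_seconds(value):
    # Format a number of seconds for display; keep None as None.
    return (str(value) + " s") if value is not None else None

print(as_seconds(round_or_none(12.6)))   # "13 s"
print(as_seconds(round_or_none(None)))   # None
```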
**Timing comparison between two branches**

```sh
before_branch=develop
after_branch="<BRANCH NAME HERE>"
preset=ir-optimize-evm+yul

function diff-table-script {
cat <<EOF
import tabulate, json, sys

def time_diff(before, after):
    return (after - before) if after is not None and before is not None else None

def as_seconds(value):
    return (str(value) + " s") if value is not None else None

data = json.load(sys.stdin)
table = [[
    project,
    as_seconds(data[0][project]),
    as_seconds(data[1][project]),
    as_seconds(time_diff(data[0][project], data[1][project])),
] for project in data[0].keys()]
headers = ["Project", "Before", "After", "Diff"]
alignment = ("left", "right", "right", "right")
print(tabulate.tabulate(table, tablefmt="pipe", headers=headers, colalign=alignment))
EOF
}

function tabulate-ext-timing-diff {
    python -c "$(diff-table-script)" "$@"
}

function ext-timing-dict {
    local preset="$1"
    jq "[to_entries[] | {(.key): (.value.\"${preset}\".compilation_time.user | if . != null then round else . end)}] | add" "${@:2}"
}

scripts/externalTests/download_benchmarks.py --branch "$before_branch"
scripts/externalTests/download_benchmarks.py --branch "$after_branch"
{
    cat "all-benchmarks-${before_branch}"*.json | ext-timing-dict "$preset"
    cat "all-benchmarks-${after_branch}"*.json | ext-timing-dict "$preset"
} | jq --slurp . | tabulate-ext-timing-diff
```
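The diff logic embedded in the script is easy to check in isolation. A small sketch with made-up timings; the two dicts stand in for the two objects that `ext-timing-dict` would emit:

```python
# Made-up per-project timings in seconds, shaped like ext-timing-dict output.
before = {"brink": 12, "uniswap": 95, "zeppelin": 57}
after = {"brink": 10, "uniswap": 98, "zeppelin": None}  # one benchmark failed

def time_diff(before_value, after_value):
    # A missing measurement on either side makes the diff meaningless.
    if before_value is None or after_value is None:
        return None
    return after_value - before_value

for project in before:
    print(project, time_diff(before[project], after.get(project)))
```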
Gathering info about compilation time in external tests to create a comparison like #14909 (comment) is incredibly time consuming. I have to navigate to 10-15 CI pages, locate the `time` output in the long CI log for each, copy it and manually format it into something human-readable, like a table. This PR is the first step toward automating this somewhat.

Now that info will be present in the combined JSON report from all external tests and I'll be able to pull it out with a simple script. This is just the bare minimum to make it less annoying for me. The extra data is not yet processed by the scripts that format gas tables or diff them. For now I'm planning to create a quick, hacky script to do further processing but if it's reusable enough I might submit it in a follow-up PR.
The new info can be found in the `reports/externalTests/all-benchmarks.json` artifact of the `c_ext_benchmarks` job, under the `<project>.<preset>.compilation_time` keys. It's also present in the same form in individual reports attached as artifacts to each parallel run of each external test job.
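Reading those keys is straightforward; chaining `dict.get` tolerates projects or presets that are absent from a given run. A hedged sketch (the helper name and the inline sample data are mine, not part of this PR; real reports hold many projects and presets):

```python
import json

# Minimal stand-in for the reports/externalTests/all-benchmarks.json artifact.
report = json.loads(
    '{"ens": {"ir-optimize-evm+yul": {"compilation_time": {"user": 42.0, "system": 1.5}}}}'
)

def compilation_time(report, project, preset, field="user"):
    # Walks <project>.<preset>.compilation_time.<field>; None if any key is missing.
    return (
        report.get(project, {})
        .get(preset, {})
        .get("compilation_time", {})
        .get(field)
    )

print(compilation_time(report, "ens", "ir-optimize-evm+yul"))      # 42.0
print(compilation_time(report, "uniswap", "ir-optimize-evm+yul"))  # None
```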