Set up benchmark CI #11

Merged · 1 commit · Dec 23, 2020
38 changes: 38 additions & 0 deletions .github/workflows/Benchmark.yml
@@ -0,0 +1,38 @@
name: Run benchmarks

on:
  pull_request:
    types: [labeled, opened, synchronize, reopened]
  workflow_dispatch:
jobs:
  Benchmark:
    runs-on: ubuntu-latest
    if: contains(github.event.pull_request.labels.*.name, 'run benchmark')
    steps:
      - uses: actions/checkout@v2
      - uses: julia-actions/setup-julia@latest
      - name: Cache artifacts
        uses: actions/cache@v1
        env:
          cache-name: cache-artifacts
        with:
          path: ~/.julia/artifacts
          key: ${{ runner.os }}-test-${{ env.cache-name }}-${{ hashFiles('**/Project.toml') }}
          restore-keys: |
            ${{ runner.os }}-test-${{ env.cache-name }}-
            ${{ runner.os }}-test-
            ${{ runner.os }}-
      - name: Install dependencies
        run: |
          julia --project=./benchmark -e '
            using Pkg;
            Pkg.develop(PackageSpec(path=pwd()));
            Pkg.develop(PackageSpec(path=joinpath(pwd(), "CheckedArithmeticCore")));
            Pkg.instantiate();
          '
      - name: Run benchmarks
        run: julia --project=./benchmark -e 'using BenchmarkCI; BenchmarkCI.judge(project="benchmark")'
      - name: Post results
        run: julia --project=./benchmark -e 'using BenchmarkCI; BenchmarkCI.postjudge()'
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
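The last two steps can also be previewed locally before pushing. A minimal sketch in Julia, run from the repository root (assumes the ./benchmark environment has been instantiated as in the "Install dependencies" step; posting the PR comment needs a GITHUB_TOKEN and is therefore left to CI, and displayjudgement is assumed here as the local counterpart of postjudge):

# Sketch: run the same comparison locally and print the report instead of posting it.
using BenchmarkCI
BenchmarkCI.judge(project = "benchmark")   # benchmark HEAD against the baseline
BenchmarkCI.displayjudgement()             # print the judgement report to stdout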
1 change: 1 addition & 0 deletions .github/workflows/UnitTest.yml
@@ -27,6 +27,7 @@ jobs:
        with:
          version: ${{ matrix.julia-version }}
          arch: ${{ matrix.julia-arch }}
          show-versioninfo: true
      - run: julia --project -e 'using Pkg; Pkg.develop([PackageSpec(path="CheckedArithmeticCore")])'

      - name: Cache artifacts
30 changes: 29 additions & 1 deletion .gitignore
@@ -1,3 +1,31 @@
# Files generated by invoking Julia with --code-coverage
*.jl.cov
*.jl.*.cov

# Files generated by invoking Julia with --track-allocation
*.jl.mem

# System-specific files and directories generated by the BinaryProvider and BinDeps packages
# They contain absolute paths specific to the host computer, and so should not be committed
deps/deps.jl
deps/build.log
deps/downloads/
deps/usr/
deps/src/

# Build artifacts for creating documentation generated by the Documenter package
docs/build/
docs/site/

# File generated by Pkg, the package manager, based on a corresponding Project.toml
# It records a fixed state of all packages used by the project. As such, it should not be
# committed for packages, but should be committed for applications that require a static
# environment.
Manifest.toml

# Files generated by BenchmarkCI
/.benchmarkci
/benchmark/*.json

.DS_Store
/Manifest.toml
/dev/
6 changes: 6 additions & 0 deletions benchmark/Project.toml
@@ -0,0 +1,6 @@
[deps]
BenchmarkCI = "20533458-34a3-403d-a444-e18f38190b5b"
BenchmarkTools = "6e4b80f9-dd63-53aa-95a3-0cdb28fa8baf"
CheckedArithmetic = "2c4a1fb8-30c1-4c71-8b84-dff8d59868ee"
CheckedArithmeticCore = "740b204e-26e5-40b1-866a-9c367e60c4b6"
PkgBenchmark = "32113eaa-f34f-5b0d-bd6c-c81e245fc73d"
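Because the benchmark project depends on the local packages rather than registered versions, setting it up on a development machine can mirror the workflow's "Install dependencies" step. A rough sketch, run in Julia from the repository root:

# Sketch: activate the benchmark environment and develop the local packages,
# mirroring the "Install dependencies" step of Benchmark.yml above.
using Pkg
Pkg.activate("benchmark")
Pkg.develop(PackageSpec(path = pwd()))                                    # CheckedArithmetic itself
Pkg.develop(PackageSpec(path = joinpath(pwd(), "CheckedArithmeticCore"))) # the core sub-package
Pkg.instantiate()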
31 changes: 31 additions & 0 deletions benchmark/benchmarks.jl
@@ -0,0 +1,31 @@
using CheckedArithmeticCore
using BenchmarkTools
using Base.Checked # TODO: re-export

BenchmarkTools.DEFAULT_PARAMETERS.seconds = 1

xs = Dict{Type, Matrix}()
ys = Dict{Type, Matrix}()
zs = Dict{Type, Matrix}()

eltypes = (Int8, UInt8, Int16, UInt16, Int32, UInt32)
for T in eltypes
    push!(xs, T => rand(T, 1000, 1000))
    push!(ys, T => rand(T, 1000, 1000))
    push!(zs, T => zeros(T, 1000, 1000))
end

SUITE = BenchmarkGroup()
SUITE["add"] = BenchmarkGroup([],
"wrapping" => BenchmarkGroup(),
"saturating" => BenchmarkGroup(),
"checked" => BenchmarkGroup(),
)

for T in eltypes
    x = xs[T]::Matrix{T}
    y = ys[T]::Matrix{T}
    z = zs[T]::Matrix{T}
    t = string(T)
    SUITE["add"]["checked"][t] = @benchmarkable checked_add.($x, $z)
end
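For quick iteration on benchmarks.jl itself, the suite can also be run directly with BenchmarkTools, skipping BenchmarkCI's baseline comparison. A small sketch (assumes the benchmark environment above is active and that the include path is relative to the repository root):

# Sketch: load and run SUITE directly while editing the benchmarks.
using BenchmarkTools
include("benchmark/benchmarks.jl")    # defines SUITE
tune!(SUITE)                          # choose sample/evaluation counts
results = run(SUITE; verbose = true)  # run every group in the suite
show(results["add"]["checked"])       # inspect the checked-add results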