Only test ones we know will succeed #1628

Merged: 3 commits, Sep 13, 2024
18 changes: 7 additions & 11 deletions .github/workflows/network-test.yaml
@@ -29,7 +29,8 @@ jobs:
         # Currently this is just a label and does not have any functional impact.
         peers: [3]
         scaling_factor: [10, 50]
-        netem_loss: [0, 1, 2, 3, 4, 5, 10, 20]
+        # Note: We only put here the configuration values we _expected to pass_.
+        netem_loss: [0, 1, 2, 3]
     name: "Peers: ${{ matrix.peers }}, scaling: ${{ matrix.scaling_factor }}, loss: ${{ matrix.netem_loss }}"
     steps:
       - uses: actions/checkout@v4
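
The trimmed matrix now expands to 1 x 2 x 4 = 8 jobs (peers x scaling_factor x netem_loss), down from 16. The alternative alluded to in the removed comment in the last hunk would be to keep the full loss list and prune known-bad combinations with matrix 'exclude' directives. A hypothetical sketch, not part of this PR (the excluded combination is made up for illustration):

    strategy:
      matrix:
        peers: [3]
        scaling_factor: [10, 50]
        netem_loss: [0, 1, 2, 3, 4, 5, 10, 20]
        exclude:
          # Drop individual combinations instead of whole loss values.
          - scaling_factor: 50
            netem_loss: 20

As the removed comment explains, the authors abandoned this approach: which combinations fail depends on scaling factor, peers, and loss together, which is too fiddly to track entry by entry.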
@@ -61,6 +62,8 @@ jobs:

       - name: Setup containers for network testing
         run: |
+          set -exo pipefail
+
           cd demo
           ./prepare-devnet.sh
           docker compose up -d cardano-node
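
The added `set -exo pipefail` hardens the script: `-e` aborts on the first failing command, `-x` traces each command into the log, and `-o pipefail` makes a pipeline fail when any stage fails rather than only the last. A minimal illustration (not from the PR) of what pipefail changes:

      - name: Pipefail illustration
        run: |
          set -exo pipefail
          # Without pipefail this pipeline would "succeed" because the last
          # stage (tee) exits 0 even though the first stage fails; with it,
          # the pipeline returns the failing status and -e aborts the step.
          false | tee /dev/null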
@@ -73,9 +76,10 @@ jobs:
             --node-socket devnet/node.socket \
             --cardano-signing-key devnet/credentials/faucet.sk)

-          echo $HYDRA_SCRIPTS_TX_ID >> .env
+          echo "HYDRA_SCRIPTS_TX_ID=$HYDRA_SCRIPTS_TX_ID" > .env
+
-          nix run .#cardano-cli query protocol-parameters \
+          nix run .#cardano-cli -- query protocol-parameters \
             --testnet-magic 42 \
             --socket-path devnet/node.socket \
             --out-file /dev/stdout \
             | jq ".txFeeFixed = 0 | .txFeePerByte = 0 | .executionUnitPrices.priceMemory = 0 | .executionUnitPrices.priceSteps = 0" \
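
Two small fixes land in this hunk. Writing `HYDRA_SCRIPTS_TX_ID=$HYDRA_SCRIPTS_TX_ID` with `>` produces a well-formed one-entry .env file; the old `echo $HYDRA_SCRIPTS_TX_ID >> .env` appended a bare transaction id with no variable name, which nothing downstream could load as an environment variable. And `--` tells `nix run` that everything after it belongs to the program being run (cardano-cli) rather than to nix itself; without it, nix attempts to interpret the remaining words on its own instead of forwarding them. Spelled out as a standalone step (illustration only):

      - name: Dotenv and nix-run argument passing
        run: |
          # '>' truncates and writes a single KEY=VALUE line that tools like
          # docker compose (or a plain 'source .env') can read back.
          echo "HYDRA_SCRIPTS_TX_ID=$HYDRA_SCRIPTS_TX_ID" > .env

          # Arguments after '--' are forwarded verbatim to cardano-cli.
          nix run .#cardano-cli -- query protocol-parameters --testnet-magic 42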
@@ -103,14 +107,6 @@ jobs:
           limit-access-to-actor: true

       - name: Run pumba and the benchmarks
-        # Note: We're going to allow everything to fail. In the job on GitHub,
-        # we will be able to see which ones _did_, in fact, fail. Originally,
-        # we were keeping track of our expectations with 'include' and
-        # 'exclude' directives here, but I think it's best to leave those out,
-        # as some of the tests (say 5%) fail, and overall the conditions of
-        # failure depend on the scaling factor, the peers, etc, and it becomes
-        # too complicated to track here.
-        continue-on-error: true
         run: |
           # Extract inputs with defaults for non-workflow_dispatch events
           percent="${{ matrix.netem_loss }}"
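
With `continue-on-error: true` removed, a failing benchmark now fails the job again, which fits a matrix that only runs configurations expected to pass. The rest of this run block is collapsed in the diff; a hypothetical sketch of how the extracted `percent` might feed pumba's netem loss command (the container name and duration are made up, and the workflow's actual pumba invocation is not shown in the PR):

      - name: Run pumba and the benchmarks
        run: |
          percent="${{ matrix.netem_loss }}"
          # Drop the given percentage of outgoing packets on the node's
          # container for the duration, while the benchmark runs alongside.
          pumba netem --duration 10m loss --percent "$percent" hydra-node-1 &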