
Conversation


@wileyj wileyj commented Oct 13, 2025

ref #6556

cc @hugoclrd

This change adds a second architecture to the manually built Docker image. The workflow will now build both amd64 and arm64 variants, at the expense of taking quite a long time: in a single test run, it took about 120 minutes.

This could be sped up by throwing hardware at it, but I also have another idea for a longer-term fix that this workflow would benefit from.
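For context, a two-arch build like this is typically driven by `docker buildx`. A minimal sketch of the idea, with a placeholder image name and assuming a Dockerfile at the repo root (this is not the actual workflow from this PR):

```shell
# Sketch only: builder name, image name, and tag are placeholders,
# not taken from this repository's workflow.

# Create (or reuse) a builder instance that supports multi-platform builds.
docker buildx create --name multiarch --use || docker buildx use multiarch

# Build amd64 and arm64 in one invocation and push a single
# multi-platform manifest. On amd64 runners the arm64 half is usually
# emulated via QEMU, which is why such runs can take a couple of hours.
docker buildx build \
  --platform linux/amd64,linux/arm64 \
  --tag example/stacks-node:branch-test \
  --push \
  .
```

Native arm64 runners (or building each arch separately and assembling the manifest afterwards) are the usual ways to cut that build time.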

@wileyj wileyj requested review from a team as code owners October 13, 2025 22:33
@wileyj wileyj linked an issue Oct 13, 2025 that may be closed by this pull request
Contributor

@fdefelici fdefelici left a comment


Maybe a naive consideration, but I'll share it anyway 🙂

Using this solution (and the improved approach described in the ticket) for our release process makes total sense.

However, for images that are built only for testing purposes, we might consider skipping heavy steps (like the full test suite) to save time. Since builds coming from develop (and master) should already be stable thanks to our CI process, a lightweight build path could be sufficient for testing scenarios.

@wileyj
Collaborator Author

wileyj commented Oct 14, 2025

> However, for images that are built only for testing purposes, we might consider skipping heavy steps (like the full test suite) to save time. Since builds coming from develop (and master) should already be stable thanks to our CI process, a lightweight build path could be sufficient for testing scenarios.

For this specific workflow, that is indeed the case - this workflow is designed to run only on demand, by itself, and it outputs a Docker image built from the branch it was run on.

It's out of scope for this change, but we can discuss what I think your other point is about the release workflows - that there's no need to re-run the same tests? If so, I've had the same idea, but as usual it's not that simple: we could disable some tests, but we'd still need to build the test harness for tests that only run in release workflows (the Atlas tests, for example).
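To make the discussion concrete, gating the heavy steps could look something like this hypothetical workflow fragment (the input name and job names are made up for illustration, not taken from the actual workflows):

```yaml
# Hypothetical sketch: expose a switch for the heavy suite on manual runs.
on:
  workflow_dispatch:
    inputs:
      run-full-tests:
        description: "Run the full test suite before building the image"
        type: boolean
        default: false

jobs:
  full-tests:
    if: ${{ inputs.run-full-tests }}
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      # ... run the heavy suite here ...

  build-image:
    # Run whether full-tests ran or was skipped, but not if it failed.
    needs: full-tests
    if: ${{ !cancelled() && needs.full-tests.result != 'failure' }}
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      # ... build and push the image here ...
```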

@fdefelici
Contributor

> For this specific workflow, that is indeed the case - this is a workflow designed to only run on demand, by itself, and outputs a Docker image from the branch it was built from.

Thanks for clarifying; I was misled by the word "testrun" in the PR description, and now I realize I completely read it wrong :)

> It's out of scope for this change, but we can discuss what I think your other point is around the release workflows - that there's no need to re-run the same tests? If that's the case, I've also had the same idea, but as usual it's not that simple (we could disable some tests, but we'd still need to build the test harness for some tests that only run on release workflows. The Atlas tests, for example).

Well, this could also apply to the release side, but it really depends on what we want to achieve.
If we want to be pedantic, it would be correct to re-run all the tests, even if that code has already been covered by unit and integration tests.
At the same time, it might also be reasonable to run only the release-related tests and save time.
In this case, I don’t think there’s a single “correct” answer; it really depends on our goals!

@wileyj
Collaborator Author

wileyj commented Oct 22, 2025

> For this specific workflow, that is indeed the case - this is a workflow designed to only run on demand, by itself, and outputs a Docker image from the branch it was built from.

> Thanks for clarifying; I was misled by the word "testrun" in the PR description, and now I realize I completely read it wrong :)

> It's out of scope for this change, but we can discuss what I think your other point is around the release workflows - that there's no need to re-run the same tests? If that's the case, I've also had the same idea, but as usual it's not that simple (we could disable some tests, but we'd still need to build the test harness for some tests that only run on release workflows. The Atlas tests, for example).

> Well, this could also apply to the release side, but it really depends on what we want to achieve. If we want to be pedantic, it would be correct to re-run all the tests, even if that code has already been covered by unit and integration tests. At the same time, it might also be reasonable to run only the release-related tests and save time. In this case, I don’t think there’s a single “correct” answer; it really depends on our goals!

Fair point about re-running tests, but that's out of scope for this PR. It's something I've considered as well for the release workflows. Since I have ongoing work there (which the issue referenced in this PR would benefit from), I can create a sub-issue (sub-issue in this parent: #6601).

@wileyj wileyj added this pull request to the merge queue Oct 23, 2025
Merged via the queue into stacks-network:develop with commit b34a70f Oct 23, 2025
302 of 307 checks passed
@wileyj wileyj deleted the chore/6556 branch October 23, 2025 18:52
@codecov

codecov bot commented Oct 23, 2025

Codecov Report

✅ All modified and coverable lines are covered by tests.
✅ Project coverage is 79.98%. Comparing base (483f1ad) to head (3b862b6).
⚠️ Report is 130 commits behind head on develop.

Additional details and impacted files
@@             Coverage Diff              @@
##           develop    #6595       +/-   ##
============================================
+ Coverage    69.88%   79.98%   +10.09%     
============================================
  Files          568      571        +3     
  Lines       347547   351593     +4046     
============================================
+ Hits        242887   281209    +38322     
+ Misses      104660    70384    -34276     

see 313 files with indirect coverage changes






Development

Successfully merging this pull request may close these issues.

Publish multi-platform Docker from develop
