CI integration #190
So, an update on this: I got a CI/CD solution up and running using TeamCity, currently running on my home server. I can share access with the AOS team via Matrix, and if you like it, feel free to migrate the settings over.

Current implementation

Although everything seems to be working as intended, there are a few points I need some feedback on:
Organizational changes needed

The reasoning is that currently the …

This is all for the Continuous Integration workflow. Sanity check builds should still be done infrequently (weekly or more) or on demand, since these take about …

EDIT: …
Meta-qt5 uses SailfishOS repositories. A while back, SailfishOS transferred all their repositories from their own hosting to GitHub, which required a change in meta-qt5. That change was, however, pushed to the master branch instead of the honister branch. Using the master branch directly can cause compatibility issues with the build system, since it is updated more frequently to support the latest Yocto version. This is why we pinned it to a specific commit (the relevant commit is bd194d1).

EDIT: It seems that …
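For illustration, pinning a layer like this could happen in the checkout step; a minimal sketch, assuming the layer lives under src/meta-qt5 and is cloned from a GitHub remote (both the path and the URL are assumptions, not the project's actual setup):

```sh
# Hypothetical pinning step: clone the layer if it is missing, then
# check out the known-good commit mentioned above (bd194d1).
# The remote URL and the src/meta-qt5 path are assumptions.
if [ ! -d src/meta-qt5 ]; then
    git clone https://github.com/sailfishos/meta-qt5.git src/meta-qt5
fi
git -C src/meta-qt5 fetch origin
git -C src/meta-qt5 checkout bd194d1
```

Checking out a fixed commit instead of a branch means later changes on master cannot break builds until the pin is deliberately moved.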
I want to share and discuss some ideas on how a CI could be implemented for AsteroidOS. I have no Docker experience, and most of this assumes the builds run on local runners.
Use case

Recently I found issues with the `catfish` recipe due to incompatible component versions. This was not caught because of caching when the new recipe was created. A CI that can build all of the configurations from scratch and occasionally check the heads of all of the dependent components would catch this kind of issue. Additionally, it could be used to manage a public `SSTATE_CACHE` server so others can contribute more easily.

Design
The design here would be to use two CIs: a slow `SSTATE_CACHE` creator and a regular CI.

`SSTATE_CACHE` CI

- `bitbake` supports TeamCity UI output, and there are no limitations on the runtime of the builds.
- The resulting caches can be uploaded with `rsync` and SSH keys (see the sketch after this list). Relevant docs about the `SSTATE_CACHE` server here and here.
- These can be hosted e.g. at sstate-cache.asteroid.org as a simple file server. It should also be possible to distribute them via S3, but there are no clear guides on how to do so.
- Each run can either sync the `SSTATE_DIR` folder or directly overwrite the hosted ones.
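A minimal sketch of that upload step, assuming the sstate-cache.asteroid.org host from above; the SSH user, key, and target path are invented for illustration, not an agreed-upon setup:

```sh
# Push the locally built sstate cache to the public file server.
# The SSH user, key, and target path are assumptions for this sketch.
# --delete makes the server mirror the local SSTATE_DIR exactly,
# i.e. the "directly overwrite the hosted ones" option; drop it for
# an additive sync instead.
rsync -av --delete \
    -e "ssh -i ~/.ssh/sstate_upload_key" \
    "${BUILDDIR}/sstate-cache/" \
    ci@sstate-cache.asteroid.org:/srv/sstate-cache/
```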
Regular CI

- Pulls prebuilt artifacts from the `SSTATE_CACHE` server via `SSTATE_MIRRORS` in the configuration.

Changes to be made
Not many changes are needed to get this running, apart from setting up the CIs and the file server:

- Add the `SSTATE_MIRRORS` option to `local.conf`, unless a flag of `--from-scratch` is passed (sketched after this list).
- Extend `prepare_build.sh` so that it can fetch specific remotes and branches of each component.
- Integrate `bitbake`'s TeamCity UI output.
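A sketch of the first item, assuming the flag is handled in `prepare_build.sh` and reusing the mirror host from above (the script structure and URL are assumptions; `--from-scratch` is the flag proposed here):

```sh
# Hypothetical addition to prepare_build.sh: point bitbake at the
# shared cache unless a from-scratch build was requested.
if [ "$1" != "--from-scratch" ]; then
    cat >> "$BUILDDIR/conf/local.conf" << 'EOF'
SSTATE_MIRRORS ?= "file://.* https://sstate-cache.asteroid.org/PATH;downloadfilename=PATH"
EOF
fi
```

The `file://.* <url>/PATH` form is the documented Yocto syntax for an sstate mirror; bitbake substitutes `PATH` with each cache object's relative path.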
Let me know what you guys think. I've done quick tests on a local instance of TeamCity and it seems to handle it OK.