
CI integration #190

Open
LecrisUT opened this issue Feb 1, 2022 · 3 comments

Comments


LecrisUT commented Feb 1, 2022

I want to share and discuss some ideas on how a CI could be implemented for AsteroidOS. I do not have Docker experience, so most of this assumes the builds run on local (self-hosted) runners.

Use case

Recently I found issues with the catfish recipe due to incompatible component versions. This was not caught when creating the new recipe because of caching. A CI that builds all configurations from scratch and occasionally checks the heads of all dependent components would catch such issues. Additionally, it could manage a public SSTATE_CACHE server so that others can contribute more easily.

Design

The design uses two CIs: a slow SSTATE_CACHE creator and a regular CI.

SSTATE_CACHE CI

  • Example of a compatible CI service: self-hosted TeamCity is free for 3 build agents and 100 build configurations (this could be extended by applying for a FOSS license), and bitbake supports TeamCity UI output. There are no limitations on build runtime.
  • It has to have write access to a file server, e.g. using rsync and SSH keys. Relevant docs about the SSTATE_CACHE server here and here. The cache could be hosted e.g. at sstate-cache.asteroid.org as a simple file server. It should also be possible to distribute it via S3, but there are no clear guides on how to do so.
  • Managed locally by Asteroid, similar to how the current nightly build server is managed.
  • Builds each configuration (device) from scratch when:
    • A new asteroid version is pushed (see further notes below)
    • A new major/minor version of a component (e.g. Qt) is pushed
  • This CI would also bump the components to their branch heads and check for any incompatibilities between them and the asteroid recipes
  • Upon a successful build, either package the SSTATE_DIR folder or directly overwrite the hosted one (see the upload sketch after this list).
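
A minimal sketch of the upload step, assuming the cache lives in build/sstate-cache locally and is published to sstate-cache.asteroid.org (host, user and paths are placeholders):

```sh
#!/bin/sh
# Hypothetical upload step for the SSTATE_CACHE CI; host, user and paths are placeholders.
set -e

SSTATE_DIR="${SSTATE_DIR:-build/sstate-cache}"
REMOTE="ci@sstate-cache.asteroid.org:/srv/sstate-cache/"

# --delete keeps the public mirror identical to the freshly rebuilt cache;
# drop it if incremental updates are preferred over full overwrites.
rsync -av --delete -e "ssh -i ~/.ssh/sstate_ci_key" "${SSTATE_DIR}/" "${REMOTE}"
```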

Regular CI

  • Uses the public SSTATE_CACHE server via SSTATE_MIRRORS in the configuration (example below)
  • Any CI service would be compatible, including an individual user's personal setup
  • Triggered by regular PRs to the meta repositories
  • This CI packages and distributes the nightly builds
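
For reference, a hedged example of the SSTATE_MIRRORS entry in local.conf, assuming the cache is served over HTTPS at sstate-cache.asteroid.org (the URL is a placeholder):

```
# local.conf (sketch): fetch shared state from the public cache before building locally.
# PATH is a literal token that bitbake expands to the per-object path inside the cache.
SSTATE_MIRRORS ?= "file://.* https://sstate-cache.asteroid.org/PATH;downloadfilename=PATH"
```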

Changes to be made

Not many changes are needed to get this running, apart from setting up the CIs and the file server:

  • Add an SSTATE_MIRRORS entry to local.conf, unless a --from-scratch flag is passed.
  • Turn the metas and components into git submodules, and tag releases of AsteroidOS as a whole. This is mostly for aesthetic reasons, but some CIs also have tools to track individual submodules. Alternatively, improve prepare-build.sh so that it can fetch specific remotes and branches of each component.
  • Provide and test build instructions that do not rely on sourcing the script interactively, e.g. I suspect that breaks bitbake's TeamCity UI output (see the sketch after this list).
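
A rough sketch of a non-interactive CI build step combining the two points above; the --from-scratch handling and paths are assumptions, and it presumes that sourcing prepare-build.sh leaves the shell in the build directory (as oe-init-build-env does):

```sh
#!/bin/bash
# Illustrative CI build step; flag names and paths are assumptions, not existing options.
set -e

MACHINE="${1:-qemux86}"
FROM_SCRATCH="${2:-}"

# Set up the bitbake environment (sourced, as in the manual build instructions).
. ./prepare-build.sh "${MACHINE}"

# Only point at the public cache for regular (incremental) CI runs.
if [ "${FROM_SCRATCH}" != "--from-scratch" ]; then
    echo 'SSTATE_MIRRORS ?= "file://.* https://sstate-cache.asteroid.org/PATH;downloadfilename=PATH"' >> conf/local.conf
fi

bitbake asteroid-image
```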

Let me know what you guys think. I've done quick tests on a local instance of TeamCity and it seems to handle things fine.


LecrisUT commented Mar 5, 2022

An update on this: I got a CI/CD solution up and running using TeamCity, currently hosted on my home server. I can share access with the AOS team on Matrix, and if you like it, feel free to migrate the settings over.

Current implementation

  • Device-specific projects:
    • Rebuild all of asteroid-image from scratch and upload to an sstate-cache server (message me on Matrix for the URL)
      • Triggered by a weekly schedule
      • Triggered by commit messages containing [TC-Rebuild] or [TC-Rebuild:$MACHINE] (see the sketch after this list)
    • Build the image from the sstate-cache server
      • Triggered by any commit to meta-$MACHINE-hybris
    • Device-specific packages should be possible, but there are no examples to configure yet
  • Package projects (currently for MACHINE=qemux86):
    • Build the package from the sstate-cache server
      • Currently triggered by any commit to the dependent git roots
      • Needs to be edited manually to catch the appropriate dependencies
  • Migrate all repositories
  • Tests
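
For CI services without comment-based trigger rules, roughly the same behaviour can be approximated inside a build step; a sketch (only the tag format above is taken from the setup, the rest is illustrative):

```sh
#!/bin/sh
# Decide whether the latest commit requests a from-scratch rebuild, and for which machine.
# Tag format (from the trigger settings above): [TC-Rebuild] or [TC-Rebuild:$MACHINE].
MSG="$(git log -1 --pretty=%B)"

case "${MSG}" in
    *"[TC-Rebuild:"*)
        MACHINE="$(printf '%s' "${MSG}" | sed -n 's/.*\[TC-Rebuild:\([^]]*\)\].*/\1/p')"
        echo "Rebuilding from scratch for MACHINE=${MACHINE}"
        ;;
    *"[TC-Rebuild]"*)
        echo "Rebuilding from scratch for all machines"
        ;;
    *)
        echo "No rebuild tag found; building against the sstate cache"
        ;;
esac
```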

Comments

Although everything seems to be working as intended, there are a few points I would like some feedback on:

  • This line in prepare-build.sh indicates that a specific meta-qt5 commit is used/preferred. Why is that exactly? I am using HEAD for the CI and so far it has not given me any errors.
  • I don't know exactly how to pin a specific commit in TeamCity, but it should be doable with a manual git command in the script.
  • Are there any cross-dependencies between the device metas? Currently each device project only checks out its own meta.
  • Can we add a hello-world example that includes bitbake recipes? For the production case, meta-asteroid and meta-asteroid-community work well to catch the recipes, but it would also be useful for the CI to have a fully independent example so that users can test it on their own CI server (a minimal sketch follows).
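
Something like the following minimal recipe could serve as that independent example; this is just a sketch following standard openembedded conventions, not an existing AsteroidOS recipe (it assumes a trivial hello.c placed next to the recipe):

```
# hello-ci_1.0.bb (sketch): trivial recipe for exercising the CI pipeline end to end.
SUMMARY = "Hello-world smoke test for the CI"
LICENSE = "MIT"
LIC_FILES_CHKSUM = "file://${COMMON_LICENSE_DIR}/MIT;md5=0835ade698e0bcf8506ecda2f7b4f302"

SRC_URI = "file://hello.c"
S = "${WORKDIR}"

do_compile() {
    ${CC} ${CFLAGS} ${LDFLAGS} hello.c -o hello
}

do_install() {
    install -d ${D}${bindir}
    install -m 0755 hello ${D}${bindir}
}
```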


LecrisUT commented Mar 6, 2022

Organizational changes needed

  • A skeleton machine based on the CPU architecture, from which each smartwatch inherits
  • Separate the asteroid layer into:
    • asteroid-base, containing all backend recipes that meta-smartwatch builds upon (layer priority 7)
    • asteroid-core, containing all basic applications (maybe just the frontends?) independent of any smartwatch (layer priority 9, or maybe 8?)
  • Reorder the meta-smartwatch layer to priority 8, similar to the others
  • Add a target that builds everything up to layer 8, and if possible also one for layer 7 only (see the layer.conf sketch below)
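
In layer.conf terms the split would look roughly like this; the layer and collection names come from the proposal above, the rest is standard layer boilerplate and only illustrative:

```
# meta-asteroid-base/conf/layer.conf (sketch)
BBPATH .= ":${LAYERDIR}"
BBFILES += "${LAYERDIR}/recipes-*/*/*.bb ${LAYERDIR}/recipes-*/*/*.bbappend"

BBFILE_COLLECTIONS += "asteroid-base"
BBFILE_PATTERN_asteroid-base = "^${LAYERDIR}/"
BBFILE_PRIORITY_asteroid-base = "7"

# Analogously: each meta-smartwatch device layer at priority 8,
# and meta-asteroid-core at priority 9.
```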

The reasoning is that the sstate-cache for a single machine is currently about 4 GB, which is too large to manage properly. Mixing caches has caused some reproducibility issues, which is what started this whole CI journey. The CI workflow would be: build layer 7 using the skeleton machines, then build layer 8 for each watch and layer 9 for each app and skeleton machine (assuming there are no watch-specific apps; those would live in meta-smartwatch), then build asteroid-image for each watch. Between each of these 3 steps, the sstate-cache is updated with the relevant builds, each one stored in skeleton/sstate-cache or smartwatch/sstate-cache (with the relevant name substituted) on the server.

This is all for the continuous integration workflow. Sanity-check builds should still be done infrequently (weekly or at longer intervals) or on demand, since these take about 3 h apiece, compared to 6 min using the cache on the same machine.

EDIT: Tests initially indicated that more than one SSTATE_MIRRORS entry does not resolve properly, i.e. if any of them fails they all fail, but that was a false alarm: those failures were natives.


MagneFire commented Mar 6, 2022

> This line in prepare-build.sh indicates that a specific meta-qt5 is used/preferred. Why is that exactly?

Meta-qt5 uses SailfishOS repositories. A while back SailfishOS moved all their repositories from their own hosting to GitHub, which required a change in meta-qt5. That change was, however, pushed to the master branch instead of the honister branch. Using the master branch directly can cause compatibility issues with the build system, since it is frequently updated to support the latest Yocto version. This is why we pinned it to a specific commit (the relevant commit: bd194d1).

edit: It seems that meta-qt5 has updated the honister branch (https://github.com/meta-qt5/meta-qt5/commits/honister). So it should be fine to move back to the honister branch.
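
Regarding the pinning question above: if the pin is still wanted (per the edit it may no longer be needed), a manual git step in the CI script is enough. A sketch, assuming meta-qt5 is checked out at src/meta-qt5 (the path is an assumption) and using the commit mentioned above:

```sh
# Pin meta-qt5 to the known-good commit instead of tracking a branch head.
git -C src/meta-qt5 fetch origin
git -C src/meta-qt5 checkout --detach bd194d1
```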
