A Jenkins Pipeline Shared Library allows re-using code blocks in different pipelines, taking a Java class approach to data and logic implementation, as well as storing global static state data for the duration of a pipeline script execution.
Its source repo has a predefined structure and has to be registered on the Jenkins controller to be trusted; pipeline scripts can then include it like this:
@Library ('org.nut.dynamatrix') _
With the jenkinsci/pipeline-groovy-lib-plugin#19 feature used in the build of the @Library implementation running on the Jenkins server, it is also possible to request the "same branch of the library as the branch of the pipeline script", with a fallback to the configured default branch, to avoid superficial differences between pipeline code branches, like this:

@Library ('org.nut.dynamatrix@${BRANCH_NAME}') _

Keep in mind that this is not a Groovy variable; it just intentionally looks like one for simpler code maintenance. The @Library annotation is processed before compilation of the pipeline script to Groovy and Java.
Note (obsoleted, valid for unmodified Jenkins servers): for this reason, a dev/test version of the library code had to be named explicitly by dev/test versions of its consumers, like this:

@Library ('org.nut.dynamatrix@fightwarn') _
This project is intended as a relatively generic solution for Jenkins pipelines to build and test the codebase in a variety of environments (such as operating systems, compiler brands/versions or programming language revisions) and setups (such as the sets of warnings marked to be fatal vs. ones that are still tolerated) based dynamically on what capabilities are supported by the workers of the build farm.
These workers are not necessarily managed or provisioned by the same administrator or entity that manages the build farm, but can be e.g. a Swarm of specially labeled agents provided by contributors of a FOSS project — which is especially of interest to such users in order to have their less common system or setup supported by the FOSS project they like and use, or to contribute access to architectures that the project does not own.
For projects that have a minimal baseline of scenarios that must pass, this information can be conveyed by the configuration for the pipeline, so that dynamic workers matching those requirements can be spun up and later deactivated or destroyed to conserve run-time resources of the centralized build farm. This also allows builds of varied operating environments hosted on systems with limited memory to be serialized.
This library, or generally persistent CI farms, can also take advantage of https://github.com/jimklimov/git-refrepo-scripts and a (currently custom) build of Git Client Plugin with the changes proposed in PR jenkinsci/git-client-plugin#644 to maintain reference repositories on the CI server, and so reduce the traffic and time needed for future Git check-outs.
Dynamatrix provides a Shared Pipeline library, so that pipeline definitions in actual source code repositories can be kept compact and their Jenkinsfiles do not need updating whenever this idea evolves.
Build agent definitions, whether static (e.g. SSH Agents) or dynamic (e.g. Swarm Client agents) can declare labels which can be matched in the pipeline stages. The Dynamatrix is layered on top of structurally named labels to generate a list of possible build combinations as a boolean label expression, and so dynamically generate the payload structure of each build.
A sibling jenkins-swarm-nutci project is used on build agents which can dial in to the NUT CI farm and provide their services and capabilities (conveyed by the labels which are part of their configuration), and includes some scripting for common configuration approach as well as for automated start-up on numerous platforms. It might be usable "as is" or easily adapted by other projects.
Since the same CI farm and Swarm agent workers can be used for the benefit of several FOSS projects, one of the label patterns allows a worker to declare that it is suitable for (and contributes to) builds of a particular project.
This library is designed to provide a number of classes for larger pieces of code as separate compilation units (my first stint with the standard pipeline matrix{} failed due to the resulting method binary exceeding 64K) and as separate callable routines. While it may be helpful to directly use prepareDynamatrix() in some cases, many practical jobs would define a dynacfgBase with common dynamatrix properties for a suitable node and a dynacfgPipeline with certain fields, such as the closure(s) to run dozens of times, and pass those to a dynamatrixPipeline(dynacfgBase, dynacfgPipeline) call that implements the pipeline in a reusable fashion.
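A compact Jenkinsfile using this approach might look like the following sketch (the buildAndTest field name is a hypothetical illustration, not necessarily this library's actual API; commonLabelExpr and compilerType are as in the larger example below):

    @Library('org.nut.dynamatrix') _

    def dynacfgBase = [
        commonLabelExpr: 'nut-builder',
        compilerType: 'C',
    ]

    def dynacfgPipeline = [
        // Hypothetical field: the closure to run dozens of times across the matrix
        buildAndTest: {
            sh './autogen.sh && ./configure && make check'
        },
    ]

    // One reusable call implements the whole pipeline for this repository
    dynamatrixPipeline(dynacfgBase, dynacfgPipeline)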
In the illustrative example below, two workers are provisioned; based on their installed tools, they declare the following agent labels relevant to the dynamatrix:
- "testoi": COMPILER=CLANG COMPILER=GCC CLANGVER=8 CLANGVER=9 GCCVER=10 GCCVER=4.4.4 GCCVER=4.9 GCCVER=6 GCCVER=7 OS=openindiana ARCH=x86_64 ARCH=x86 nut-builder zmq-builder
- "testdeb": COMPILER=GCC GCCVER=4.8 GCCVER=4.9 GCCVER=5 GCCVER=7 OS=linux ARCH=x86_64 ARCH=x86 ARCH=armv7l nut-builder zmq-builder linux-kernel-builder
A Jenkinsfile in a project using this library would look like:
@Library ('dynamatrix') _

// Prepare parallel stages for the dynamatrix:
def parallelStages = prepareDynamatrix(
    commonLabelExpr: 'nut-builder',
    compilerType: 'C',
    compilerLabel: 'COMPILER',
    compilerTools: ['CC', 'CXX', 'CPP'],
    dynamatrixAxesLabels: ['OS', '${COMPILER}VER', 'ARCH'],
    dynamatrixAxesCommonEnv: [['LANG=C', 'TZ=UTC'], ['LANG=ru_RU']],
    dynamatrixAxesCommonOpts: [
        ['"CFLAGS=-stdc=gnu99" CXXFLAGS="-stdcxx=g++99"',
         '"CFLAGS=-stdc=c89" CXXFLAGS="-stdcxx=c++89"'],
        ['-m32', '-m64']
    ],
    dynamatrixRequiredLabelCombos: [[~/OS=bsd/, ~/CLANG=12/]],
    allowedFailure: [[~/OS=.*/, ~/GCCVER=4\..*/], ['ARCH=armv7l'], [~/std.*=.*89/]],
    excludeCombos: [[~/OS=openindiana/, ~/CLANGVER=9/, ~/ARCH=x86/]]
) {
    unstash 'preparedSource'

    sh """ ./autogen.sh """

    sh """ ${dynamatrix.commonEnv} CC=${dynamatrix.COMPILER.CC.VER} CXX=${dynamatrix.COMPILER.CXX.VER} ./configure ${dynamatrix.commonOpts} """

    sh """ make -j 4 """

    sh """ make check """
}

pipeline {
    // List some unique options, build parameters, and build scenarios
    // like spell checking or docs generation that you do not need to
    // run dozens of times using every agent.
    agent { label 'persistent-worker' }

    stages {
        stage('Prepare source') {
            steps {
                // The persistent agent may be in better position to
                // e.g. use a Git reference repository for faster
                // checkouts, or just to have the internet access
                // which CI farm workers may lack.
                checkout scm
                stash 'preparedSource'
            }
        }

        stage('Spell Check') {
            steps {
                sh """ aspell ... """
            }
        }

        stage('Make docs') {
            agent { label 'docs-builder' }
            steps {
                unstash 'preparedSource'
                sh """ make pdf """
            }
        }
    }
}

parallel parallelStages
With this configuration, the Dynamatrix should detect the running agents and know their capabilities, so it is in position to prepare a series of builds covering every available OS and compiler version and CPU architecture.
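For instance, with the agent labels above, one of the generated combinations could amount to a boolean label expression like this (an illustrative sketch of the concept, not literal library output):

    COMPILER=GCC && GCCVER=4.9 && OS=openindiana && ARCH=x86_64 && nut-builder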
This set of builds can optionally be filtered through constraints: that we do not even want to try building a combination described by (matching) the skip option; that we require some combination(s) to run even if an agent for them is not currently running, so their labels are not detected (such builds can hang in the queue waiting for a worker, or can cause a configured but dormant build agent to be spun up); or that certain build setups may fail (e.g. we wonder how they fare, but they are not a required baseline and so not blockers for a merge), so their results would not impact the overall job verdict.
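These constraints correspond to the prepareDynamatrix() options from the example above (values copied from it):

    excludeCombos: [[~/OS=openindiana/, ~/CLANGVER=9/, ~/ARCH=x86/]]   // do not even try these
    dynamatrixRequiredLabelCombos: [[~/OS=bsd/, ~/CLANG=12/]]          // queue even if no such agent is up now
    allowedFailure: [['ARCH=armv7l']]                                  // build it, but do not block the verdict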
For certain compiler toolkits (e.g. the C family) the library provides automatic preparation of variables for several same-versioned tools (e.g. C and C++ compilers).
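For example, the build closure in the example above consumes the prepared same-versioned pair like this:

    sh """ CC=${dynamatrix.COMPILER.CC.VER} CXX=${dynamatrix.COMPILER.CXX.VER} ./configure """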
Depending on their implementation and connectivity, build agents may have different preferences, and this library allows tuning those via node labels.
One such area is delivery of the tested codebase to the agents: the default approach, which is to stash on the controller and unstash on agents, should be reliable (it should reach agents that can not use the SCM platform directly, and should ensure that all agents test the same revision even if it disappears from the SCM platform, e.g. by a force-push to a PR), but at the cost of repetitive traffic and I/O to unstash the same code time and again (often on the same machine) during a matrix build.
Setting DYNAMATRIX_UNSTASH_PREFERENCE to scm-ws, scm or unstash in the individual agent labels allows that system to start with either an SCM checkout augmented by a persistent Git reference repository in the workspace, maintained during each run (this currently requires a custom build of the Jenkins Git Client plugin including the feature from jenkinsci/git-client-plugin#644, unless/until it gets properly merged); or a plain SCM checkout; or unstashing. These methods fall back from one to the next in the order listed above.
The DynamatrixStash methods dealing with code checkout, stashing and unstashing allow a concept of a stashName used to identify archives as well as to track metadata for that codebase (so the same pipeline can mix several repositories). Preferences for each repository can be tailored, using e.g. scm:githubProject and unstash:privateRepo label values to use different delivery methods for the two stashNames.
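For illustration, one agent could combine a general preference with per-stashName overrides in its labels; this sketch assumes the per-stashName values quoted above are carried by the same DYNAMATRIX_UNSTASH_PREFERENCE label (the stashNames githubProject and privateRepo are examples):

    DYNAMATRIX_UNSTASH_PREFERENCE=scm-ws DYNAMATRIX_UNSTASH_PREFERENCE=scm:githubProject DYNAMATRIX_UNSTASH_PREFERENCE=unstash:privateRepo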
To prevent several parallel jobs and build scenarios from corrupting the reference repository maintained in the workspace, maintenance of this location is protected by the Lockable Resources plugin. Since agents running with independent storage should not wait on each other, this lock can be tuned by setting a DYNAMATRIX_REFREPO_WORKSPACE_LOCKNAME label; note that agents which do indeed use the same storage (shared over NFS, or containers with the same homedirs from their host) should set identical values in their common lock name.
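For example, two containers sharing a homedir from one host could both set a label like this (the lock name value is hypothetical):

    DYNAMATRIX_REFREPO_WORKSPACE_LOCKNAME=host1-shared-home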
This is a Jenkins Shared Library. As such, it has some required file system structure including:
- vars/ — the "groovy variables" which are, at least in this context, sources for single-use class instances and their methods that can be called from each other or from the pipeline which uses the lib; as far as the pipeline is concerned, call() methods in these groovy files are custom "steps" (named the same as the file);
- src/ — formal classes, including ones that can be static, such as to store some persistent configuration for the run.
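For illustration, a hypothetical step and class (not part of this library) would look like:

    // vars/sayHello.groovy: call() makes this a custom "step" named sayHello
    def call(String name = 'world') {
        echo "Hello, ${name}!"
    }

    // src/org/example/RunSettings.groovy: a formal class, e.g. for persistent state
    package org.example
    class RunSettings {
        static String defaultBranch = 'master'
    }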
There are further standard structure points that we do not currently use, such as the location and naming of documentation to accompany the declared steps (so it can be displayed by the Jenkins UI), and the location for resources such as shell scripts and arbitrary data used by the JSL.
The approach recommended in some of the articles linked below is that the logic lives mostly (ideally all) in src/ classes, while the vars/ steps only wrap calls to that. Practice, especially during early development iterations, may be mixed.
- https://www.jenkins.io/blog/2020/10/21/a-sustainable-pattern-with-shared-library/ - it provides a useful pattern allowing a default configuration for a generic library build recipe implementation to be merged with options desired for a particular pipeline's build, including an OOP-style selection of build method based on files present in the specific repo (see the sketch after this list). This way whatever looks similar on some level of abstraction is handled the same way, and whatever really differs has the hooks and hacks for that individuality.
- https://github.com/jenkins-infra/pipeline-library/blob/master/vars/buildPlugin.groovy - this code orchestrating standard builds of Jenkins plugins manages a similar matrix, optionally based on build parameters
- https://bmuschko.com/blog/jenkins-shared-libraries/ - goes into the much welcome and somewhat gritty detail about using classes instead of "vars" used quickly as steps, which is what most of the other articles focus on
- https://www.linkedin.com/pulse/jenkins-shared-pipeline-libraries-custom-runtime-delgado-garrido - a pattern for configs in component sources that can tune behavior of otherwise standardized library pipelines and/or maintain a Singleton with config (and other) data during the run
- https://www.linkedin.com/pulse/jenkins-global-shared-pipeline-libraries-real-unit-delgado-garrido - and another pattern for keeping real logic hidden in classes, frontended by steps in the "vars" folder
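A minimal sketch of that default-merging pattern (the step and option names here are hypothetical):

    // vars/genericBuild.groovy: merge library defaults with per-pipeline overrides
    def call(Map userConfig = [:]) {
        def defaults = [buildTool: 'make', runTests: true]
        def config = defaults + userConfig   // keys from userConfig win
        node {
            checkout scm
            sh "${config.buildTool}"
            if (config.runTests) {
                sh "${config.buildTool} check"
            }
        }
    }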
Good explanatory articles with varied detail; many other texts seem to tell the same things differently while reasonably assuming a non-beginner level from the reader. Some of those below may be a bit too long, chewing the basics delicately — but sometimes that is really a good thing:
- https://www.lambdatest.com/blog/use-jenkins-shared-libraries-in-a-jenkins-pipeline/
- https://medium.com/@werne2j/jenkins-shared-libraries-part-1-5ba3d072536a
- https://medium.com/@werne2j/how-to-build-your-own-jenkins-shared-library-9dc129db260c
- https://medium.com/@werne2j/unit-testing-a-jenkins-shared-library-9bfb6b599748 - article about testing with Maven and the Jenkins-Spock library
- https://medium.com/@werne2j/collecting-code-coverage-for-a-jenkins-shared-library-c2d8f502732e
- https://medium.com/disney-streaming/testing-jenkins-shared-libraries-4d4939406fa2 - article about testing with Gradle and the Jenkins Pipeline Unit library
- https://dev.to/kuperadrian/how-to-setup-a-unit-testable-jenkins-shared-pipeline-library-2e62 - article about testing with Gradle, Mockito and IntelliJ IDEA integration and injectable contexts
Standard reading library:
IntelliJ IDEA setup as the IDE for Jenkins-related contents, and creation of a Gradle project for easier maintenance and testing of Jenkins Shared Pipeline Libraries, followed these articles (a minimal Gradle sketch follows the list):
- http://tdongsi.github.io/blog/2018/02/09/intellij-setup-for-jenkins-shared-library-development/
- https://github.com/mkobit/jenkins-pipeline-shared-libraries-gradle-plugin
- https://github.com/mkobit/jenkins-pipeline-shared-library-example
- https://stackoverflow.com/questions/53363828/jenkins-shared-library-with-intellij
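A minimal build.gradle sketch assuming the mkobit plugin referenced above (the plugin version shown is illustrative; check its documentation for a current one):

    plugins {
        id 'com.mkobit.jenkins.pipelines.shared-library' version '0.10.1'
    }

    repositories {
        mavenCentral()
        // Jenkins artifacts (core, plugins, test harness) are resolved from here
        maven { url 'https://repo.jenkins-ci.org/public/' }
    }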
Random example self-tests:
This library is tested with the help of "mkobit" plugins referenced above.
Tests are located in the tests/ directory and implement example pipelines which are executed either in a temporary Jenkins environment provided by a JenkinsRule (from the jenkins-test-harness project) or by a more dedicated server behind a RealJenkinsRule. They can be executed by an IDE (e.g. press F9 in IntelliJ IDEA) or by running ./gradlew integrationTest.
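For orientation, a minimal JenkinsRule-based test of a trivial pipeline could look like this (a hypothetical example, not one of this library's actual tests):

    import org.jenkinsci.plugins.workflow.cps.CpsFlowDefinition
    import org.jenkinsci.plugins.workflow.job.WorkflowJob
    import org.junit.Rule
    import org.junit.Test
    import org.jvnet.hudson.test.JenkinsRule

    class TrivialPipelineTest {
        // Provides a temporary Jenkins instance for each test
        @Rule
        public JenkinsRule jenkins = new JenkinsRule()

        @Test
        void runsTrivialPipeline() {
            WorkflowJob job = jenkins.createProject(WorkflowJob, 'example')
            job.definition = new CpsFlowDefinition("node { echo 'hello' }", true)
            jenkins.buildAndAssertSuccess(job)
        }
    }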