Please fork it and maintain your own copy if you find it useful.
A set of workable platform-automation-powered Concourse pipelines to drive PCF platform and tile installs, upgrades, and patches in an automated and easy way!
The highlights:
- It's an end-to-end PCF automation solution, built on top of platform-automation, with best practices embedded
- Literally FOUR (4) pipelines only for ONE (1) foundation, with whatever products you desire
- It's designed for multi-foundation, so rolling out to more PCF foundations would just work too
- Compatible with the GA'ed Platform Automation for PCF v3.x
Official Blog: If you're keen to know the detailed process and the thinking behind how I implemented this, check out the official Pivotal blog post here.
Disclaimers:
This is NOT an official guide for building pipelines on top of platform-automation -- there is no such thing yet as of this writing. Instead, this is simply a sharing of my (Bright Zheng's) own experience building Concourse pipelines to drive platform-automation for Dojos and services. Pivotal does NOT provide support for these pipelines.
- [2019-02-07] Initial release
- [2019-02-27] Added ops-files/resource-stemcell-s3.yml
- [2019-04-17] Merged install-product.yaml and upgrade-product.yaml into one: install-upgrade-product.yaml
- [2019-05-05] Added selective apply changes with optional errand control mechanism
- [2019-05-31] Rebuilt the pipelines by introducing YAML templating, fully compatible with the GA'ed Platform Automation for PCF v3.x
- [2019-08-08] Added GCS support for buckets, thanks to @agregory999
- [2019-09-10] Used semver-config-concourse-resource v1.0.0 by default
- [2019-09-24] Released v1.1.0
  - Brought in more control over stemcell versions to ensure consistency across multiple foundations
  - Bumped to semver-config-concourse-resource v1.1.0
  - Re-designed the interfaces of all helper scripts
platform-automation is a compelling product for driving automation within the PCF ecosystem. Overall it brings great value, which includes but is not limited to:
- We can now build pipelines to install, upgrade, and patch by simply orchestrating the tasks it offers;
- It reduces complexity dramatically compared to pcf-pipelines;
- It lets operators gain better control over PCF automation as it brings in better mechanisms.
But platform-automation is just a set of great building blocks; to drive real-world PCF automation, we still need to build the pipelines ourselves. This repo aims to offer a set of battle-proven pipelines for you. Literally, there are just FOUR (4) pipelines for ONE (1) foundation in most cases, whatever products/tiles you desire to deploy and operate.
Pipeline | Purpose | Compatible PCF Products | Pipeline YAML File |
---|---|---|---|
install-opsman | Install OpsMan & Director | ops-manager | install-opsman.yml |
upgrade-opsman | Upgrade/Patch OpsMan & Director | ops-manager | upgrade-opsman.yml |
install-upgrade-products | Install or upgrade all desired products/tiles | All products/tiles, including PAS and PKS | install-upgrade-products.yml |
patch-products | Patch all desired products/tiles | All products/tiles, including PAS and PKS | patch-product.yml |
Notes:
- To be clear, install-upgrade-products and patch-products are not simply static pipelines; they're templatized to construct pipelines for the configured desired products in a dynamic way
- This repo follows the same compatibility requirements (Concourse, OpsManager, Pivnet Resource, etc.) as stated in platform-automation; check out the docs here
The overall model can be illustrated as below:
One of the major goals of building platform-automation is to simplify PCF automation. But thinking of best practices and sustainable processes, we should prepare some or all of the items below where there is a good fit.
Here is a typical setup, for your reference:
For a detailed explanation of the preparation, please refer to the detailed preparation here.
To get started, we need some buckets pre-created:
- platform-automation: the bucket to host platform-automation artifacts, e.g. the image (.tgz) and tasks (.zip)
- <FOUNDATION-CODE>, e.g. prod: one bucket per foundation is recommended, for hosting the exported installation files
You may take a look at my setup (where I happen to use Minio) for your reference:
$ mc ls s3/
[2019-05-27 17:41:37 +08] 0B dev/
[2019-03-17 15:39:43 +08] 0B pez/
[2019-05-28 14:29:23 +08] 0B platform-automation/
$ mc ls s3/platform-automation/
[2019-05-28 14:29:20 +08] 412MiB platform-automation-image-3.0.1.tgz
$ mc ls s3/platform-automation/dev
[2019-05-27 12:17:23 +08] 6.7MiB installation-20190527.416.47+UTC.zip
[2019-05-27 12:32:11 +08] 6.7MiB installation-20190527.431.43+UTC.zip
[2019-05-27 17:41:37 +08] 353KiB installation-after-ops-manager-upgrade.zip
[2019-05-27 17:28:38 +08] 273KiB installation-before-ops-manager-upgrade.zip
...
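If you also happen to use Minio, creating those buckets up front takes one mc command each; the bucket names below simply mirror the example above:
$ mc mb s3/platform-automation
$ mc mb s3/dev
$ mc mb s3/prod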
You must have a configuration Git repo to host things like env.yml, auth.yml, and the product config and vars files. Based on real-world practices, the structure and naming conventions below are my recommendation:
├── README.md
├── <FOUNDATION-CODE, e.g. qa>
│ ├── config
│ │ ├── auth.yml
│ │ └── global.yml
│ ├── env
│ │ └── env.yml
│ ├── generated-config
│ │ └── <PRODUCT-SLUG>.yml
│ ├── products
│ │ └── <PRODUCT-SLUG>.yml
│ ├── state
│ │ └── state.yml
│ ├── vars
│ │ └── <PRODUCT-SLUG>-vars.yml
│ └── products.yml
└── <ANOTHER FOUNDATION-CODE, e.g. prod>
To make it clearer, you can get started with only the files below:
<FOUNDATION-CODE>/config/auth.yml
<FOUNDATION-CODE>/config/global.yml
<FOUNDATION-CODE>/env/env.yml
<FOUNDATION-CODE>/state/state.yml
<FOUNDATION-CODE>/products/ops-manager.yml
<FOUNDATION-CODE>/products.yml
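As a reference, <FOUNDATION-CODE>/env/env.yml is the standard om env file. A minimal sketch might look like the one below; the values are placeholders, so adjust them to your foundation and check the platform-automation docs for the full set of keys:
target: https://opsman.example.com            # OpsManager URL (placeholder)
username: ((opsman_admin_username))
password: ((opsman_admin_password))
decryption-passphrase: ((opsman_decryption_passphrase))
skip-ssl-validation: true                     # only if OpsManager uses a self-signed certificate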
The actual product-related files, like <FOUNDATION-CODE>/generated-config/*, <FOUNDATION-CODE>/products/* (except ops-manager.yml), and <FOUNDATION-CODE>/vars/*, can be generated and then templatized/parameterized while driving through the install-opsman and install-upgrade-products pipelines for the first time.
The typical process looks like this:
1. The generate-director-config or generate-product-config job generates the raw product config file, as <PRODUCT-SLUG>.yml, under <FOUNDATION-CODE>/generated-config/;
2. Copy it to <FOUNDATION-CODE>/products/, as <PRODUCT-SLUG>.yml;
3. Copy it to <FOUNDATION-CODE>/vars/, as <PRODUCT-SLUG>-vars.yml, following the naming conventions;
4. Templatize and parameterize both files for that particular product (see the sketch below);
5. Repeat 1-4 for all other products, one by one;
6. Run through the pipeline.
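For illustration, steps 2-4 might look roughly like this on the command line; the paths follow the recommended structure above and the product slug is just an example:
# assuming the generate-product-config job has pushed the raw config to generated-config/
cp qa/generated-config/pivotal-container-service.yml qa/products/pivotal-container-service.yml
# create the vars file following the <PRODUCT-SLUG>-vars.yml naming convention
touch qa/vars/pivotal-container-service-vars.yml
# replace environment-specific values in products/*.yml with ((placeholders)),
# move the actual values into vars/*-vars.yml, then commit both files
git add qa/ && git commit -m "templatize PKS config" && git push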
Once the above process is done, we have successfully established a great baseline for the rest of the upgrades and patches.
Please refer to here for the required input/output files, which should be versioned and managed with a version control system like Git.
For your convenience, there is already a sample Git repo for you to check out, here.
I created a Bash script for each pipeline to fly with.
For example:
$ fly targets
name url team expiry
dev https://concourse.xxx.com dev Thu, 30 May 2019 14:37:16 UTC
$ ./3-fly-install-upgrade-products.sh -h
Usage: 3-fly-install-upgrade-products.sh -t <Concourse target name> -p <PCF platform code> -n <pipeline name> [OPTION]
-t <Concourse target name> the logged in fly's target name
-p <PCF platform code> the PCF platform code the pipeline is created for, e.g. prod
-n <pipeline name> the pipeline name
-s <true/false to specify stemcell> true/false to indicate whether to specify stemcell
-o <ops files separated by comma> the ops files, separated by comma, e.g. file1.yml,file2.yml
-h display this help and exit
Examples:
3-fly-install-upgrade-products.sh -t prod -p prod -n install-upgrade-products
3-fly-install-upgrade-products.sh -t prod -p prod -n install-upgrade-products -s true
3-fly-install-upgrade-products.sh -t prod -p prod -n install-upgrade-products -o ops-file1.yml
3-fly-install-upgrade-products.sh -t prod -p prod -n install-upgrade-products -o ops-file1.yml,ops-file2.yml
Using vars files is a common practice to externalize some variables for pipelines.
There are two vars files used in these pipelines:
- vars-<PLATFORM_CODE>/vars-common.yml: common configuration (e.g. Git, S3) that is used by all the pipelines
- vars-<PLATFORM_CODE>/vars-products.yml: configures the desired products we want to deploy on PCF, for the install-upgrade-products and patch-products pipelines
For secrets such as s3_secret_access_key and git_private_key, we should store and manage them with an integrated credential manager like CredHub or Vault.
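As an illustration, vars-common.yml typically carries values like the ones below. The exact variable names are defined by the pipelines in this repo, so treat this as a sketch rather than an authoritative list:
# Git repo hosting the foundation configuration (illustrative variable names)
git_private_key: ((git_private_key))            # better sourced from CredHub/Vault
configuration_uri: git@github.com:example/platform-automation-configuration.git
# S3/Minio endpoint and credentials for platform-automation artifacts and installation exports
s3_endpoint: https://minio.example.com
s3_access_key_id: ((s3_access_key_id))
s3_secret_access_key: ((s3_secret_access_key))  # better sourced from CredHub/Vault
# Pivotal Network token used by download-product
pivnet_token: ((pivnet_token))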
This pipeline is dedicated to installing OpsMan and the OpsMan Director.
Sample usage:
$ ./1-fly-install-opsman.sh -t dev -p dev -n install-opsman
Note: If you want to customize the pipeline, say to use GCP instead of Minio as the blobstore, please refer to here for the out-of-the-box ops files, or simply add your own!
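For instance, switching the blobstore from S3/Minio to Google Cloud Storage with the resource-gcs.yml ops file (listed later in this README) might look like this, assuming the -o flag behaves the same way as in the other helper scripts:
$ ./1-fly-install-opsman.sh -t dev -p dev -n install-opsman -o ops-files/resource-gcs.yml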
This pipeline is for the OpsMan upgrade/patch, which of course upgrades/patches the OpsMan Director as well.
Sample usage:
$ ./2-fly-upgrade-opsman.sh -t dev -p dev -n upgrade-opsman
Note: don't be surprised if upgrade-opsman runs a first time, right after you fly it, without any version upgrade -- it's just catching up with the desired version to establish a baseline, and it won't hurt the platform.
This is a templatized pipeline.
By using the amazing YAML templating tool ytt, the products to install and upgrade are fully configurable as desired.
Sample usage:
$ ./3-fly-install-upgrade-products.sh -t dev -p dev -n install-upgrade-products
By default, the latest applicable Stemcell will be assigned to the product. But if you desire to have full version consistency, including Stemcell, across platforms, explicitly assigning stemcells to products might be a good idea:
$ ./3-fly-install-upgrade-products.sh -t dev -p dev -n install-upgrade-products -s true
Note:
- There are always groups named ALL and apply-changes, but the products are fully configurable in a dynamic way;
- For Stemcell assignment details, refer to here
This is also a templatized pipeline; it respects all the setup of install-upgrade-products but is dedicated to patching. We shouldn't expect breaking product config changes in patch versions, so this pipeline can be fully automated if you want.
$ ./4-fly-patch-products.sh -t dev -p dev -n patch-products
Similarly, you can control Stemcell assignment this way:
$ ./4-fly-patch-products.sh -t dev -p dev -n patch-products -s true
Note:
- Don't be surprised if patch-products automatically runs a first time, right after you fly it, without any version patch -- it's just catching up with the desired version to establish a baseline, and it won't hurt the platform.
- There are always groups named ALL and apply-changes, but the products are fully configurable in a dynamic way.
There are two major configurable portions:
vars-<PLATFORM_CODE>/vars-products.yml
This is to configure the desired products to be deployed on PCF by following a simple pattern: <PRODUCT_ALIAS>|<PRODUCT_SLUG>[|<PRODUCT_NAME>]
Where:
- <PRODUCT_ALIAS>: the product alias, which can be whatever you want; short alias names are recommended
- <PRODUCT_SLUG>: the official Pivotal Network slug, which can be retrieved from the command line:
$ pivnet products
- <PRODUCT_NAME>: required ONLY when the product name differs from <PRODUCT_SLUG>
For example, below configures two products, Pivotal Container Service (PKS) and Harbor:
products:
# PKS
- pks|pivotal-container-service
# Harbor
- harbor|harbor-container-registry
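When a product's name differs from its Pivotal Network slug, the third field comes into play. PAS is the typical case -- its PivNet slug is elastic-runtime while the product name is cf:
products:
# PAS: the slug differs from the product name
- pas|elastic-runtime|cf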
This is the detailed configuration for the products. Please note that the elements should be compatible with the download-product task.
Let's take PKS as an example:
products:
...
pks:
product-version: "1.4.0"
pivnet-product-slug: pivotal-container-service
pivnet-api-token: ((pivnet_token))
pivnet-file-glob: "*.pivotal"
stemcell-iaas: google
...
This fully embraces the idea of GitOps, so we can drive changes through PRs and always treat the platform-automation-configuration repo as the source of truth.
By default, the latest Stemcell version (e.g. 250.112) within the applicable version range (e.g. 250.82–250.112) will be assigned to the product if we define our product in product.yml like this:
products:
...
pas:
product-version: "2.5.8"
pivnet-product-slug: elastic-runtime
pivnet-api-token: ((pivnet_token))
pivnet-file-glob: "srt-*.pivotal"
stemcell-iaas: google
...
Applying the latest patched Stemcell is considered good practice where possible. The interesting thing is, the applicable version range may change whenever compatible new Stemcell(s) are released. So it's also a good idea to pin the Stemcell versions across platforms for full version consistency. To achieve that, you can do something like this:
products:
...
pas:
product-version: "2.5.8"
pivnet-product-slug: elastic-runtime
pivnet-api-token: ((pivnet_token))
pivnet-file-glob: "srt-*.pivotal"
#stemcell-iaas: google # comment this out
pas-stemcell: # follow the `<PRODUCT_ALIAS>-stemcell` naming pattern and specify Stemcell details
product-version: "250.99" # any available version from applicable Stemcell range of `250.82–250.112`, as of writing
pivnet-product-slug: stemcells-ubuntu-xenial
pivnet-api-token: ((pivnet_token))
pivnet-file-glob: "light-bosh-stemcell-*-google-kvm-ubuntu-xenial-go_agent.tgz"
...
A Newly Built Semver Config Concourse Resource
Upgrade and patch are different things in most cases. So we need to differentiate whether the version change we're going to conduct is an upgrade or a patch: upgrade versions should be handled by install-upgrade-products, while patch versions should be handled by patch-products.
Upgrade versions may incur breaking changes. Some products may not strictly follow Semantic Versioning conventions for some reason, so we may want the flexibility to define what an upgrade or a patch is.
That's why I recently built a new Concourse resource type named Semver Config to track the product configuration file and determine whether a version change is an upgrade or a patch.
In install-upgrade-products, let's consider an upgrade to be any update of the Major or Minor version -- I'd say that's the safer, more conservative interpretation -- so we set the version detection pattern to m.n.*, which simply means "I care ONLY about Major (m) and/or miNor (n) changes".
For example, the scenarios below are considered upgrades:
- Spring Cloud Services: 2.0.9 -> 3.0.0
- PAS: 2.3.4 -> 2.4.9
For patch, it's literally about the patch version, like 2.4.1 -> 2.4.10, so we set the version detection pattern to *.*.p.
Please note that on the first version check, where the version might be null and is then converted to 0.0.0, we have to build the version baseline, so the pipeline will still trigger once even with the *.*.p pattern.
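To make this concrete, a semver-config resource in the rendered pipeline would conceptually look like the sketch below. The source parameters shown here (config_file, config_path, version_path, version_pattern, and the image repository) are illustrative assumptions only; consult the semver-config-concourse-resource documentation for the actual schema:
resource_types:
- name: semver-config
  type: docker-image
  source:
    repository: <semver-config-concourse-resource image>  # see the resource's docs
resources:
- name: pas-version                       # one version resource per product (illustrative)
  type: semver-config
  source:
    driver: git
    uri: ((git_uri))
    branch: master
    private_key: ((git_private_key))
    config_file: dev/products.yml         # the tracked product config file (illustrative path)
    config_path: products.pas             # which node to watch (illustrative)
    version_path: products.pas.product-version
    version_pattern: "m.n.*"              # Major/Minor changes trigger install-upgrade-products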
Ops File | Applicable To Pipelines | Purpose |
---|---|---|
resource-platform-automation-tasks-git.yml | ALL | To host the platform-automation tasks in a Git repo for necessary customization. Please note this is NOT recommended, as it may break the upgrade path of platform-automation |
resource-trigger-daily.yml | ALL | To enable a trigger for one specific job, by setting the variable ((job_name)), on a daily basis |
resource-trigger-onetime.yml | ALL | To enable a trigger for one specific job, by setting the variable ((job_name)), one time only |
resource-gcs.yml | ALL | To switch from S3 to Google Cloud Storage for Platform Automation image and tasks as well as installation exports |
task-configure-authentication-ldap.yml | Install OpsMan Pipeline | To configure OpsMan authentication with LDAP/AD |
task-apply-changes.yml | ALL Product Pipelines | To enable selective apply changes with errand control. For experiment only, use with caution! |
So how to use these ops files?
Let's say you want to customize the install-opsman pipeline so that you can use LDAP, instead of internal authentication:
$ ./1-fly-install-opsman.sh -t dev -p dev -n install-opsman \
    -o ops-files/task-configure-authentication-ldap.yml
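Under the hood, these ops files are plain yaml-patch documents, so writing your own follows the same shape. The path and value below are purely illustrative; inspect the pipeline YAML to find the exact node you want to change:
# an illustrative yaml-patch ops file (not copied from the real pipelines)
- op: replace
  path: /jobs/name=install-opsman/plan/task=configure-authentication/file
  value: platform-automation-tasks/tasks/configure-ldap-authentication.yml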
In most cases, customization can be handled by applying ops files, be they from the list above or your own. But sometimes field engineering is more aggressive, demanding, or time-sensitive than product engineering, and you may want to introduce more features to address specific concerns -- so besides actively sending feedback to the product teams, you may think of some form of customization.
But rule no.1 is that, whatever you do, don't break the upgrade path!
Platform Automation is built on top of two major CLIs: p-automator and om. On top of them sits a series of fine-grained Concourse tasks with standardized inputs/outputs, plus very minimal Bash scripting. So adding your own tasks is one potential area of customization. For example, I built one to enable selective apply-changes, which becomes especially useful once the platform has grown to host more and more products/tiles.
Below is my experiment, for your reference:
- As I'm using Git to host the tasks unzipped from platform-automation-tasks-*.zip, create another folder named custom-tasks;
- Copy my custom task apply-changes.yml into it and check it in;
- Compile a simple ops file, ops-files/task-apply-changes.yml;
- fly the pipeline with this ops file enabled:
$ ./3-fly-install-upgrade-products.sh -t dev -p dev -n install-upgrade-products \
    -o ops-files/task-apply-changes.yml
- The bonus is, you can control the errands as well by compiling an errand control config file, errands.yml, in the /errands folder of your platform-automation-configuration repo, like the samples here.
Concourse Server (Required)
It's of course required since we're working on Concourse pipelines. And this is exactly what this repo is built for: platform-automation-powered Concourse pipelines.
Note: Using another CI/CD platform is totally possible too, but it's NOT in the scope of this repo.
Git Service (Required)
A Git service is required to host things like the products' config files. It's also possible to host the platform-automation tasks there if you really want to customize them further. Please note that doing so may break the upgrade path of platform-automation, so think twice before doing this.
Gogs might be a good candidate on-prem, or simply use any public service like GitHub -- don't forget, private repos are now available for free :)
S3 Blobstore (Required in air-gapped environment)
An S3 blobstore is required in an air-gapped environment to host things like artifacts. It's also an ideal place to host the platform-automation-image if a Docker registry is not available. The pipelines also use the S3 blobstore for exported installation settings -- the installation-*.zip files.
Private Docker Registry (Optional)
A private Docker registry is optional. It makes sense only if you want to host the platform-automation-image or other custom Concourse resource types, which are typically Dockerized.
Some Client-side Tools
The tools below are required on your laptop or workspace:
- Concourse fly CLI
- yaml-patch, for patching pipelines with ops files, if required
- ytt, an amazing YAML templating tool, for dynamically generating the install-upgrade-products and patch-products pipelines as the desired products may vary.
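To give a sense of how these tools fit together, the helper scripts conceptually do something like the sketch below. This is a simplified illustration rather than the scripts' actual contents, and the file paths are assumptions following the conventions in this README:
# 1. render the templatized pipeline with ytt, feeding in the desired products
ytt -f pipelines/install-upgrade-products.yml -f vars-dev/vars-products.yml > /tmp/pipeline.yml
# 2. optionally patch the rendered pipeline with ops files
yaml-patch -o ops-files/task-apply-changes.yml < /tmp/pipeline.yml > /tmp/pipeline-final.yml
# 3. set the pipeline with fly, loading the common vars
fly -t dev set-pipeline -p install-upgrade-products -c /tmp/pipeline-final.yml -l vars-dev/vars-common.yml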