Part of the Multi-team Software Delivery Assessment (README)
Copyright © 2018-2021 Conflux Digital Ltd
Licensed under CC BY-SA 4.0
Permalink: SoftwareDeliveryAssessment.com
Based on:
- the key practices in the book Continuous Delivery by Jez Humble and Dave Farley
- the book Continuous Delivery with Windows and .NET by Matthew Skelton and Chris O'Dell
- the summary of the Continuous Delivery practices at CDchecklist.info.
Purpose: Assess the awareness and performance of the team regarding key Continuous Delivery practices
Method: Use the Spotify Squad Health Check approach to assess the team against the following questions, capturing the answers as you go:
Question | Tired (1) | Inspired (5) |
---|---|---|
1. Release Candidate - Every Check-in Leads to a Potential Release (Chapter 1) | We have special "Release Candidate" builds occasionally | Any check-in can generate a build that might go to Production without a further build |
2. Done - Done Means Released [that is, released into Production and not causing problems] (Chapter 1) | Our Definition of Done means "feature tests have passed" | Our Definition of Done means that the changes are deployed to Production with monitoring to ensure they have not broken anything |
3. Automated Config - Configuration should always be performed by automated processes using values taken from your configuration repository (Chapter 2) [sketch below the table] | Many of our applications and tests are configured manually each time | All configuration is applied by scripts using values from the configuration repository
4. Config Options - It should be easy for anyone to see what configuration options are available for a particular version of an application across all environments it will be deployed into. (Chapter 2) | We have to run diffs against different files, some in version control, some taken from live servers | We have an API or UI to show the config options deployed to any environment |
5. Broken Builds - Don’t Check In on a Broken Build (Chapter 3) [except to fix the broken build!] | We cannot easily know when our team has broken a build | We guard the build with care and never check in on a broken build
6. Failing Tests - Don’t Comment Out Failing Tests (Chapter 3) | We switch off failing tests to get the build or pipeline working | We trust our tests; if the tests are failing, then something is wrong and we go and fix it |
7. Binaries - Only Build Your Binaries Once [no special 'release candidate' builds] (Chapter 5) | We have multiple different builds and then merge to create the final Release Candidate | We have a single build that produces a binary artifact, which is then promoted through all environments with no additional merging or building
8. Stop The Line - If Any Part of the Pipeline Fails, Stop the Line [everyone stops feature work and fixes the problem] (Chapter 5) | The pipeline fails so often that it is difficult to know which team broke the build | If the pipeline fails, it is very clear which team is responsible and so we stop our work immediately to fix the problem |
9. Idempotent Deployment - Ensure the Deployment Process Is Idempotent [we can deploy the same version repeatedly with the same result] (Chapter 6) [sketch below the table] | It is difficult to get repeatable deployments | We can redeploy the same version many times with the same result
10. Stubs - Use Stubs to Simulate External Systems [treat almost every other system as 'external'!] (Chapter 8) [sketch below the table] | There are few stubs available and we do not have enough time to write the stubs ourselves | The stubs we consume and write are good quality and give us a high degree of confidence that our tests are working well
11. API Replay - Record Interactions against a Service or Public API (Chapter 9) [sketch below the table] | We have no way to record requests/responses from a remote API | We record key requests/responses from remote APIs which we use to build high-fidelity integration tests
12. Blue-Green - Use Blue-Green Deployments [at a granular level] (Chapter 10) [sketch below the table] - this means any mechanism that allows you to test a new version alongside an existing version and roll back to the older version if necessary | We do not use any blue-green deployment techniques | We use fine-grained blue-green deployment techniques - at the level of individual services
13. Environment History - It should be possible to see a history of changes made to every environment, including deployments. (Chapter 11) | It is difficult to see the history of changes in an environment | We have a nice dashboard or log of changes to every environment |
14. DB Changes - Decouple Application Deployment from Database Migration [and from other data-rich services] (Chapter 12) [sketch below the table] - this relates to shared databases | We must deploy our application or service together with the database or data layer | Our application or service is completely decoupled from the underlying database or data layer
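
Sketches: the Python snippets below illustrate some of the practices referenced in the table. They are minimal sketches under stated assumptions, not reference implementations; the file names, paths, URLs, and data are invented for illustration only.

For item 3 (Automated Config), a sketch assuming a hypothetical configuration repository laid out as one JSON file per environment, rendered into an application config file by a script rather than by hand:

```python
#!/usr/bin/env python3
"""Render application config from version-controlled values (sketch only).

Assumes a config repo laid out as config/<environment>.json and an app that
reads a rendered file; adjust the names and template to your own setup.
"""
import json
import sys
from pathlib import Path
from string import Template

CONFIG_REPO = Path("config")  # checked into version control alongside the code
TEMPLATE = Template("db_url=$db_url\nlog_level=$log_level\n")

def render(environment: str) -> Path:
    values = json.loads((CONFIG_REPO / f"{environment}.json").read_text())
    rendered = CONFIG_REPO / "rendered" / f"app-{environment}.conf"
    rendered.parent.mkdir(parents=True, exist_ok=True)
    rendered.write_text(TEMPLATE.substitute(values))  # fails fast if a key is missing
    return rendered

if __name__ == "__main__":
    print(render(sys.argv[1] if len(sys.argv) > 1 else "test"))
```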
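
For item 9 (Idempotent Deployment), the essence is that the deploy step compares the current state with the desired state and converges on it, so running it twice gives the same result as running it once. A sketch, with a made-up version marker file standing in for whatever your platform records:

```python
#!/usr/bin/env python3
"""Idempotent deploy sketch: converge on a desired version instead of applying deltas."""
import shutil
from pathlib import Path

DEPLOY_DIR = Path("/opt/myapp/current")           # hypothetical install location
STATE_FILE = Path("/opt/myapp/DEPLOYED_VERSION")  # hypothetical state record

def deploy(artifact: Path, version: str) -> None:
    current = STATE_FILE.read_text().strip() if STATE_FILE.exists() else None
    if current == version:
        print(f"{version} is already deployed; nothing to do")  # safe to re-run
        return
    if DEPLOY_DIR.exists():
        shutil.rmtree(DEPLOY_DIR)                 # replace, never patch in place
    shutil.unpack_archive(str(artifact), str(DEPLOY_DIR))
    STATE_FILE.write_text(version + "\n")
    print(f"deployed {version}")

if __name__ == "__main__":
    deploy(Path("myapp-1.4.2.tar.gz"), "1.4.2")
```

The key property is that the script describes an end state rather than a sequence of changes, so re-running it after a partial failure is always safe.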
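
For item 10 (Stubs), a throwaway HTTP stub that simulates an external system so tests do not depend on its availability; the endpoint and canned payload are invented:

```python
#!/usr/bin/env python3
"""Tiny HTTP stub that simulates an external system for tests (sketch only)."""
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

CANNED = {"/customers/42": {"id": 42, "status": "active"}}  # invented test data

class StubHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = CANNED.get(self.path)
        self.send_response(200 if body else 404)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(json.dumps(body or {"error": "not found"}).encode())

    def log_message(self, fmt, *args):
        pass  # keep test output quiet

if __name__ == "__main__":
    HTTPServer(("127.0.0.1", 8099), StubHandler).serve_forever()
```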
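
For item 11 (API Replay), one low-tech approach is to record real responses to disk once and replay them in tests; the URL and recording layout here are placeholders:

```python
#!/usr/bin/env python3
"""Record-and-replay sketch for a remote API, using only the standard library."""
import hashlib
import json
import urllib.request
from pathlib import Path

RECORDINGS = Path("recordings")  # commit the recorded responses next to the tests

def fetch(url: str, record: bool = False) -> dict:
    tape = RECORDINGS / (hashlib.sha256(url.encode()).hexdigest() + ".json")
    if not record and tape.exists():
        return json.loads(tape.read_text())   # replay: no network needed in CI
    with urllib.request.urlopen(url, timeout=10) as resp:
        payload = json.loads(resp.read().decode())
    RECORDINGS.mkdir(exist_ok=True)
    tape.write_text(json.dumps(payload, indent=2))
    return payload

if __name__ == "__main__":
    # record once against the real API, then run the tests against the recording
    print(fetch("https://api.example.com/v1/status", record=True))
```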
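
For item 12 (Blue-Green), the mechanics reduce to running the new version alongside the old one, health-checking it, and then flipping whatever routes the traffic. In this sketch the "router" is just a pointer file, a deliberate stand-in for your load balancer or service mesh:

```python
#!/usr/bin/env python3
"""Blue-green switch sketch: flip traffic only after the idle colour passes a health check."""
import json
import urllib.request
from pathlib import Path

ROUTER_STATE = Path("active_colour.json")  # stand-in for a real load balancer or mesh
UPSTREAMS = {"blue": "http://127.0.0.1:8001", "green": "http://127.0.0.1:8002"}

def healthy(base_url: str) -> bool:
    try:
        with urllib.request.urlopen(base_url + "/health", timeout=5) as resp:
            return resp.status == 200
    except OSError:
        return False

def switch_to(colour: str) -> None:
    if not healthy(UPSTREAMS[colour]):
        raise RuntimeError(f"{colour} failed its health check; traffic unchanged")
    ROUTER_STATE.write_text(json.dumps({"active": colour}))
    print(f"traffic now routed to {colour}")  # rolling back is just switching again

if __name__ == "__main__":
    switch_to("green")
```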
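
For item 14 (DB Changes), decoupling is commonly achieved with an expand/contract (parallel change) migration: the application tolerates both the old and the new schema, so it can be deployed independently of the database change. A sketch against sqlite3 with invented table and column names:

```python
#!/usr/bin/env python3
"""Expand/contract sketch: the application reads whichever schema version is live."""
import sqlite3

def customer_email(conn: sqlite3.Connection, customer_id: int) -> str:
    cols = {row[1] for row in conn.execute("PRAGMA table_info(customers)")}
    # During the 'expand' phase both columns exist; prefer the new one.
    column = "contact_email" if "contact_email" in cols else "email"
    row = conn.execute(
        f"SELECT {column} FROM customers WHERE id = ?", (customer_id,)
    ).fetchone()
    return row[0] if row else ""

if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE customers (id INTEGER PRIMARY KEY, email TEXT)")
    conn.execute("INSERT INTO customers VALUES (1, 'someone@example.com')")
    print(customer_email(conn, 1))  # works before, during, and after the migration
```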