Refactor the verify package to not have dependencies on CVO #279
Conversation
The verify package should be usable from multiple clients. Refactor it to separate dependencies on the payload in preparation for a move to an external library.
Keep a small cache mapping digests to verified signatures to avoid temporary errors in accessing the content store. Add a Signatures() accessor that allows us to read these signatures. This will allow us to record the state of the cache into the cluster at intervals.
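For reference, a minimal sketch of the cache shape and accessor this implies (the type and method names follow the diff hunks below; the locking detail is an assumption):

```go
package verify

import "sync"

// ReleaseVerifier here is pared down to the caching pieces discussed in this PR.
type ReleaseVerifier struct {
	lock           sync.Mutex
	signatureCache map[string][][]byte // release digest -> signatures that verified it
}

// Signatures returns a copy of any cached signatures that have been validated
// so far, so a caller can periodically record them into the cluster.
func (v *ReleaseVerifier) Signatures() map[string][][]byte {
	v.lock.Lock()
	defer v.lock.Unlock()
	copied := make(map[string][][]byte, len(v.signatureCache))
	for digest, sigs := range v.signatureCache {
		copied[digest] = append([][]byte(nil), sigs...)
	}
	return copied
}
```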
pkg/verify/verify.go (outdated):

    // DefaultClient uses the default http.Client for accessing signatures.
    var DefaultClient = simpleClientBuilder{}

    // simpleClientBuilder implements the ClientBuilder interface and may be used for testing.
why add DefaultClient to package api?
So test packages elsewhere can use it. I had to use it in 2-3 packages that wouldn't be in verify (and might not be in library-go). It's a standard implementation.
    name    string
    update  *payload.Update
    want    bool
    wantErr bool
Neither want nor wantErr are very specific about what came back. Can we make wantErr a regexp or string matching the expected error message? And want something about the set of loaded keys?
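Something like this sketch, say, with a hypothetical loadKeys helper standing in for the real constructor under test (none of these names come from the PR):

```go
package verify

import (
	"fmt"
	"regexp"
	"testing"
)

// loadKeys stands in for whatever constructor the real test exercises.
func loadKeys(input string) ([]string, error) {
	if input == "" {
		return nil, fmt.Errorf("no verifier keys provided")
	}
	return []string{input}, nil
}

func TestLoadKeys(t *testing.T) {
	tests := []struct {
		name     string
		input    string
		wantKeys int    // how many verifier keys should be loaded
		wantErr  string // regexp matched against the error; empty means success
	}{
		{name: "empty input", input: "", wantErr: "no verifier keys"},
		{name: "single key", input: "key-a", wantKeys: 1},
	}
	for _, tt := range tests {
		t.Run(tt.name, func(t *testing.T) {
			keys, err := loadKeys(tt.input)
			if tt.wantErr == "" {
				if err != nil {
					t.Fatalf("unexpected error: %v", err)
				}
				if len(keys) != tt.wantKeys {
					t.Fatalf("loaded %d keys, want %d", len(keys), tt.wantKeys)
				}
				return
			}
			if err == nil || !regexp.MustCompile(tt.wantErr).MatchString(err.Error()) {
				t.Fatalf("expected error matching %q, got: %v", tt.wantErr, err)
			}
		})
	}
}
```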
        return fmt.Errorf("unable to locate a valid signature for one or more sources")
    }

    v.cacheVerification(releaseDigest, signedWith)
Why cache signedWith? Seems like this is a boolean cache "yes, we verified that digest". Possibly interesting metadata would include the signatures' and signing keys' expiration dates, so we knew when to invalidate the cache.
because we're going to start saving these in a config map on cluster in the future, so if the signing servers go down we can still upgrade to a version we saw in available versions.
because we're going to start saving these in a config map on cluster in the future...
I still don't see why that means "cache the sigs in memory". Can we leave this off now and grow it back when we add the cache-to-ConfigMap functionality?
    // Signatures returns a copy of any cached signatures that have been validated
    // so far. It may return no signatures.
    func (v *ReleaseVerifier) Signatures() map[string][][]byte {
Why would you want all of these? I think Verify is a sufficient API. If this is "which sigs will oc have to pack into a ConfigMap for disconnected folks?", I'd rather have:
func (v *ReleaseVerifier) Signatures(digest string) [][]byte or some such.
I want to get all of them in bulk at once for saving.
I want to get all of them in bulk at once for saving.
How about configuring a helper that gets called on signatures when they are added to the cache? And another that is called when they are evicted? Then your mirror case or "protect from the sig store going down" case could be addressed by attaching a collector to the added-to-cache callback. And you'd have a way to evict ConfigMaps in the "protect from the sig store going down" case when we dropped them from the in-memory cache.
A controller feels super heavyweight. Why do we need something more complex than "get" and "write"?
Why do we need something more complex than "get" and "write"?
Maybe those are fine. But I like decoupled logic so the verifier can say "I verify sigs, and if you want to bolt a cache or whatever on, here's where you hand over your callbacks". Then the code that handles verification is all nice and compact, and the code that handles caching can live somewhere else all nice and compact, and the package that wants caching verification can glue the two together to get what they want.
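To make that concrete, a rough sketch of the callback shape being suggested (every name here is hypothetical, not code from this PR):

```go
package verify

// CacheCallbacks lets a caller observe the verifier's in-memory cache without
// the verifier knowing anything about ConfigMaps or persistence.
type CacheCallbacks struct {
	OnAdd   func(digest string, signatures [][]byte)
	OnEvict func(digest string, signatures [][]byte)
}

// signatureCache is a tiny size-capped cache that invokes the callbacks as
// entries come and go; the persistence logic can live entirely behind them.
type signatureCache struct {
	max       int
	entries   map[string][][]byte
	callbacks CacheCallbacks
}

func (c *signatureCache) add(digest string, signatures [][]byte) {
	// Evict arbitrary entries until we are back under the cap.
	for k, sigs := range c.entries {
		if len(c.entries) < c.max {
			break
		}
		delete(c.entries, k)
		if c.callbacks.OnEvict != nil {
			c.callbacks.OnEvict(k, sigs)
		}
	}
	c.entries[digest] = signatures
	if c.callbacks.OnAdd != nil {
		c.callbacks.OnAdd(digest, signatures)
	}
}
```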
            klog.V(4).Infof("signature %q is not valid: %v", path, err)
            continue
        }
        delete(remaining, k)
There's some contention between efficiently verifying a digest, where you only need one sig for each keyring, and shipping sigs around, where you may want to keep walking to find the "best" valid sig for each keyring. Best might be "all valid sigs" or "latest expiration date" or "strongest cipher" or whatever. Do we need a mode toggle to switch between efficient verification and interesting-sig collection?
If we get into that model, we could do that. But I think that's cart before horse, and if we need it we would have the cache function potentially manage that.
    if len(v.signatureCache) < maxSignatureCacheSize {
        break
    }
    delete(v.signatureCache, k)
This is pretty basic, and could lead to flapping two recent digests while we hold a bunch of ancient digests in the cache. Is there some LRU thing we can pull off a shelf?
I guess what we want is time aware least recently used (TLRU) eviction, with the time component being min(sig expiration, key expiration).
Map walk is random. If you get flapping you should go play the lottery and retire on it.
I don't think we need anything more than random.
From a 2013 Go comment:
Don't rely on the ordering for ranges over maps. Relying on it to be random is just as
bad as relying on it to be sorted, or repeatable, or whatever. Making it random is an
attempt to prevent people from relying on the order, not an invitation to rely on
the order for randomness.
Is that still current? The spec has:
The iteration order over maps is not specified and is not guaranteed to be the same from one iteration to the next.
without committing to a particular quality of randomness.
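For what it's worth, a sketch of the entry a TLRU cache along those lines might keep, with the deadline taken as min(signature expiration, signing key expiration); all names are illustrative, not code from this PR:

```go
package verify

import "time"

// cachedSignature is what a TLRU-style cache entry might carry: the usual
// recency information plus a hard deadline after which the entry is invalid
// regardless of how recently it was used.
type cachedSignature struct {
	signatures [][]byte
	lastUsed   time.Time // recency component for LRU ordering
	deadline   time.Time // min(signature expiration, signing key expiration)
}

// deadline picks the earlier of the two expirations.
func deadline(sigExpires, keyExpires time.Time) time.Time {
	if sigExpires.Before(keyExpires) {
		return sigExpires
	}
	return keyExpires
}

// expired reports whether the entry should be dropped before any recency check.
func (c cachedSignature) expired(now time.Time) bool {
	return !c.deadline.IsZero() && now.After(c.deadline)
}
```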
Used by the cvo unit tests.
Enable the verifier to look up signatures by digest. A caller may create a copy of a ReleaseVerifier via WithStores(...) to include one or more sources for those signatures. Errors on retrieval of signature content are logged. Rename the current `store` field to `location` to better distinguish the two types.
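A rough sketch of that copy-with-stores pattern as described in the commit message (the SignatureStore interface here is an assumed shape, not the exact type in this PR):

```go
package verify

import "net/url"

// SignatureStore is an assumed shape for a signature source keyed by digest.
type SignatureStore interface {
	// DigestSignatures returns any signatures known for the given release digest.
	DigestSignatures(digest string) ([][]byte, error)
}

// ReleaseVerifier pared down to the fields relevant to this commit.
type ReleaseVerifier struct {
	locations []*url.URL       // web locations checked directly for signatures
	stores    []SignatureStore // additional caller-supplied sources
}

// WithStores returns a copy of the verifier that also consults the provided
// stores when looking up signatures; the receiver is left unmodified.
func (v *ReleaseVerifier) WithStores(stores ...SignatureStore) *ReleaseVerifier {
	copied := *v
	copied.stores = append(append([]SignatureStore(nil), v.stores...), stores...)
	return &copied
}
```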
As a cluster verifies signatures for release digests, flush those to an on-cluster config map so that even if the remote endpoint is down, the cluster can still verify those signatures.
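A sketch of that write-back, assuming the cache is read via Signatures(); the namespace, name, and key layout here are placeholders rather than whatever the CVO actually writes:

```go
package verify

import (
	"fmt"
	"strings"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// configMapForSignatures packs verified signatures into a ConfigMap so that a
// cluster whose signature endpoint is unreachable can still re-verify
// releases it has already seen.
func configMapForSignatures(signatures map[string][][]byte) *corev1.ConfigMap {
	cm := &corev1.ConfigMap{
		ObjectMeta: metav1.ObjectMeta{
			Namespace: "openshift-config-managed", // placeholder
			Name:      "release-verification",     // placeholder
		},
		BinaryData: map[string][]byte{},
	}
	for digest, sigs := range signatures {
		// ConfigMap keys cannot contain ':', so flatten "sha256:<hash>" first.
		key := strings.ReplaceAll(digest, ":", "-")
		for i, sig := range sigs {
			cm.BinaryData[fmt.Sprintf("%s-%d", key, i+1)] = sig
		}
	}
	return cm
}
```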
I pushed the two commits that write back objects to the cluster, which gives some better context for the existence of those methods.

/retest

integrated some feedback and responded to others, PTAL (including the newer commits so you can see why I had those methods)
    stores    []*url.URL
    verifiers map[string]openpgp.EntityList

    locations []*url.URL
Why keep the URI slice? Can't we just create SignatureStores for these in NewReleaseVerifier? Or drop locations from NewReleaseVerifier and have NewFromConfigMapData (or whatever we end up calling that) create the stores for each URI and call WithStores.
I've pushed up a verify-store-api branch (currently at 0cc3e41) to do this, polish the APIs up a bit more (e.g. using a callback so sig-store consumers can cancel sig iteration), and decompose things into more, smaller subpackages. Still need to go through and rework the persistence logic to use my preferred add/evict callbacks, but the API changes I already have in my branch are fairly orthogonal to that, so they should be ready for review now.
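For reviewers who haven't looked at that branch, roughly the callback-driven store shape being described (an approximation, not the exact signatures in #294):

```go
package store

import "context"

// Callback is invoked once per retrieved signature. Returning done == true
// stops the iteration early, so a consumer that only needs one valid
// signature does not have to keep fetching.
type Callback func(ctx context.Context, signature []byte, errIn error) (done bool, err error)

// Store abstracts a source of release signatures (an HTTP location, a
// ConfigMap, a serial wrapper around several stores, and so on).
type Store interface {
	Signatures(ctx context.Context, name string, digest string, fn Callback) error
}
```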
I've pushed up a verify-store-api branch...
Spun this out into the parallel #294.
Picking up this change, or not, does not appear to be hugely impactful, especially if we consider that much of this code may be OBE post-4.4 if we deploy a local cincy. Since this is holding up other 4.4 changes, we need to make a decision. I would expect that this new code will be owned by the OTA team. Can we therefore move the final approval to that team and @LalatenduMohanty?
NOTE: Since @wking is on vacation, created this PR to continue work here. Stubbing out how we're going to feed restricted-network clusters the release-signature ConfigMaps that the cluster-version operator reads since openshift/cluster-version-operator#279. CC @wking, @LalatenduMohanty, @smarterclayton, @deads2k
Describing how we're going to feed restricted-network clusters the release-signature ConfigMaps that the cluster-version operator reads since openshift/cluster-version-operator#279. Personally, I prefer 'manifests' naming here, so:

* a manifests directory
* --apply-manifests and --manifests-to-dir options.

But there was some concern that the Kubernetes/OpenShift manifests would be confused with the container image manifests in v2/openshift/release/manifests [1]. kubernetes-manifests was suggested [2], which would lead to a kubernetes-manifests directory and --apply-kubernetes-manifests and --kubernetes-manifests-to-dir options. But there was some concern about using Kubernetes anywhere [3], so the commit goes with a config directory and options which leave the format implicit (--apply-release-image-signature and --release-image-signature-to-dir).

The truncated digest hash in the example signature filename avoids any concerns about filename length limitations on restrictive operating systems or filesystems [4]. The narrow scoping vs openshift#188 accommodates pushback from Clayton in [5] and similar out-of-band discussion. The two-release window of backwards compatibility is based on [6]. I'm extrapolating from Clayton's comment to assume that "releases" means 4.(y+2), not 4.y -> 6.0.

[1]: openshift#238 (comment)
[2]: openshift#188 (comment)
[3]: openshift#188 (comment)
[4]: openshift#188 (comment)
[5]: openshift#188 (comment)
[6]: openshift#283 (comment)
This consolidates around abstract signature retrieval, removing the split between stores and locations which we grew in 9bbf366 (verify: Refactor the verify package to have common code, 2019-12-09, openshift#279). This will make it easier to make future changes like parallel HTTP(S) signature retrieval [1].

[1]: https://bugzilla.redhat.com/show_bug.cgi?id=1840343
This is in preparation to move the verify code into library-go
for reuse in the admin tooling. The verify package has no CVO
dependencies; payload-specific logic is moved to pkg/cvo.
In addition, add a simple 64-entry cache of recent signatures
so that we can fetch the signatures in the current structure.
Written in a way to allow backport, but not explicitly targeted
at that (4.3 for in-cluster verification is still being targeted).