Conversation

@smarterclayton Contributor

This is in preparation for moving the verify code into library-go
for reuse in the admin tooling. verify has no dependencies;
payload-specific logic is moved to pkg/cvo.

In addition, add a simple 64-entry cache of recent signatures
so that we can fetch the signatures in the current structure.

Written in a way that allows backporting, but not explicitly targeted
at that (in-cluster verification is still being targeted at 4.3).

The verify package should be usable from multiple clients. Refactor
it to separate dependencies on the payload in preparation for a move
to an external library.
Keep a small cache mapping digests to verified signatures to avoid
temporary errors in accessing the content store. Add a Signatures()
method that allows us to access these signatures. This will allow
us to record the state of the cache into the cluster at intervals.
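
A minimal sketch of the shape this might take, assuming the cache is a digest-keyed map guarded by a mutex (the type and field names follow the discussion below, but this is not the PR's exact code):

package verify

import "sync"

const maxSignatureCacheSize = 64 // the "64-entry cache" from the description

// ReleaseVerifier sketch: only the caching pieces are shown.
type ReleaseVerifier struct {
    lock           sync.Mutex
    signatureCache map[string][][]byte // release digest -> verified signatures (assumed layout)
}

// cacheVerification stores the signatures that verified a digest, evicting
// arbitrary entries when the cache is full (see the eviction thread below).
func (v *ReleaseVerifier) cacheVerification(releaseDigest string, signatures [][]byte) {
    v.lock.Lock()
    defer v.lock.Unlock()
    if v.signatureCache == nil {
        v.signatureCache = make(map[string][][]byte)
    }
    for k := range v.signatureCache {
        if len(v.signatureCache) < maxSignatureCacheSize {
            break
        }
        delete(v.signatureCache, k)
    }
    v.signatureCache[releaseDigest] = signatures
}

// Signatures returns a copy of the cache so a caller can persist it in bulk.
func (v *ReleaseVerifier) Signatures() map[string][][]byte {
    v.lock.Lock()
    defer v.lock.Unlock()
    out := make(map[string][][]byte, len(v.signatureCache))
    for digest, sigs := range v.signatureCache {
        out[digest] = append([][]byte(nil), sigs...)
    }
    return out
}
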
@openshift-ci-robot openshift-ci-robot added approved Indicates a PR has been approved by an approver from all required OWNERS files. size/XL Denotes a PR that changes 500-999 lines, ignoring generated files. labels Dec 10, 2019
Comment on lines 75 to 78
// DefaultClient uses the default http.Client for accessing signatures.
var DefaultClient = simpleClientBuilder{}

// simpleClientBuilder implements the ClientBuilder interface and may be used for testing.
Contributor

why add DefaultClient to package api?

Contributor Author

So that test packages in other repos can use it. I had to use it in two or three packages that wouldn't be in verify (and might not be in library-go). It's a standard implementation.
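
For context, a hedged sketch of the pattern being discussed (the two comments are quoted from the diff; the single-method interface is an assumption about its shape):

package verify

import "net/http"

// ClientBuilder returns an http.Client for accessing signatures (assumed method set).
type ClientBuilder interface {
    HTTPClient() (*http.Client, error)
}

// simpleClientBuilder implements the ClientBuilder interface and may be used for testing.
type simpleClientBuilder struct{}

func (simpleClientBuilder) HTTPClient() (*http.Client, error) {
    return http.DefaultClient, nil
}

// DefaultClient uses the default http.Client for accessing signatures.
var DefaultClient = simpleClientBuilder{}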

name string
update *payload.Update
want bool
wantErr bool
Member

Neither want nor wantErr is very specific about what came back. Can we make wantErr a regexp or string matching the expected error message? And make want say something about the set of loaded keys?
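
A sketch of what the suggested table could look like (loadReleaseVerifier and wantKeys are hypothetical names for illustration; only name and update are quoted from the diff):

package verify

import (
    "reflect"
    "regexp"
    "testing"

    "github.com/openshift/cluster-version-operator/pkg/payload"
)

func TestLoadKeys(t *testing.T) {
    tests := []struct {
        name     string
        update   *payload.Update
        wantKeys []string // names of the keyrings expected to load (hypothetical)
        wantErr  string   // regexp the returned error must match; empty means no error
    }{
        // cases elided
    }
    for _, tt := range tests {
        t.Run(tt.name, func(t *testing.T) {
            keys, err := loadReleaseVerifier(tt.update) // hypothetical function under test
            if tt.wantErr != "" {
                if err == nil || !regexp.MustCompile(tt.wantErr).MatchString(err.Error()) {
                    t.Fatalf("got error %v, want match for %q", err, tt.wantErr)
                }
                return
            }
            if err != nil {
                t.Fatal(err)
            }
            if !reflect.DeepEqual(keys, tt.wantKeys) {
                t.Fatalf("got keys %v, want %v", keys, tt.wantKeys)
            }
        })
    }
}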

return fmt.Errorf("unable to locate a valid signature for one or more sources")
}

v.cacheVerification(releaseDigest, signedWith)
Member

Why cache signedWith? Seems like this is a boolean cache: "yes, we verified that digest". Possibly interesting metadata would include the signatures' and signing keys' expiration dates, so we know when to invalidate the cache.

Contributor Author

Because we're going to start saving these in a ConfigMap on the cluster in the future, so that if the signing servers go down we can still upgrade to a version we previously saw in the available versions.

Member

Because we're going to start saving these in a ConfigMap on the cluster in the future...

I still don't see why that means "cache the sigs in memory". Can we leave this off now and grow it back when we add the cache-to-ConfigMap functionality?


// Signatures returns a copy of any cached signatures that have been validated
// so far. It may return no signatures.
func (v *ReleaseVerifier) Signatures() map[string][][]byte {
Member

Why would you want all of these? I think Verify is a sufficient API. If this is "which sigs will oc have to pack into a ConfigMap for disconnected folks?", I'd rather have:

 func (v *ReleaseVerifier) Signatures(digest string) [][]byte

or some such.

Contributor Author

I want to get all of them in bulk at once for saving.

Member

I want to get all of them in bulk at once for saving.

How about configuring a helper that gets called on signatures when they are added to the cache? And another that is called when they are evicted? Then your mirror case or "protect from the sig store going down" case could be addressed by attaching a collector to the added-to-cache callback. And you'd have a way to evict ConfigMaps in the "protect from the sig store going down" case when we dropped them from the in-memory cache.

Contributor Author

A controller feels super heavyweight. Why do we need something more complex than "get" and "write"?

Member

Why do we need something more complex than "get" and "write"?

Maybe those are fine. But I like decoupled logic, so the verifier can say "I verify sigs, and if you want to bolt a cache or whatever on, here's where you hand over your callbacks". Then the code that handles verification is all nice and compact, the code that handles caching can live somewhere else all nice and compact, and the package that wants caching verification can glue the two together to get what it wants.
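
A sketch of the decoupling being suggested, with hypothetical hook names (the PR does not add this):

package verify

// CacheHooks is a hypothetical seam between verification and persistence.
type CacheHooks struct {
    Added   func(digest string, signatures [][]byte) // called when a verification is cached
    Evicted func(digest string)                      // called when an entry is dropped
}

// A persistence package could then glue itself on without the verifier knowing:
var hooks = CacheHooks{
    Added:   func(digest string, sigs [][]byte) { /* queue a ConfigMap write */ },
    Evicted: func(digest string) { /* queue removal from the ConfigMap */ },
}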

klog.V(4).Infof("signature %q is not valid: %v", path, err)
continue
}
delete(remaining, k)
Member

There's some contention between efficiently verifying a digest, where you only need one sig for each keyring, and shipping sigs around, where you may want to keep walking to find the "best" valid sig for each keyring. Best might be "all valid sigs" or "latest expiration date" or "strongest cipher" or whatever. Do we need a mode toggle to switch between efficient verification and interesting-sig collection?

Contributor Author

If we get into that model, we could do that. But I think that's putting the cart before the horse, and if we need it, the cache function could potentially manage that.
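
For concreteness, the mode toggle floated above might look like this (purely hypothetical; the PR deliberately does not add it):

package verify

// VerificationMode is a hypothetical toggle between fast verification and
// exhaustive signature collection.
type VerificationMode int

const (
    // VerifyFirst stops at the first valid signature per keyring.
    VerifyFirst VerificationMode = iota
    // CollectAll keeps walking to gather every valid signature, e.g. for mirroring.
    CollectAll
)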

if len(v.signatureCache) < maxSignatureCacheSize {
break
}
delete(v.signatureCache, k)
Member @wking Dec 11, 2019

This is pretty basic, and could lead to flapping between two recent digests while we hold a bunch of ancient digests in the cache. Is there some LRU thing we can pull off a shelf?

Member

I guess what we want is time-aware least-recently-used (TLRU) eviction, with the time component being min(sig expiration, key expiration).

Contributor Author

Map walk is random. If you get flapping you should go play the lottery and retire on it.

I don't think we need anything more than random.

Member

From a 2013 Go comment:

Don't rely on the ordering for ranges over maps. Relying on it to be random is just as
bad as relying on it to be sorted, or repeatable, or whatever. Making it random is an
attempt to prevent people from relying on the order, not an invitation to rely on
the order for randomness.

Is that still current? The spec has:

The iteration order over maps is not specified and is not guaranteed to be the same from one iteration to the next.

without committing to a particular quality of randomness.
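
For reference, a shelf-grade LRU along the lines asked about above, built on the standard library's container/list (a sketch, not the PR's code; the PR sticks with arbitrary map-order eviction):

package verify

import "container/list"

type lruCache struct {
    max     int
    order   *list.List               // front is most recently used
    entries map[string]*list.Element // digest -> element whose Value is the digest
    sigs    map[string][][]byte
}

func newLRUCache(max int) *lruCache {
    return &lruCache{
        max:     max,
        order:   list.New(),
        entries: make(map[string]*list.Element),
        sigs:    make(map[string][][]byte),
    }
}

func (c *lruCache) add(digest string, signatures [][]byte) {
    if e, ok := c.entries[digest]; ok {
        c.order.MoveToFront(e)
    } else {
        c.entries[digest] = c.order.PushFront(digest)
    }
    c.sigs[digest] = signatures
    if c.order.Len() > c.max {
        oldest := c.order.Back()
        c.order.Remove(oldest)
        evicted := oldest.Value.(string)
        delete(c.entries, evicted)
        delete(c.sigs, evicted)
    }
}
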

Used by the cvo unit tests.
Enable the verifier to look up signatures by digest. A caller may
create a copy of a ReleaseVerifier via WithStores(...) to include
one or more sources for those signatures. Errors on retrieval of
signature content are logged.

Rename the current `store` field to `location` to better distinguish
the two types.
As a cluster verifies signatures for release digests, flush those
to an on-cluster ConfigMap, ensuring that even if the remote endpoint
is down the cluster can still verify those signatures.
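
A hedged sketch of what that flush could look like; the key layout (digest with ":" replaced and a numeric suffix) and the base64 encoding are assumptions for illustration, not the PR's confirmed format:

package verify

import (
    "encoding/base64"
    "fmt"
    "strings"
)

// configMapDataFromSignatures flattens the Signatures() map into ConfigMap data.
func configMapDataFromSignatures(signatures map[string][][]byte) map[string]string {
    data := make(map[string]string)
    for digest, sigs := range signatures {
        name := strings.Replace(digest, ":", "-", 1) // e.g. "sha256:abc" -> "sha256-abc"
        for i, sig := range sigs {
            data[fmt.Sprintf("%s-%d", name, i+1)] = base64.StdEncoding.EncodeToString(sig)
        }
    }
    return data
}
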
@openshift-ci-robot openshift-ci-robot added size/XXL Denotes a PR that changes 1000+ lines, ignoring generated files. and removed size/XL Denotes a PR that changes 500-999 lines, ignoring generated files. labels Dec 11, 2019
@smarterclayton Contributor Author

I pushed the two commits that write back objects to the cluster, which gives some better context for the existence of those methods.

@smarterclayton Contributor Author

/retest

@smarterclayton Contributor Author

Integrated some feedback and responded to other comments; PTAL (including the newer commits, so you can see why I added those methods).


stores []*url.URL
verifiers map[string]openpgp.EntityList

locations []*url.URL
Member

Why keep the URI slice? Can't we just create SignatureStores for these in NewReleaseVerifier? Or drop locations from NewReleaseVerifier and have NewFromConfigMapData (or whatever we end up calling it) create the stores for each URI and call WithStores.

Member @wking Dec 18, 2019

I've pushed up a verify-store-api branch (currently at 0cc3e41) to do this, polish the APIs a bit more (e.g. using a callback so sig-store consumers can cancel sig iteration), and decompose things into more, smaller subpackages. I still need to go through and rework the persistence logic to use my preferred add/evict callbacks, but the API changes already in my branch are fairly orthogonal to that, so they should be ready for review now.
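
The callback-driven iteration mentioned here might look roughly like this (an assumed shape for illustration; see the branch and the later #294 for the real API):

package store

import "context"

// Store is a sketch of a signature source whose consumers can cancel iteration
// by returning done=true (or an error) from the callback.
type Store interface {
    Signatures(ctx context.Context, name string, digest string,
        fn func(signature []byte, errIn error) (done bool, err error)) error
}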

Member

I've pushed up a verify-store-api branch...

Spun this out into the parallel #294.

Contributor

Picking up this change, or not, does not appear to be hugely impactful, especially if we consider that much of this code may be OBE post-4.4 if we deploy a local cincy. Since this is holding up other 4.4 changes, we need to make a decision. I would expect that this new code will be owned by the OTA team. Can we therefore move the final approval to that team and @LalatenduMohanty?

jottofar added a commit to jottofar/enhancements that referenced this pull request Mar 11, 2020
NOTE: Since @wking is on vacation, I created this PR to continue the
work here.

Stubbing out how we're going to feed restricted-network clusters the release-signature ConfigMaps that the cluster-version operator reads since openshift/cluster-version-operator#279.

CC @wking, @LalatenduMohanty, @smarterclayton, @deads2k
wking added a commit to wking/openshift-enhancements that referenced this pull request Mar 15, 2020
Describing how we're going to feed restricted-network clusters the
release-signature ConfigMaps that the cluster-version operator reads
since openshift/cluster-version-operator#279.
wking added a commit to wking/openshift-enhancements that referenced this pull request Mar 17, 2020
Describing how we're going to feed restricted-network clusters the
release-signature ConfigMaps that the cluster-version operator reads
since openshift/cluster-version-operator#279.

Personally, I prefer 'manifests' naming here, but there was some
concern that the Kubernetes/OpenShift manifests would be confused with
the container image manifests in v2/openshift/release/manifests [1,2].
So I've used very explicit kubernetes-manifests,
--apply-kubernetes-manifests, and --kubernetes-manifests-to-dir
instead of my preferred manifests, --apply-manifests, and
--manifests-to-dir.

[1]: openshift#238 (comment)
[2]: openshift#188 (comment)
wking added a commit to wking/openshift-enhancements that referenced this pull request Mar 20, 2020
Describing how we're going to feed restricted-network clusters the
release-signature ConfigMaps that the cluster-version operator reads
since openshift/cluster-version-operator#279.

Personally, I prefer 'manifests' naming here, so:

* manifests directory
* --apply-manifests and --manifests-to-dir options.

But there was some concern that the Kubernetes/OpenShift manifests
would be confused with the container image manifests in
v2/openshift/release/manifests [1].  kubernetes-manifests was
suggested [2], which would lead to a kubernetes-manifests directory and
--apply-kubernetes-manifests and --kubernetes-manifests-to-dir
options.

But there was some concern about using Kubernetes anywhere [3], so the
commit goes with a config directory and --apply-config and
--config-to-dir options.

The truncated digest hash in the example signature filename avoids any
concerns about filename length limitations on restrictive operating
systems or filesystems [4].

[1]: openshift#238 (comment)
[2]: openshift#188 (comment)
[3]: openshift#188 (comment)
[4]: openshift#188 (comment)
wking added a commit to wking/openshift-enhancements that referenced this pull request Apr 8, 2020
Describing how we're going to feed restricted-network clusters the
release-signature ConfigMaps that the cluster-version operator reads
since openshift/cluster-version-operator#279.

Personally, I prefer 'manifests' naming here, so:

* manifests directory
* --apply-manifests and --manifests-to-dir options.

But there was some concern that the Kubernetes/OpenShift manifests
would be confused with the container image manifests in
v2/openshift/release/manifests [1].  kubernetes-manifests was
suggested [2], which would lead to a kubernetes-manifests directory and
--apply-kubernetes-manifests and --kubernetes-manifests-to-dir
options.

But there was some concern about using Kubernetes anywhere [3], so the
commit goes with a config directory and options which leave the format
implicit (--apply-release-image-signature and
--release-image-signature-to-dir).

The truncated digest hash in the example signature filename avoids any
concerns about filename length limitations on restrictive operating
systems or filesystems [4].

The narrow scoping vs openshift#188 accommodates pushback from Clayton in [5]
and similar out of band discussion.

[1]: openshift#238 (comment)
[2]: openshift#188 (comment)
[3]: openshift#188 (comment)
[4]: openshift#188 (comment)
[5]: openshift#188 (comment)
wking added a commit to wking/openshift-enhancements that referenced this pull request Apr 9, 2020
Describing how we're going to feed restricted-network clusters the
release-signature ConfigMaps that the cluster-version operator reads
since openshift/cluster-version-operator#279.

Personally, I prefer 'manifests' naming here, so:

* manifests directory
* --apply-manifests and --manifests-to-dir options.

But there was some concern that the Kubernetes/OpenShift manifests
would be confused with the container image manifests in
v2/openshift/release/manifests [1].  kubernetes-manifests was
suggested [2], which would lead to a kubernetes-manifests directory and
--apply-kubernetes-manifests and --kubernetes-manifests-to-dir
options.

But there was some concern about using Kubernetes anywhere [3], so the
commit goes with a config directory and options which leave the format
implicit (--apply-release-image-signature and
--release-image-signature-to-dir).

The truncated digest hash in the example signature filename avoids any
concerns about filename length limitations on restrictive operating
systems or filesystems [4].

The narrow scoping vs openshift#188 accommodates pushback from Clayton in [5]
and similar out of band discussion.

The two-release window of backwards compatibility is based on [6].
I'm extrapolating from Clayton's comment to assume that "releases"
means 4.(y+2), not 4.y -> 6.0.

[1]: openshift#238 (comment)
[2]: openshift#188 (comment)
[3]: openshift#188 (comment)
[4]: openshift#188 (comment)
[5]: openshift#188 (comment)
[6]: openshift#283 (comment)
wking added a commit to wking/cluster-version-operator that referenced this pull request Jun 16, 2020
This consolidates around abstract signature retrieval, removing the
split between stores and locations which we grew in 9bbf366
(verify: Refactor the verify package to have common code, 2019-12-09, openshift#279).
This will make it easier to make future changes like parallel HTTP(S)
signature retrieval [1].

[1]: https://bugzilla.redhat.com/show_bug.cgi?id=1840343
openshift-cherrypick-robot pushed a commit to openshift-cherrypick-robot/cluster-version-operator that referenced this pull request Dec 10, 2020
This consolidates around abstract signature retrieval, removing the
split between stores and locations which we grew in 9bbf366
(verify: Refactor the verify package to have common code, 2019-12-09, openshift#279).
This will make it easier to make future changes like parallel HTTP(S)
signature retrieval [1].

[1]: https://bugzilla.redhat.com/show_bug.cgi?id=1840343