keps/20200901-standard-crd.md (outdated):

> 3. and a mapping to translate the keys
>
> ```yaml
> apiVersion: kes.io/v1alpha1
> ```
Not sure if this is a good idea; maybe we should mount this under `k8s.io`, like `external-secrets.k8s.io`.
`external-secrets.k8s.io` sounds nice 😏 unless someone gets unhappy with the use of `*.k8s.io`.
I stumbled upon this Slack thread in sig-architecture, which points to the api-review process. From my understanding we really shouldn't use `k8s.io`. If we plan to move the effort towards a Kubernetes SIG, we should plan to use `x-k8s.io`.
I feel it is a disservice to generalize "many projects in the wild" and then almost line-for-line copy the itscontained/secret-manager CRD with no attribution to the source.

I think it's fair to add a list of links to the existing work that was used for inspiration.
Some random thoughts 🙃
> * Kubernetes (see #422)
> * noop (see #476)
>
> ### Frontends
@Flydiverny I like the idea of a universal injector in addition to the secret sync, to allow injection from multiple sources into runtime env or files without needing (potentially) 3+ separate upstream injectors with varying installation and usage.
* added `templateFrom`
* add user story to provide app config
* use data as object, not array
* add related projects
I added all projects directly involved with the subject. I don't know if it's just me, but this sounds very polemic to me. If you take a close look at the commit history you will see that the origins of the spec lie in KES. From there I tried to improve it by eliminating the tedious elements and repetitions, and I added a bunch of items from my wishlist. Eventually I went through the itscontained spec and aligned the spec in regards to the naming.

I pushed a bunch of changes that resulted from the discussion so far. I see a bunch of existing topics and would like to propose the following:
For the ongoing discussions (let me try to summarize this) we have:
@moolen We are all collaborating here; OSS is built on a foundation of improving existing ideas. I just think it all goes down the toilet when the chain of idea attribution is broken (which is why we put KES and cert-manager front and center as attribution in our readme). No harm no foul 🤓. Injection does not fit into the MVP but was stated just so it was kept in sight. Agree with backlog.
Hey! So, to add to the discussion: we also prefer one backend per ES, basically for the same reasons already stated here. If we backlog 'merging/patching existing secrets', what does that mean for the CRD? Do we foresee it somehow? Right now at Container Solutions we are moving our ES solution forward in the direction of the CRD draft, at least in the obviously good things like secret refreshing and multiple key/values per secret (things that were on our backlog anyway). Even if we decide to move to a solution built from scratch later on, I think it is good, as I'll have some people with momentum to move to the new effort. Anyway, we want to help wherever needed!
I like the latest updates 👍
> ```yaml
> status:
>   lastSync: 2020-09-01T18:19:17.263Z # ISO 8601 date string
>   status: success # or failure
> ```
A `message` field with the stringified error would be great for diagnostics.
Agree, any reason not to go with a condition array block here for readiness?
Sure, conditions would fit in here nicely!
What about:
```yaml
status:
  # represents the current phase of secret sync:
  # * Pending   | ES created, controller did not (yet) sync the ES or other dependencies are missing (e.g. secret store or configmap template)
  # * Syncing   | ES is being actively synced according to spec
  # * Failing   | Secret can not be synced, this might require user intervention
  # * Failed    | ES can not be synced right now and will not be able to sync
  # * Completed | ES was synced successfully (one-time use only)
  phase: Syncing
  lastSyncTime: "2020-09-23T16:27:53Z"
  failureReason: "..."
  failureMessage: "..."
  conditions:
  - type: InSync
    status: "True" # False if last sync was not successful
    reason: "SecretSynced"
    message: "Secret was synced"
    lastTransitionTime: "2019-08-12T12:33:02Z"
    lastHeartbeatTime: "2020-09-23T16:27:53Z"
```
Looks good, a few comments:

* `lastHeartbeatTime` doesn't need to be used here; that is generally only for conditions which need explicit heartbeats.
* `failureReason` and `failureMessage` already fit within the condition; those would better fit with events in this case rather than explicitly in the base status.
Regarding `lastHeartbeatTime`: I think this is the case here. The `InSync` condition should be updated with every sync. We should consider renaming this to `lastSyncTime`.

`failure*` as attributes are duplicates, true. I'll remove them.
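To make that concrete, here is a minimal Go sketch of the status types after these changes; the field set follows the YAML above, but the exact layout is an assumption, not something settled in this thread:

```go
package v1alpha1

// Sketch only: failureReason/failureMessage move out of the base status
// (into the condition or Events, as suggested above), and the per-condition
// heartbeat is renamed to lastSyncTime.
type ExternalSecretStatus struct {
	// Pending | Syncing | Failing | Failed | Completed
	Phase        string                    `json:"phase,omitempty"`
	LastSyncTime string                    `json:"lastSyncTime,omitempty"` // RFC 3339 timestamp
	Conditions   []ExternalSecretCondition `json:"conditions,omitempty"`
}

type ExternalSecretCondition struct {
	Type               string `json:"type"`   // e.g. InSync
	Status             string `json:"status"` // "True", "False" or "Unknown"
	Reason             string `json:"reason,omitempty"`
	Message            string `json:"message,omitempty"`
	LastTransitionTime string `json:"lastTransitionTime,omitempty"`
	LastSyncTime       string `json:"lastSyncTime,omitempty"` // renamed from lastHeartbeatTime
}
```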
In #413 there's a PR to recreate the target resource/secret if it's deleted. Is this something we should cover here? I think it makes sense and adds a behaviour similar to deployments and pods.
@Flydiverny I added a behavior section for the
Yepp, that's one of the underlying issues people have. Sometimes people also want to just trigger a refresh manually, which is especially interesting when polling is disabled. I guess in the end it boils down to not being able to have "live" updates due to API rate limits/costs or implementation limits.
First of all I'd like to thank all of you for the feedback, comments and effort you brought into this. It's good to see that there are a lot of people who want to drive this forward. Thank you! I think this CRD spec is good enough to be implemented in a PoC. I'd like to leave this PR open for now; I don't think there's a lot of value in having this CRD Spec in the repo 😅. Let's continue the discussion about merging efforts in #47.
Hey peeps, I'm implementing the CRDs to bootstrap the project and got a question regarding the `SecretStore` auth spec.

The Container Solutions project uses:

```yaml
...
kind: SecretStore
...
  auth:
    secretRef:
      name: externalsecret-operator-credentials-asm
```

Where, inside the referenced `Secret`:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: credentials-asm
  labels:
    type: asm
type: Opaque
stringData:
  credentials.json: |-
    {
      "accessKeyID": "",
      "secretAccessKey": "",
      "sessionToken": ""
    }
```

Each Store backend unmarshals the JSON data (with the hardcoded `credentials.json` key).

The itscontained project uses a specific struct for each Store backend, with the required fields, like:

```go
type AWSAuth struct {
	AccessKeyID     *smmeta.SecretKeySelector `json:"accessKeyID,omitempty"`
	SecretAccessKey *smmeta.SecretKeySelector `json:"secretAccessKey,omitempty"`
	Role            *smmeta.SecretKeySelector `json:"role,omitempty"`
}
```

This is possible because there's no `type` field. I'd like to have inputs on what you think is the best approach. I believe the itscontained approach could be better.

Any thoughts?
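To make the trade-off concrete, a rough Go sketch of the generic decode path described above; the package, struct and function names here are hypothetical, only the fields come from the `credentials.json` example:

```go
package provider

import (
	"encoding/json"
	"fmt"
)

// genericCredentials mirrors the opaque credentials.json blob above.
// Every backend decodes it on its own, so the API server cannot
// validate any of it up front.
type genericCredentials struct {
	AccessKeyID     string `json:"accessKeyID"`
	SecretAccessKey string `json:"secretAccessKey"`
	SessionToken    string `json:"sessionToken"`
}

func decodeCredentials(raw []byte) (*genericCredentials, error) {
	var c genericCredentials
	if err := json.Unmarshal(raw, &c); err != nil {
		return nil, fmt.Errorf("decoding credentials.json: %w", err)
	}
	return &c, nil
}
```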
Wouldn't it be possible, even with the type field, to have a type-specific auth struct?
With the defined CRD spec:

```yaml
...
spec:
  store:
    type: vault
    parameters: # provider specific k/v pairs
      server: "https://vault.example.com"
      path: path/on/vault/store
    auth: {} # provider specific k/v pairs
```

We need to have the following:
The main problem becomes the lack of validation on the CRD level and the need to implement logic on each backend to decode the Secret reference and find the specific key/value pairs for its authentication. If we implement it like:

```yaml
...
spec:
  aws:
    region: ...
    authSecretRef:
      accessKeyID:
        key: access-key
        name: aws-key
      secretAccessKey:
        key: secret-key
        name: aws-key
```

We have:
Even though we would need to add new types inside the CRD each time a new Store backend is implemented, I believe the overall architecture/code base would be more explicit and clean than requiring each backend to decode an interface when authenticating. What do you think? @Flydiverny @moolen @knelasevero
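For illustration, a minimal Go sketch of the typed per-provider approach matching the YAML above; all names are illustrative, not a final API:

```go
package v1alpha1

// SecretStoreSpec keeps one optional, strongly typed block per provider;
// exactly one should be set. This is what enables validation at the CRD
// level instead of a generic parameters/auth map.
type SecretStoreSpec struct {
	AWS *AWSStore `json:"aws,omitempty"`
	// Further providers (vault, gcp, ...) would follow the same pattern.
}

type AWSStore struct {
	Region        string  `json:"region"`
	AuthSecretRef AWSAuth `json:"authSecretRef"`
}

type AWSAuth struct {
	AccessKeyID     SecretKeySelector `json:"accessKeyID"`
	SecretAccessKey SecretKeySelector `json:"secretAccessKey"`
}

// SecretKeySelector points at a key inside a Kubernetes Secret.
type SecretKeySelector struct {
	Name string `json:"name"`
	Key  string `json:"key"`
}
```

The trade-off is exactly the one named above: new Go types per backend, in exchange for schema validation done by the API server instead of each backend.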
I would like to move this CRD Spec document to the (yet to be created) v2 repository. This way it should be easier to propose changes.
@moolen yeah, let's do it! We can work on the
We could have a separate repo for just the specs as well, if we want?
Yeah, 💯!
> ```yaml
> # the amount of time before the values will be read again from the store
> # may be set to zero to fetch and create it once
> refreshInterval: "1h"
> ```
Should we have a default value for `refreshInterval`? This can be a sensitive field, since it might make users reach their provider API limits.
If not, we need to mark it as required (and might give some advice on how to estimate it).
Default reconcile loops happen every 10h (if we don't change anything about the generated stuff), right?
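For what it's worth, a small Go sketch of how a controller might resolve the field; the 1h fallback is purely a placeholder to make the open question concrete, and nothing here was decided in this thread:

```go
package controller

import (
	"fmt"
	"time"
)

// defaultRefreshInterval is a hypothetical fallback; no default has been
// decided in this thread.
const defaultRefreshInterval = 1 * time.Hour

// resolveRefreshInterval turns the spec string into a requeue duration.
// An empty field falls back to the default; a zero duration means
// "fetch and create once", matching the spec comment above.
func resolveRefreshInterval(raw string) (time.Duration, error) {
	if raw == "" {
		return defaultRefreshInterval, nil
	}
	d, err := time.ParseDuration(raw)
	if err != nil {
		return 0, fmt.Errorf("invalid refreshInterval %q: %w", raw, err)
	}
	return d, nil
}
```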
via external-secrets/kubernetes-external-secrets#477 taken from: external-secrets/kubernetes-external-secrets@03f82b1 Signed-off-by: Moritz Johner <[email protected]>
I moved the CRD Spec to a dedicated repository. I fixed the
Let's move discussion over to the new repo.

Lemme close this here, see you on the other side 👋
Request for comments

The idea is to use this draft PR for discussing only the CRD Spec mentioned in #47.

❗ Technology discussions are off-topic.

CHANGELOG:

* renamed `backend` to `store`
* removed `backend` from `ExternalSecret`
* renamed `externalSecretClassName` to `className`
* moved `dataFrom` into spec

TODO:

* #474
* w/ service accounts)?