
Conversation

mostafaCamel

This proposal enables referencing a ConfigMap as a value source for a Param in a TaskRun or PipelineRun. The goal is to support Kubernetes-native options (ConfigMap) as a value source alongside the direct values passed to TaskRun and PipelineRun.

Note: this proposal essentially picks up where the closed (unmerged) proposal left off.

@tekton-robot tekton-robot added the size/L Denotes a PR that changes 100-499 lines, ignoring generated files. label Mar 7, 2025
@mostafaCamel mostafaCamel force-pushed the enable-valuefrom-in-param branch 2 times, most recently from e6acd27 to c34749a on March 7, 2025 18:25
@mostafaCamel
Author

For whatever it's worth, I've made the code change in my fork (I have yet to add documentation and tests)
mostafaCamel/pipeline@0a3f516

@waveywaves
Member

added the TEP as an agenda item to be discussed in the upcoming meeting

@mostafaCamel
Author

Thanks Vibhav! I am new to the project so let me know if I need to/can attend

@tekton-robot tekton-robot added the needs-rebase Indicates a PR cannot be merged because it has merge conflicts with HEAD. label Mar 13, 2025
@waveywaves
Member

@mostafaCamel you are welcome to join the working group calls! Check the github.com/tektoncd/community repo for more info 🙂

@vdemeester
Member

/assign

@waveywaves
Member

/assign

- For each of the params, if the param has a non-nil `valueFrom` field:
  - look for the configmap with name `Param.ValueFrom.ConfigMapKeyRef.Name` inside the same k8s namespace of the PipelineRun/TaskRun. PipelineRun to fail if the configmap is not found
  - inside the configmap, look for the key `Param.ValueFrom.ConfigMapKeyRef.Key`. PipelineRun to fail if the key is not found and if `Param.ValueFrom.ConfigMapKeyRef.Optional` is false (this is the default value of this bool)
  - Marshal the value obtained from the configmap into a byte array then convert it into a `ParamValue` object via the `*ParamValue.UnmarshalJSON` custom method
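
For reference, a minimal Go sketch of the lookup-and-unmarshal flow quoted above; the clientset wiring, function name, and error handling are illustrative, and the name/key/optional parameters simply mirror the proposed `ConfigMapKeyRef` fields rather than the current Tekton API:

```go
package resolution

import (
	"context"
	"encoding/json"
	"fmt"

	v1 "github.com/tektoncd/pipeline/pkg/apis/pipeline/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// resolveFromConfigMap sketches the steps above: fetch the ConfigMap, read the key,
// and turn the raw value into a ParamValue via its custom UnmarshalJSON.
func resolveFromConfigMap(ctx context.Context, kube kubernetes.Interface, namespace, cmName, key string, optional bool) (*v1.ParamValue, error) {
	cm, err := kube.CoreV1().ConfigMaps(namespace).Get(ctx, cmName, metav1.GetOptions{})
	if err != nil {
		return nil, fmt.Errorf("configmap %q not found: %w", cmName, err)
	}
	raw, ok := cm.Data[key]
	if !ok {
		if optional {
			return nil, nil // optional key missing: leave the param unresolved
		}
		return nil, fmt.Errorf("key %q not found in configmap %q", key, cmName)
	}
	// The quoted step marshals the raw value and feeds it to ParamValue's custom
	// unmarshaller. (Passing []byte(raw) directly would instead let the unmarshaller
	// infer array/object types from JSON-looking values.)
	b, err := json.Marshal(raw)
	if err != nil {
		return nil, err
	}
	pv := &v1.ParamValue{}
	if err := pv.UnmarshalJSON(b); err != nil {
		return nil, err
	}
	return pv, nil
}
```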
Author

While working on the draft tektoncd/pipeline#8642, I realized the new feature does not work for inline parameters. This is due to the following function createParamSpecFromParam, which expects a Param to have a Type during validation (which is not the case for parameters with value sources).

What I could do is change the behavior of this function so that it defaults to Type string when populating a ParamSpec from a param with a ValueSource, and tell users that params with value sources will always be treated as strings.

Another option is to not allow inline parameters if a value source is used (in order to dynamically determine the type of the Param in the first reconcile).

https://github.com/tektoncd/pipeline/blob/781c2800649e24e1c098daaad95eba4016b86def/pkg/apis/pipeline/v1/taskrun_validation.go#L200
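
For illustration, a rough sketch of the "default to string" option; the real createParamSpecFromParam has a different signature, and the ValueFrom field is the one proposed by this TEP (it does not exist in the released API, so this does not compile as-is):

```go
// Illustrative only, not the actual Tekton helper.
func paramSpecFor(p v1.Param) v1.ParamSpec {
	if p.ValueFrom != nil { // hypothetical field added by this proposal
		// The value source is resolved later by the reconciler, so treat the
		// inline parameter as a string for the purpose of webhook validation.
		return v1.ParamSpec{Name: p.Name, Type: v1.ParamTypeString}
	}
	value := p.Value
	return v1.ParamSpec{
		Name:    p.Name,
		Default: &value,
		Type:    p.Value.Type,
	}
}
```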

Author

Looking again at the code of the function createParamSpecFromParam, it seems that these freshly-created ParamSpecs are only used for validation and are not appended to the ParamSpecs array of TaskSpec/PipelineSpec. Maybe what we can do is:

  • skip the creation of ParamSpecs out of a Param if the Param still has a non-resolved value source
    • so skip the validation of this ParamSpec
  • After the first reconcile run, the Param value will be resolved
  • In subsequent validations, the ParamSpec will be created and validated.

This will not affect the CRD content as these ParamSpecs are never appended to PipelineSpec/TaskSpec anyway. However, it does mean that validation errors on these "fresh" ParamSpecs would only surface after the TaskRun/PipelineRun creation. These are the validations done:

				errs = errs.Also(ValidateParameterTypes(ctx, paramSpec))
				errs = errs.Also(ValidateParameterVariables(ctx, pt.TaskSpec.Steps, paramSpec))
				errs = errs.Also(ValidateUsageOfDeclaredParameters(ctx, pt.TaskSpec.Steps, paramSpec))
[...]
		errs = errs.Also(ValidatePipelineParameterVariables(ctx, ps.PipelineSpec.Tasks, paramSpec))
		errs = errs.Also(validatePipelineTaskParameterUsage(ps.PipelineSpec.Tasks, paramSpec))

https://github.com/tektoncd/pipeline/blob/23b3deefcea58e469e5775c4b39f6657dc80b538/pkg/apis/pipeline/v1/pipelinerun_validation.go#L218

Another thing worth noting is how this ParamSpec is being created in createParamSpecFromParam:

    value := p.Value
    pSpec := ParamSpec{
        Name:    p.Name,
        Default: &value,
        Type:    p.Value.Type,
    }

paramSpec.Default is a pointer while param.Value is not, so paramSpec.Default ends up as a non-null field holding an empty value instead of being left null/empty.

Author

During the working group meeting today, it was suggested we can support inline parameters by using the Type field. I checked, and this field is only in ParamSpec and ParamValue, not in Param:
https://github.com/tektoncd/pipeline/blob/990917dffc997071df470be66aed30a10338a5e0/pkg/apis/pipeline/v1/param_types.go#L191-L196

What I can do is to add an optional Type field inside the ValueSource struct. Then:

  • during createParamSpecFromParam in validation (for inline parameters), the ParamSpec type will be populated based on the ValueSource type (and will default to string if ValueSource.Type is not defined by the user).
  • during the value fetching from the configmap and the creation of a new ParamValue instance, the Type in this ParamValue will be validated against (i.e. fail if different from) the Type (if defined) in ValueSource.
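
A hypothetical shape for such a ValueSource with the optional Type hint; the field names are illustrative (ParamType would be the existing Tekton enum), not the final API:

```go
// ValueSource as sketched in this comment; not the final API.
type ValueSource struct {
	// Type, when set, declares how the fetched value should be interpreted
	// (string, array, object). It defaults to string for inline parameters,
	// and resolution fails if the fetched value's type differs from it.
	Type ParamType `json:"type,omitempty"`

	ConfigMapKeyRef *ConfigMapKeyRef `json:"configMapKeyRef,omitempty"`
}

// ConfigMapKeyRef identifies the ConfigMap key to read the value from.
type ConfigMapKeyRef struct {
	Name     string `json:"name"`
	Key      string `json:"key"`
	Optional bool   `json:"optional,omitempty"`
}
```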

@mostafaCamel
Author

Hi @vdemeester @waveywaves is there an ETA for the review?

@SaschaSchwarze0

fyi ... when we implemented this in Shipwright, we used a different approach to access the value eliminating the need to load it but delegating this to Kubernetes. That way, we are also doing this for Secrets, which through the way it is proposed here, certainly is not possible. On the other hand, our approach limits the usage of such parameters to only cmd, args and env.

https://github.com/shipwright-io/community/blob/main/ships/0025-strategy-parameters-enhancements.md

@mostafaCamel
Author

> fyi ... when we implemented this in Shipwright, we used a different approach to access the value eliminating the need to load it but delegating this to Kubernetes. That way, we are also doing this for Secrets, which through the way it is proposed here, certainly is not possible. On the other hand, our approach limits the usage of such parameters to only cmd, args and env.
>
> https://github.com/shipwright-io/community/blob/main/ships/0025-strategy-parameters-enhancements.md

My understanding is that Tekton tasks already support value sources (config maps and secrets) for cmd, args and env at the container/step level:
https://github.com/tektoncd/pipeline/blob/47bcb0937f2882879c1a5e11492203fe618b730c/docs/variables.md?plain=1#L92

@SaschaSchwarze0

> > fyi ... when we implemented this in Shipwright, we used a different approach to access the value eliminating the need to load it but delegating this to Kubernetes. That way, we are also doing this for Secrets, which through the way it is proposed here, certainly is not possible. On the other hand, our approach limits the usage of such parameters to only cmd, args and env.
> > https://github.com/shipwright-io/community/blob/main/ships/0025-strategy-parameters-enhancements.md
>
> My understanding is that Tekton tasks already support value sources (config maps and secrets) for cmd, args and env at the container/step level https://github.com/tektoncd/pipeline/blob/47bcb0937f2882879c1a5e11492203fe618b730c/docs/variables.md?plain=1#L92

For env, sure, that's Kube machinery. I just said that in Shipwright we used that Kube capability: parameters that a Shipwright user defined at the BuildRun level to come from a Secret or ConfigMap are read into environment variables on the TaskRun that we create for that BuildRun (with exactly what you point to), and then we reference the environment variable value in the param value using $(ENV_VAR) syntax.

Theoretically, Tekton could do the same one level deeper when it replaces the values in the PodSpec. Instead of replacing $(param.my-param) with the value from the ConfigMap key, you would replace it with $(SOME_ENV_VAR_THAT_TEKTON_ADDS) and in the env section of the container, you load the value of that env var from the ConfigMap. And that also works for secrets because you don't have to put the plain value into the Pod spec.

But it is just a hint for another option that you may or may not consider.
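
For illustration, the indirection described here maps onto the standard Kubernetes env machinery; a rough sketch of what Tekton could inject into the step container (the env var name is made up):

```go
package example

import corev1 "k8s.io/api/core/v1"

// Instead of substituting the plain value into the Pod spec, reference an env var
// that the kubelet populates from the ConfigMap (the same mechanism works for Secrets).
var paramEnvVar = corev1.EnvVar{
	Name: "TEKTON_PARAM_MY_PARAM", // hypothetical name generated by Tekton
	ValueFrom: &corev1.EnvVarSource{
		ConfigMapKeyRef: &corev1.ConfigMapKeySelector{
			LocalObjectReference: corev1.LocalObjectReference{Name: "game-demo"},
			Key:                  "KEY_IN_CONFIGMAP",
		},
	},
}

// $(params.my-param) in the step would then be rewritten to "$(TEKTON_PARAM_MY_PARAM)".
```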

@mostafaCamel
Author

Ah ok, I now understand. Thanks for the explanation!

Member

@vdemeester vdemeester left a comment

I have one important question around the "patching" of the object (and thus removing some user input); otherwise 👍🏼

Comment on lines 109 to 113
valueFrom:
configMapKeyRef:
name: game-demo
key: KEY_IN_CONFIGMAP
```
Member

nit: the indentation seems off.


- At the start of the first reconcile, the value will be obtained from the configmap (the PipelineRun will fail if the value fetching fails).

- After the first reconcile run, the k8s resource PipelineRun object will be patched with the resolved parameter value so that subsequent reconcile runs do not fetch the value from the configmap.
Member

Question on this: does this mean it would remove valueFrom and become a value field? We would lose some information if that's the case.

I would rather keep valueFrom and have a field called resolvedValue (or something) that would contain the value. This way, we do not remove the "user input" that was valueFrom. It adds one more field, but it's more explicit and more traceable (we know where the value came from).

cc @tektoncd/core-maintainers
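
To make the suggestion concrete, one hypothetical shape (illustrative only, not the actual Tekton API):

```go
// Param keeps the user-supplied valueFrom and adds a controller-written resolvedValue.
type Param struct {
	Name          string       `json:"name"`
	Value         ParamValue   `json:"value,omitempty"`
	ValueFrom     *ValueSource `json:"valueFrom,omitempty"`
	ResolvedValue *ParamValue  `json:"resolvedValue,omitempty"` // written only by the reconciler
}
```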

Author

Thanks @vdemeester. Correct. This means that we are losing this info. I thought about having a resolvedValue field but did not propose it because:

  • There is no getter method for param.Value and the field is accessed directly in tons of occurrences across the codebase. A new field resolvedValue would require the new getter method + a ton of refactoring. I don't think it is worth it, but I could be wrong. Let me know what you think.

  • Some info is already lost in the "sub-objects" of the PipelineRun k8s object (example: the params that are replaced in the PipelineTasks), so I didn't see it as a big issue.

Somewhat adjacent to this discussion: if we don't go the getter-method route, we can add a private boolean field param.valueFrom.isResolved (so that it is not (de)serialized and cannot be set by the user in the yaml) which is set to true during value resolution. Then, during the k8s object patching, this would validate that the value was set by the reconciler and not by a user applying a patched yaml.

Member

> Some info is already lost in the "sub-objects" of the PipelineRun k8s object (example: the params that are replaced in the PipelineTasks), so I didn't see it as a big issue.

Hmmm, not in the spec.… fields though, those should stay exactly as the user created them. If not, I am/would be a bit concerned by that as well…

> There is no getter method for param.Value and the field is accessed directly in tons of occurrences across the codebase. A new field resolvedValue would require the new getter method + a ton of refactoring. I don't think it is worth it, but I could be wrong. Let me know what you think.

I am really thinking only in terms of the API and what the user sees, not really taking into account implementation "problems" yet 😝. As a user, I think I want to be able to "trace" where the param value came from, thus I don't really want to lose the information about which configmap and which key it came from.

Member

@vdemeester vdemeester left a comment

Looking good, my only concern is the mutation of valueFrom at the first reconcile run and thus losing where the value came from. Aside from this, I would definitely LGTM this.

@mostafaCamel mostafaCamel force-pushed the enable-valuefrom-in-param branch from c34749a to 5b278ea on July 9, 2025 15:30
@tekton-robot tekton-robot removed the needs-rebase Indicates a PR cannot be merged because it has merge conflicts with HEAD. label Jul 9, 2025
@mostafaCamel
Author

Rebased the branch and added a new commit to address the reviews.

  • The reconciler will no longer unset param.Value after the first reconciler run

I added an explanation in the risks/mitigation section

Member

@vdemeester vdemeester left a comment

I think we are good 👼🏼 There are some implementation details to sort out, but the feature proposal looks good to me.


## Proposal

- The user will be able to define one or more `Param`s in a PipelineRun or a TaskRun with `spec.params[*].valueFrom.configMapKeyRef.name` and `spec.params[*].valueFrom.configMapKeyRef.key` instead of the usual hardcoded `spec.params[*].value`
Member

Naive question: we want to allow variable substitution in valueFrom, right?

Author

tbh I did not think about it but it should be doable

Author

Actually, thinking about it again, I am not sure what the source for variable substitution in a PipelineRun parameter would be. The only thing I can think of is TriggerTemplates, but TriggerTemplate is deprecated in v1 and the plan (so far) is to implement this new feature in v1 only, so there would be no need to support substitution.

Comment on lines 121 to 125
### Notes and Caveats

Although it is usually not a good idea to update the PipelineRun k8s resource once the PipelineRun started, the patching of the resource is needed to avoid fetching the configmap at every reconcile run (+ being exposed to failures if the configmap gets deleted).
This also ensures a clear understanding and consistency of when the values are fetched.
The fact that the k8s resource contains the resolved value allows this object to be "standalone" at auditing time, rather than relying on guessing what the configmap held at execution time.
Member

So there is one thing: we usually store the spec in status.spec, and then we rely on this in the rest of the code (isn't it?). Changing that one (status.spec.…) would be OK.

Member

I think we should not change the spec or status.spec as those can be used for provenance purposes, but we can include the resolved parameters somewhere else in the status (perhaps as part of the provenance), and the reconciler can check there if the values are available before attempting to read the ConfigMap

@tekton-robot
Contributor

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: vdemeester

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@tekton-robot tekton-robot added the approved Indicates a PR has been approved by an approver from all required OWNERS files. label Jul 17, 2025
Member

@afrittoli afrittoli left a comment

Thanks for this, I'm in favour of this feature.

My primary concern is with the proposed plan to update the spec once a ValueFrom is resolved to its value. I agree that we should cache the value, but we can do that in the status, as with any other thing that we cache (e.g. spec of remotely resolved tasks or step actions).

To reference `PipelineRun` or `TaskRun` param value from ConfigMap.

It has the following advantages:
- Dynamically determining the value of the Param at runtime of PipelineRun/TaskRun instead of "compile time" of PipelineRun/TaskRun
Member

What do you mean by "compile time" here - does it refer to the PipelineRun authoring time?
PipelineRun definitions are rarely stored in git unless they are templates to be filled in through trigger bindings, so the value of the parameters is often defined when the PipelineRun is applied to the cluster.

Still, there is a slight difference between that moment and the time when the PipelineRun is reconciled and executed, as the webhook validation is invoked when the PipelineRun is applied to the cluster.

I see below that validation will not attempt to resolve ConfigMap based parameters and apply validation to them, i.e. the ConfigMap may not exist on the cluster when the PipelineRun is applied to the cluster, only execution will validate that.

Author

Correct, bad terminology on my end.
Given that the webhook validation cannot verify the ConfigMap, the values will be resolved only during the first reconciler run.

params:
- name: username
valueFrom:
configMapKeyRef:
Member

NIT: indentation seems off

Member

Is namespace going to be supported as well, or will PipelineRuns only be able to access ConfigMaps sharing the same namespace?

Author

I did not think about it, but it should be doable (a namespace property in configMapKeyRef with a default fallback to the same namespace as the TaskRun/PipelineRun).


- At the start of the first reconcile, the value will be obtained from the configmap (the PipelineRun will fail if the value fetching fails).

- After the first reconcile run, the k8s resource PipelineRun object will be patched with the resolved parameter value so that subsequent reconcile runs do not fetch the value from the configmap.
Member

This sounds good. I would expect the resolved values to be added to the PipelineRun and not replace the original configMapKeyRef - is that the plan?

Author

Correct


Per [feature-versioning.md](https://github.com/tektoncd/pipeline/blob/5b082b1106753e093593d12152c82e1c4b0f37e5/docs/developers/feature-versioning.md), the new feature will be guarded behind a feature flag with alpha stability (turned off by default)

### CRD API Version
Per the [API compatibility policy](https://github.com/tektoncd/pipeline/blob/5b082b1106753e093593d12152c82e1c4b0f37e5/api_compatibility_policy.md#crd-api-versions), the feature will be implemented only in the v1 CRD as `[v1beta1 is] deprecated. New features will not be added, but they will receive bug fixes`
Member

This can be problematic; it can lead to unexpected behaviours.
Until we serve v1beta1, even if it's deprecated, we need to add new features there as well.
@vdemeester WDYT?



### New API Object's validation
- Changes to the [implementation](https://github.com/tektoncd/pipeline/blob/5b082b1106753e093593d12152c82e1c4b0f37e5/pkg/apis/pipeline/v1/param_types.go#L493) of the custom methods `*ParamValue.UnmarshalJSON` and `ParamValue.MarshalJSON` to allow empty `ParamValue` fields
Member

Yep, but only if a ref is provided instead

Author

The pipeline project has recently adopted go1.24, which has a new omitzero feature that we can add to the ParamValue field inside the Param struct.

So we can skip the changes to the custom marshalling methods (but we will still need to verify in the validation that either ParamValue or ValueFrom is passed).
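
For context, a sketch of how the Go 1.24 omitzero tag could be applied here; the field layout is illustrative, and ValueFrom is the proposed field, not the current API:

```go
type Param struct {
	Name string `json:"name"`
	// omitzero (new in Go 1.24's encoding/json) drops the field when it holds the
	// zero value, so an unset ParamValue disappears without custom marshalling.
	Value ParamValue `json:"value,omitzero"`
	// ValueFrom is the proposed value-source reference; validation would still
	// require that exactly one of Value or ValueFrom is set.
	ValueFrom *ValueSource `json:"valueFrom,omitempty"`
}
```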


### New API Object's validation
- Changes to the [implementation](https://github.com/tektoncd/pipeline/blob/5b082b1106753e093593d12152c82e1c4b0f37e5/pkg/apis/pipeline/v1/param_types.go#L493) of the custom methods `*ParamValue.UnmarshalJSON` and `ParamValue.MarshalJSON` to allow empty `ParamValue` fields
- There is a [roundTripPatch](https://github.com/tektoncd/pipeline/blob/5b082b1106753e093593d12152c82e1c4b0f37e5/vendor/knative.dev/pkg/webhook/resourcesemantics/defaulting/defaulting.go#L324) which occurs before the validation (i.e. the yaml is deserialized into an object, then serialized, then deserialized again, and then the validation occurs). So the change in the implementation of these custom methods is needed (there is no workaround).
Member

Which custom methods are you referring to?
If changes are needed on the Knative side, it will complicate things and we should start conversations with the Knative community about the proposed change asap.

Author

My bad. I meant the *ParamValue.UnmarshalJSON and ParamValue.MarshalJSON methods in our source code. No need to change anything on Knative.

- The webhook cannot poll for the configmap existence so this check will be done by the reconciler later (in the controller).

### Fetching values from the configmap
This value resolution will be done one time only (if param.ValueFrom is defined and param.Value is empty).
Member

As I mentioned before, I think we should not set param.Value after resolution, and we should validate that Value and ValueFrom are never both set.
Updates from the controller should go into the status and not the spec. Only the mutating webhook may update the spec before a resource is applied to the cluster.

Author

Thanks Andrea for all these comments. I will go over them in the next day or 2.

As you mentioned in your review summary, the setting of param.Value is the main sticking point for this PR. My main fear now (but this may be the only acceptable solution) is that refactoring the entire codebase may be needed to replace direct access to param.Value with a new getter method that reads either param.Value or the cached resolved value of valueFrom.

Author

Could another option be to populate param.Value during the first mutating webhook call (before the initial persistence of the PipelineRun) by adding some custom logic in the defaulting (also called mutating) admission webhook? We are calling Knative's defaulting.NewAdmissionController with some arguments.

- look for the configmap with name `Param.ValueFrom.ConfigMapKeyRef.Name` inside the same k8s namespace of the PipelineRun/TaskRun. PipelineRun to fail if the configmap is not found
- inside the configmap, look for the key `Param.ValueFrom.ConfigMapKeyRef.Key`. PipelineRun to fail if the key is not found and if `Param.ValueFrom.ConfigMapKeyRef.Optional` is false (this is the default value of this bool)
- Marshal the value obtained from the configmap into a byte array then convert it into a `ParamValue` object via the `*ParamValue.UnmarshalJSON` custom method
- Create a deep copy of the param then set `Param.Value` to the unmarshalled object from the step above
Member

Ditto, I think this should go in the status, not in the spec.

- If the check is true:
  - patch the k8s resource with the new Params array (by calling `c.PipelineClientSet.TektonV1().PipelineRuns(pr.Namespace).Patch`)

#### Relaxing the validation rules for the PipelineRun/TaskRun resource patch/updates
Member

I think this is not needed if we cache the value in the status instead

@mostafaCamel
Author

mostafaCamel commented Jul 20, 2025

The eventual getter method I would need to implement (to get either the user-provided param.Value or the resolved-then-stored-somewhere value from the value source) will be used to replace every invocation similar to this, i.e. every time the code tries to access param.Value from pr.Spec.Params.
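
A hypothetical shape for such a getter (names are illustrative; it would need access to wherever the resolved values end up being cached):

```go
// EffectiveValue returns the user-provided value, or the cached resolved value
// when the param was declared with a valueFrom source.
func (p Param) EffectiveValue(resolved map[string]ParamValue) ParamValue {
	if p.ValueFrom == nil {
		return p.Value
	}
	if v, ok := resolved[p.Name]; ok { // e.g. populated from pipelineRun.Status
		return v
	}
	return p.Value // not resolved yet
}
```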

@mostafaCamel
Author

Proposed solution

TLDR: A new boolean field pipelineRun.Status.AttemptedValueSourceResolution is set to true during the first reconcile (along with querying the configmap and populating the value). I have a PoC on a local branch: the CRD update occurs without any issue or any need to override (for the param value and for Status.AttemptedValueSourceResolution). In subsequent reconciles, nothing happens, given that the querying of the configmap and the populating are guarded behind if !pipelineRun.Status.AttemptedValueSourceResolution.

This status check is needed to query the ConfigMap only once and to avoid the `Once the PipelineRun has started, only status updates are allowed` error.
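
A rough sketch of that guard inside the PipelineRun reconciler; the field and helper names follow the PoC description above and are illustrative, not the actual implementation:

```go
// Resolve value sources only once per PipelineRun.
if !pr.Status.AttemptedValueSourceResolution { // hypothetical status field
	for i := range pr.Spec.Params {
		p := &pr.Spec.Params[i]
		if p.ValueFrom == nil {
			continue
		}
		cm, err := c.KubeClientSet.CoreV1().ConfigMaps(pr.Namespace).Get(ctx, p.ValueFrom.ConfigMapKeyRef.Name, metav1.GetOptions{})
		if err != nil {
			return fmt.Errorf("resolving param %q: %w", p.Name, err)
		}
		// ... read cm.Data[p.ValueFrom.ConfigMapKeyRef.Key], unmarshal it into a
		// ParamValue, and record the result for this param ...
		_ = cm
	}
	// Status-only update, so it does not trip the
	// "Once the PipelineRun has started, only status updates are allowed" check.
	pr.Status.AttemptedValueSourceResolution = true
}
```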

Why is the mutating webhook probably not a good solution?

The webhook is unaware of all the non-tekton-pipelines namespaces in the cluster, so it will be unable to read the user-defined configmaps (in the default namespace or in other non-tekton-pipelines namespaces). Otherwise, it would have been an attractive solution, given that the webhook calls our SetDefaults, in which we can leverage apis.IsInCreate(ctx) (similar to here), which is true only at the first run of the webhook on the PipelineRun object.

`clientset.CoreV1().ConfigMaps(tr.Namespace).Get(ctx, "myhardcodedconfigmapname", metav1.GetOptions{})` will fail with `configmaps "myhardcodedconfigmapname" is forbidden: User "system:serviceaccount:tekton-pipelines:tekton-pipelines-webhook" cannot get resource "configmaps" in API group "" in the namespace "default"`.
This is due to the fact that the webhook is scoped to its own namespace only (as opposed to the controller, which is set up to know about all the namespaces in the cluster).

This proposal is to be able to reference `ConfigMap` as a value source for a `Param` in `TaskRun` or `PipelineRun`.
This is to support Kubernetes-native options (ConfigMap) as a value source along with the direct value passed to `TaskRun` and `PipelineRun`.

Note: this proposal basically picks up where the closed (unmerged) [proposal](tektoncd#868) left off.

Signed-off-by: Mostafa Abdelwahab <[email protected]>
Signed-off-by: Mostafa Abdelwahab <[email protected]>
@mostafaCamel mostafaCamel force-pushed the enable-valuefrom-in-param branch from 5b278ea to 121dab7 on July 25, 2025 03:06
Signed-off-by: Mostafa Abdelwahab <[email protected]>
@mostafaCamel
Author

Had a discussion with the maintainers in the Tekton working group meeting. I will look into how much work would be needed to refactor the codebase / implement a getter method for pipelineRun.Spec.Params. This would open the possibility of fetching the configMap value into a field under pipelineRun.Status rather than editing pipelineRun.Spec.Params during the reconcile. Then, the getter method can read either pipelineRun.Spec.Params or the new field under pipelineRun.Status.

@mostafaCamel
Author

mostafaCamel commented Aug 6, 2025

Apologies for the delay. I took a quick look at the feasibility of implementing a getter method for PipelineRun.PipelineRunSpec.Params. I think the mutating webhook can be the way to go (if there is no security concern about opening up access to the webhook for all the namespaces; see the comment above).

TLDR: I don't think the getter method is feasible, and I think we should discuss the idea of opening up access for the mutating webhook to all namespaces in the cluster and not just the tekton-pipelines namespace.

  • Reasons I think implementing the getter method is not feasible/worth the hassle:

    • There is no way to force future developers to use the getter method instead of accessing the PipelineRun.PipelineRunSpec.Params field directly. If we switch Params to be a private field (hence forcing the use of the getter method), this field will no longer be (un)marshalled.
      • There seems to be an obtuse way to customize the MarshalJSON method to achieve marshallability of private fields, but I am not sure this change is worth the final goal
      • There is a non-standard Go library to (un)marshal private fields, but it does not seem broadly adopted in the Go community
    • Changing the method signature of many helper functions
      • Example: func (ps *PipelineRunSpec) validatePipelineRunParameters(ctx context.Context) (which fetches the params via ps.Params) is called by func (ps *PipelineRunSpec) Validate. If we want to access the actual param, we need to make these functions aware of PipelineRun.Status
      • Theoretically, we can make this change as PipelineRunSpec is not a k8s CRD, so there is no "hard" requirement for the method signature func (ps *PipelineRunSpec) Validate(ctx context.Context), but again I don't think the final goal is worth breaking the consistency of these method signatures
  • In terms of the number of uses of pipelineRunSpec.Params, I was pleasantly surprised there are not that many of them. See the screenshot for the references of v1.PipelineRunSpec.Params (the vscode extension is probably using this analysis package)

@tekton-robot tekton-robot added the needs-rebase Indicates a PR cannot be merged because it has merge conflicts with HEAD. label Aug 12, 2025
@tekton-robot
Contributor

@mostafaCamel: PR needs rebase.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
