kube-trigger is a workflow-based trigger that combines event listeners, filters, and action triggers in a programmable way with CUE.
Although there is kube in the name, it is not limited to Kubernetes and can do much more than that. It has an extensible architecture that makes it fairly easy to extend its capabilities. Docs on how to extend Source, Filter, and Action are not yet available. All users are welcome to contribute their own extensions.
A Source is what listens to events (an event source). For example, a resource-watcher Source can watch Kubernetes resources across multiple clusters. Once a Kubernetes resource (e.g., a Deployment) changes, it raises an event that is passed to a Filter for further processing.
```yaml
source:
  type: resource-watcher
  properties:
    apiVersion: apps/v1
    kind: Deployment
    events:
      - update
```
A Filter filters the events raised by a Source, i.e., it drops events that do not satisfy certain criteria. For example, users can check the status of the Deployment to decide whether to filter out its events. Every event that passes the Filter then triggers an Action.
```yaml
filter: context.data.status.readyReplicas == context.data.status.replicas
```
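Filters are plain CUE expressions, so multiple conditions can be combined with the usual CUE operators. A sketch (the field names follow the resource-watcher event shape shown above; the exact fields available depend on the watched resource):

```cue
// Keep only events from the default namespace whose
// Deployment has all replicas ready.
filter: context.data.metadata.namespace == "default" &&
	context.data.status.readyReplicas == context.data.status.replicas
```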
An Action is a job that runs when an event happens and does what the user specifies. For example, the user can send notifications, log events, execute a command, or patch some Kubernetes objects when an event happens.
For example, you can use the built-in patch-resource Action to patch resources in Kubernetes like:
```yaml
action:
  type: patch-resource
  properties:
    resource:
      apiVersion: v1
      kind: ConfigMap
      name: my-cm
      namespace: default
    patch:
      type: merge
      data:
        data:
          foo: bar
```
Under the hood, an Action uses CUE to render a template and executes it with your parameters. For example, the action above can be written in CUE as:
```cue
import (
	"vela/kube"
)

patchObject: kube.#Patch & {
	$params: {
		resource: {
			apiVersion: parameter.resource.apiVersion
			kind:       parameter.resource.kind
			metadata: {
				name:      parameter.resource.name
				namespace: parameter.resource.namespace
			}
		}
		patch: parameter.patch
	}
}

// users' parameters to be passed to the action
parameter: {
	// +usage=The resource to patch
	resource: {
		// +usage=The api version of the resource
		apiVersion: string
		// +usage=The kind of the resource
		kind: string
		// +usage=The name of the resource
		name: string
		// +usage=The namespace of the resource
		namespace: *"default" | string
	}
	// +usage=The patch to be applied to the resource with kubernetes patch
	patch: *{
		// +usage=The type of patch being provided
		type: "merge"
		data: {...}
	} | {
		// +usage=The type of patch being provided
		type: "json"
		data: [{...}]
	} | {
		// +usage=The type of patch being provided
		type: "strategic"
		data: {...}
	}
}
```
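Because patch is a disjunction, the json and strategic variants can be selected from the YAML config as well. Below is a sketch of the json variant, assuming it accepts standard JSON Patch (RFC 6902) operations, which the `data: [{...}]` schema suggests:

```yaml
action:
  type: patch-resource
  properties:
    resource:
      apiVersion: v1
      kind: ConfigMap
      name: my-cm
      namespace: default
    patch:
      type: json
      data:
        - op: replace
          path: /data/foo
          value: baz
```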
To quickly grasp the concepts of kube-trigger, let's use a real use case as an example (see #4418). TL;DR: the user wants the Application to be automatically updated whenever the ConfigMaps referenced by ref-objects are updated.
To accomplish this, we will:
- use a resource-watcher Source to listen to update events of ConfigMaps
- filter the events to only keep the ConfigMaps that we are interested in
- trigger a bump-application-revision Action to update the Application
And the trigger config file will look like:

```yaml
triggers:
  - source:
      type: resource-watcher
      properties:
        # We are interested in ConfigMap events.
        apiVersion: "v1"
        kind: ConfigMap
        namespace: default
        # Only watch update events.
        events:
          - update
    filter: |
      context: data: metadata: name: =~"this-will-trigger-update-.*"
    action:
      # Bump Application Revision to update the Application.
      type: bump-application-revision
      properties:
        namespace: default
        # Select Applications to bump using labels.
        nameSelector:
          fromLabel: "watch-this"
```
See the examples directory for more instructions.
You can run kube-trigger in two modes: standalone and in-cluster.
A config file tells kube-trigger what Source, Filter, and Action to use and how they are configured.
Whether you run kube-trigger standalone or in-cluster, the config format is the same, so it is beneficial to know the format first. We will use YAML as an example (JSON and CUE are also supported).
```yaml
# A trigger is a group of Source, Filter, and Action.
# You can add multiple triggers.
triggers:
  - source:
      type: <your-source-type>
      properties:
        # ... properties
    filter: <your-filter>
    action:
      type: <your-action-type>
      properties:
        # ... properties
```
When running in standalone mode, you will need to provide a config file to the kube-trigger binary. kube-trigger accepts cue, yaml, and json config files. You can also specify a directory to load all the supported files inside that directory. The -c/--config CLI flag and the CONFIG environment variable can be used to specify the config file.
An example config file looks like this:
```yaml
# A trigger is a group of Source, Filters, and Actions.
# You can add multiple triggers.
triggers:
  - source:
      type: resource-watcher
      properties:
        # We are interested in ConfigMap events.
        apiVersion: "v1"
        kind: ConfigMap
        namespace: default
        # Only watch update events.
        events:
          - update
    # Filter the events above.
    filter: |
      context: data: metadata: name: =~"this-will-trigger-update-.*"
    action:
      # Bump Application Revision to update the Application.
      type: bump-application-revision
      properties:
        namespace: default
        # Select Applications to bump using labels.
        nameSelector:
          fromLabel: "watch-this"
```
Let's assume your config file is config.yaml. To run kube-trigger:

```shell
./kube-trigger -c=config.yaml
# or, equivalently:
CONFIG=config.yaml ./kube-trigger
```
We have one CRD called TriggerService. A TriggerService creates a kube-trigger instance (similar to running ./kube-trigger in-cluster, but with the config specified in its spec instead of a config file).
```yaml
# You can find this file in config/samples/standard_v1alpha1_triggerservice.yaml
apiVersion: standard.oam.dev/v1alpha1
kind: TriggerService
metadata:
  name: kubetrigger-sample-config
  namespace: default
spec:
  selector:
    instance: kubetrigger-sample
  triggers:
    - source:
        type: resource-watcher
        properties:
          apiVersion: "v1"
          kind: ConfigMap
          namespace: default
          events:
            - update
      filter: |
        context: data: metadata: name: =~"this-will-trigger-update-.*"
      action:
        type: bump-application-revision
        properties:
          namespace: default
          # Select Applications to bump using labels.
          nameSelector:
            fromLabel: "watch-this"
```
In addition to config files, you can also tweak advanced configurations. These are internal settings that fine-tune your kube-trigger instance. In most cases, you probably don't need to fiddle with them.
Frequently-used values: debug, info, error.
Default: info

| CLI | ENV | KubeTrigger CRD |
| --- | --- | --- |
| --log-level | LOG_LEVEL | TODO |
Re-run the Action if it fails.
Default: false

| CLI | ENV | KubeTrigger CRD |
| --- | --- | --- |
| --action-retry | ACTION_RETRY | TODO |
Max retry count if an Action fails; valid only when action retrying is enabled.
Default: 5

| CLI | ENV | KubeTrigger CRD |
| --- | --- | --- |
| --max-retry | MAX_RETRY | .spec.workerConfig.maxRetry |
Initial delay, in seconds, before retrying an Action; subsequent delays grow exponentially. Valid only when action retrying is enabled.
Default: 2

| CLI | ENV | KubeTrigger CRD |
| --- | --- | --- |
| --retry-delay | RETRY_DELAY | .spec.workerConfig.retryDelay |
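For intuition, here is a sketch of the retry schedule, assuming the delay doubles on each attempt (the exact growth factor is an implementation detail):

```python
def retry_delays(first_delay, max_retry):
    """Delays (in seconds) before each retry, doubling each time.

    Assumes a growth factor of 2; the actual factor used by
    kube-trigger may differ.
    """
    return [first_delay * 2 ** i for i in range(max_retry)]

# With the defaults (--retry-delay=2, --max-retry=5):
print(retry_delays(2, 5))  # [2, 4, 8, 16, 32]
```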
Long-term QPS limit per Action worker; this is shared between all watchers.
Default: 2

| CLI | ENV | KubeTrigger CRD |
| --- | --- | --- |
| --per-worker-qps | PER_WORKER_QPS | .spec.workerConfig.perWorkerQPS |
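The QPS limit can be thought of as a token bucket: tokens refill at the configured rate and each action consumes one. This is an illustrative sketch, not kube-trigger's actual implementation:

```python
import time

class TokenBucket:
    """Token-bucket limiter: refills `qps` tokens per second, up to `burst`."""

    def __init__(self, qps, burst):
        self.qps = qps
        self.burst = burst
        self.tokens = float(burst)  # start full
        self.last = time.monotonic()

    def allow(self):
        # Refill tokens proportionally to the time elapsed, capped at burst.
        now = time.monotonic()
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.qps)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

bucket = TokenBucket(qps=2, burst=2)
# Two actions pass immediately; the third must wait for a refill.
print([bucket.allow() for _ in range(3)])  # [True, True, False]
```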
Queue size for pending actions; this is shared between all watchers.
Default: 50

| CLI | ENV | KubeTrigger CRD |
| --- | --- | --- |
| --queue-size | QUEUE_SIZE | .spec.workerConfig.queueSize |
Timeout for running each action, in seconds.
Default: 10

| CLI | ENV | KubeTrigger CRD |
| --- | --- | --- |
| --timeout | TIMEOUT | .spec.workerConfig.timeout |
Number of workers for running actions; this is shared between all watchers.
Default: 4

| CLI | ENV | KubeTrigger CRD |
| --- | --- | --- |
| --workers | WORKERS | .spec.workerConfig.workerCount |
Cache size for filters and actions.
Default: 100

| CLI | ENV | KubeTrigger CRD |
| --- | --- | --- |
| --registry-size | REGISTRY_SIZE | .spec.registrySize |
- Basic build infrastructure
- Complete a basic proof-of-concept sample
- linters, license checker
- GitHub Actions
- Rate-limited worker
- Make the configuration as CRD, launch new process/pod for new watcher
- Notification for more than one app: selector composed of namespace, labels, and name
- Refine README, quick starts
- Refactor CRD according to #2
Code enhancements
- Add missing unit tests
- Add missing integration tests
User experience
- Refine health status of CRs
- Make it run as Addon, build component definition, and examples
- Kubernetes dynamic admission control with validation webhook
- Auto-generate usage docs of Sources, Filters, and Actions from CUE markers
- Show available Sources, Filters, and Actions in cli
Webhook support
- Contribution Guide
- New Action: webhook
- New Source: webhook
Observability
- New Action: execute VelaQL(CUE and K8s operations)
- New Source: cron
- New Action: notifications(email, dingtalk, slack, telegram)
- New Action: log (loki, clickhouse)
- Allow users to set custom RBAC for each TriggerInstance
- New Action: workflow-run
- New Action: execute-command
- New Action: metric (prometheus)
- Refine controller logic
- Remove the cache informer; use no cache, but list-watch events with a unique queue