kubernetes-scanner watches the configured resource types on a Kubernetes API server and sends their configurations to Snyk.
It is a data-collection component within a larger system; you only need to install it if you have been directed here from other documentation.
There is a Helm chart within this repo at helm/kubernetes-scanner, hosted through GitHub Pages at https://snyk.github.io/kubernetes-scanner.
First, you need to create a Kubernetes secret that contains the API token for the service account.
The service account must have one of the following roles:
- Org Admin
- Group Admin
- Custom Role with "Publish Kubernetes Resources" permission
If an organization-level service account is used, it must be associated with the organizationID configured to correlate the Kubernetes data. Group-level service accounts can correlate data to any organization under the group.
kubectl create secret generic <secret name> \
--from-literal=snykServiceAccountToken=<your-service-account-token>
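To confirm that the secret exists and contains the snykServiceAccountToken key, without printing the token itself, you could run:
kubectl describe secret <secret name>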
To install the Helm chart with all default values:
helm repo add kubernetes-scanner https://snyk.github.io/kubernetes-scanner
# Update repository if it already exists
helm repo update
helm install <release-name> \
--set "secretName=<secret name>" \
--set "config.clusterName=<your human friendly cluster name>" \
--set "config.routes[0].organizationID=<your Snyk organization ID>" \
--set "config.routes[0].clusterScopedResources=true" \
--set "config.routes[0].namespaces[0]=*" \
kubernetes-scanner/kubernetes-scanner
The actor running Helm must have permission to create the resources templated by this chart.
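Equivalently, the same settings can be kept in a values file instead of repeated --set flags. A minimal sketch mirroring the flags above (my-values.yaml is an arbitrary file name):
# my-values.yaml
secretName: <secret name>
config:
  clusterName: <your human friendly cluster name>
  routes:
    - organizationID: <your Snyk organization ID>
      clusterScopedResources: true
      namespaces: ["*"]
which you would then pass to Helm with:
helm install <release-name> -f my-values.yaml kubernetes-scanner/kubernetes-scanner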
Or using chart dependencies:
# Chart.yaml
dependencies:
- name: kubernetes-scanner
version: v0.22.0 # use the latest available version
repository: https://snyk.github.io/kubernetes-scanner
alias: kubernetes-scanner
Release versions can be found on GitHub.
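When kubernetes-scanner is pulled in as a dependency, its values are nested under the dependency alias in the parent chart's values file. A sketch, assuming the alias shown above:
# parent chart values.yaml
kubernetes-scanner:
  secretName: <secret name>
  config:
    clusterName: <your human friendly cluster name>
    routes:
      - organizationID: <your Snyk organization ID>
        clusterScopedResources: true
        namespaces: ["*"]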
For further information on how to install and configure kubernetes-scanner, please familiarize yourself with the commented configuration in values.yaml.
There are some mandatory fields, each marked with "MANDATORY" in a comment.
See monitoring.md.
While Kubernetes v1.Secrets are designed to separate secret fields into their own type, referenced from other types' fields such as container environment variable secretKeyRefs, some users may wish to prevent certain fields on certain types from being sent to Snyk, while still scanning other fields on that type.
The scanner supports this sort of attribute removal as an optional array of paths alongside the group-version-kind scan config:
- apiGroups: ["apps"]
versions: ["*"]
resources:
- replicasets
- daemonsets
- deployments
- statefulsets
attributeRemovals:
- "spec.template.spec.containers.env"
- "metadata.managedFields"
These paths are dot-separated addresses for nested values, in the same format as arguments to kubectl explain. For example, the path "spec.containers.env" will cause Kubernetes Pod container environment variables to be removed. "containers" is an array, and each element of this array is redacted in this way.
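To make this concrete, here is a hypothetical Pod fragment before and after the path "spec.containers.env" is removed; every element of the containers array loses its env field:
# before attribute removal
spec:
  containers:
    - name: app
      image: example/app:1.0
      env:
        - name: DB_PASSWORD
          valueFrom:
            secretKeyRef:
              name: db-credentials
              key: password
# after removing "spec.containers.env"
spec:
  containers:
    - name: app
      image: example/app:1.0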
See values.yaml for examples in Helm values.
Some users may want the scanner to send its HTTP requests via a proxy, for example to ensure that it only communicates with an allowlist of hosts. The scanner supports the standard HTTPS_PROXY environment variable for this purpose. Helm users can set it via the extraEnv value in their values file, for example:
extraEnv:
- name: HTTPS_PROXY
value: "a-proxy:3128"
You will need to allowlist both Snyk's API server and your Kubernetes API server. Snyk HTTP requests are sent to https://$HOST/hidden/orgs/$ORG_ID/kubernetes_resources?version=$API_VERSION. $HOST will be api.snyk.io (unless otherwise communicated). You might use this to form the basis of a proxy ACL rule. Example for a squid proxy that denies all other traffic:
acl allowlist dstdomain api.snyk.io <don't forget to add your Kubernetes domain>
http_access allow allowlist
http_access deny all
http_port 3128
$ORG_ID is one of your configured org IDs, from the "routes" section of your configuration.
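For instance, if multiple routes are configured, each route's organizationID may appear as $ORG_ID. A sketch with placeholder IDs and namespaces, using the same route fields shown in the Helm values above:
config:
  routes:
    - organizationID: <org ID for cluster-wide and team-a data>
      clusterScopedResources: true
      namespaces: ["team-a"]
    - organizationID: <org ID for team-b data>
      clusterScopedResources: false
      namespaces: ["team-b"]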
The value of the $API_VERSION query parameter should not be depended on; it may change in subsequent scanner versions.
You only need to read this section if you are interested in contributing to this project.
kubernetes-scanner is built on top of controller-runtime and uses controller-runtime's envtest to run tests against a real API server. To run the tests, you will need to have the kube-apiserver and etcd binaries installed in /usr/local/kubebuilder/bin, or set the KUBEBUILDER_ASSETS environment variable to the directory where your binaries are located.
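For example, if the binaries live in a non-default location, you could point the tests at them like this (the path is only an illustration):
export KUBEBUILDER_ASSETS=$HOME/kubebuilder/bin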
To help with this, you can install setup-envtest through the go install command:
go install sigs.k8s.io/controller-runtime/tools/setup-envtest@latest
After the installation is completed, the following command will download the kube-apiserver and etcd binaries and populate the KUBEBUILDER_ASSETS environment variable:
eval "$(setup-envtest use -p env)"
For more information on setup-envtest, we refer to their documentation.
If you want to run smoke tests locally, you will need:
- Tilt
- a working Kubernetes environment, compatible with Tilt
- an organization that is allowed to send Kubernetes resources
- a service account token, attached to that organization, that has access to send Kubernetes resources.
Note that the smoke tests run against https://api.dev.snyk.io by default, which means both the organization and service account token need to be created in that environment!
With these prerequisites, the smoke tests can be run like this:
env SNYK_API=https://api.dev.snyk.io \
TEST_SNYK_SERVICE_ACCOUNT_TOKEN=<token> \
TEST_ORGANIZATION_ID=<org ID> \
TEST_SMOKE=true \
go test -v .
For an overview of the architecture of kubernetes-scanner, please see the architecture document.