This may or may not work, depending on which packages have tests and whether they contain glog.
Individual repos may have to filter out certain packages from testing. For example, in csi-test the cmd/csi-sanity directory contains a special test that depends on additional parameters that set the CSI driver to test against.
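The filtering can be sketched as a small wrapper around `go list` (the helper name and the grep patterns are illustrative assumptions, not the actual build.make code; each repo would adjust the patterns to its own layout):

```shell
# filter_test_packages: list Go packages, dropping those that must not be
# unit-tested (e.g. cmd/csi-sanity, which needs extra parameters selecting
# the CSI driver, and test/e2e, which needs a running cluster).
# Patterns here are illustrative assumptions.
filter_test_packages () {
    go list ./... | grep -v -e '/cmd/csi-sanity$' -e '/test/e2e$'
}

# Usage (commented out so the sketch runs without a Go checkout):
# go test $(filter_test_packages)
```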
After merging into external-attacher, the next Travis CI run did not push the "canary" image because the check for "canary" only covered the case where "-canary" is used as a suffix (https://travis-ci.org/kubernetes-csi/external-attacher/builds/484095261).
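A sketch of the corrected check (the function name is an assumption; the real logic lives in build.make): both the plain `canary` tag used on the master branch and tags with a `-canary` suffix used on release branches must be recognized:

```shell
# is_canary: succeed for the plain "canary" tag (master branch) as well as
# for tags ending in "-canary" (release branches). Name is an assumption.
is_canary () {
    case "$1" in
        canary|*-canary) return 0;;
        *) return 1;;
    esac
}
```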
The introduction for each individual test looked like an actual command:

    test-subtree ./release-tools/verify-subtree.sh release-tools
    Directory 'release-tools' contains non-upstream changes: ...

It's better to make it look like a shell comment and to increase its visibility with a longer prefix:

    ### test-subtree: ./release-tools/verify-subtree.sh release-tools
    ...
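The announcement step can be sketched as a tiny helper (the helper name is an assumption, not the actual function in the Makefile):

```shell
# announce_test: print a shell-comment style marker before each test so it
# stands out in the log and cannot be mistaken for a command being run.
# (Helper name is an assumption.)
announce_test () {
    name="$1"; shift
    echo "### ${name}: $*"
}
```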
build.make: fix pushing of "canary" image from master branch
test enhancements
If for whatever reason a repo already had a `release-tools` directory before doing a clean import of it with `git subtree`, the check used to fail because it found those old commits. This can be fixed by telling `git log` to stop when the directory disappears from the repo. There has to be a commit which removes the old content, because otherwise `git subtree add` doesn't work. Fixes: kubernetes-csi/external-resizer#21
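One way to implement the relaxed check can be sketched as follows (an illustrative sketch, not necessarily what verify-subtree.sh does: it assumes the most recent commit that deleted files under the directory is the cleanup commit preceding `git subtree add`, and ignores everything at or before it):

```shell
# log_since_import: print commits touching DIR, but stop at the most recent
# commit that deleted DIR's content - i.e. ignore anything from before the
# clean 'git subtree add' import. Helper name is an assumption.
log_since_import () {
    dir="$1"
    removed=$(git log --diff-filter=D --format='%H' -n 1 -- "$dir")
    if [ -n "$removed" ]; then
        git log --oneline "$removed..HEAD" -- "$dir"
    else
        git log --oneline -- "$dir"
    fi
}
```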
verify-subtree.sh: relax check and ignore old content
This is an unmodified copy of kubernetes/hack/verify-shellcheck.sh revision d5a3db003916b1d33b503ccd2e4897e094d8af77.
This runs "dep check" to verify that the vendor directory is up-to-date and meets expectations (i.e., it was generated with dep >= 0.5.0).
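The verification step can be sketched as a small wrapper ("dep check" exists in dep >= 0.5.0; the wrapper name and the error message are assumptions):

```shell
# check_vendor: fail with a helpful message when 'dep check' reports that
# the vendor directory does not match Gopkg.lock. Name is an assumption.
check_vendor () {
    if ! dep check; then
        echo "vendor directory out of sync; run 'dep ensure' and commit the result" >&2
        return 1
    fi
}
```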
check vendor directory
In repos that have a test/e2e, that test suite should be run separately because it depends on a running cluster.
build.make: avoid unit-testing E2E test suite
These are the modifications that were necessary to call this outside of Kubernetes. The support for excluding files from checking gets removed to simplify the script. It shouldn't be needed, because linting can be enabled after fixing whatever scripts might fail the check.
By default this only tests the scripts inside the "release-tools" directory, which is useful when making experimental changes to them in a component that uses csi-release-tools. But a component can also enable checking for other directories.
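The configurable selection of directories can be sketched like this (the variable and helper names are assumptions, not necessarily what verify-shellcheck.sh uses):

```shell
# collect_shell_scripts: gather *.sh files from the given directories.
# By default only release-tools would be passed in; components can add
# their own directories via a configuration variable (name is an assumption).
collect_shell_scripts () {
    for dir in "$@"; do
        find "$dir" -name '*.sh'
    done
}

# Usage (commented out so the sketch does not require shellcheck):
# collect_shell_scripts ${SHELLCHECK_DIRS:-release-tools} | xargs -r shellcheck
```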
This enables testing of other repos and of this repo itself inside Prow. Currently supported are unit testing ("make test") and E2E testing (either via a local test suite or the Kubernetes E2E test suite applied to the hostpath driver example deployment).

The script passes shellcheck, and Prow is used to verify that for future PRs.
verify shellcheck
The travis.yml is now the only place where the Go version for the component itself needs to be configured.
Using the same (recent) Go version for all Kubernetes versions can break for older versions when there are incompatible changes in Go. To avoid that, we use exactly the minimum version of Go required for each Kubernetes version. This is based on the assumption that this combination was tested successfully. When building the E2E suite from Kubernetes (the default) we do the same, but still allow building it from elsewhere. We allow the Go version to be empty when it doesn't matter.
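The per-version pinning can be sketched as a simple mapping (the helper name and the exact Go versions below are illustrative assumptions; the real mapping must match what each Kubernetes release was built and tested with):

```shell
# go_version_for_kubernetes: return the Go version to build with for a
# given Kubernetes branch. An empty result means the version doesn't
# matter and whatever is installed can be used. Versions are illustrative.
go_version_for_kubernetes () {
    case "$1" in
        release-1.13) echo "1.11.13";;
        release-1.14) echo "1.12.4";;
        *) echo "";;
    esac
}
```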
While switching back and forth between release-1.13 and release-1.14 locally, there was the problem that the local kind image kept using the wrong kubelet binary despite rebuilding from source. The problem went away after cleaning the Bazel cache. Exact root cause unknown, but perhaps using unique tags and properly cleaning the repo helps. If not, then the unique tags at least will show what the problem is.
Instead of always using the latest E2E tests for all Kubernetes versions, the plan now is to use the tests that match the Kubernetes version. However, for 1.13 we have to make an exception because the suite for that version did not support the necessary --storage.testdriver parameter.
The temporary fork was merged, we can use the upstream repo again.
Prow testing
prow.sh: remove AllAlpha=all, part II
build.make: Replace 'return' with 'exit' to fix shellcheck error
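The distinction can be illustrated with a minimal script ('build_step' is a stand-in, not the real build.make recipe): at the top level of a script, 'return' is only valid inside a function or a sourced file, which is what shellcheck flags; top-level failure paths must use 'exit' instead.

```shell
#!/bin/sh
# 'return' at script top level is an error ("can only 'return' from a
# function or sourced script"), so the failure path below uses 'exit'.
build_step () {
    true   # stand-in for the real build command
}
if ! build_step; then
    exit 1   # was 'return 1', which shellcheck rightly rejects here
fi
echo ok
```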
This pulls in kubernetes-csi/csi-driver-host-path#37, which turned out to be necessary for pre-submit testing of the livenessprobe.
prow.sh: update csi-driver-host-path
Whether a component supports sanity testing depends on the component. For example, csi-driver-host-path enables it because it makes sense there (and only there). Letting the prow.sh script decide whether it actually runs simplifies the job definitions in test-infra.
prow.sh: skip sanity testing if component doesn't support it
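The gating can be sketched as follows (`CSI_PROW_TESTS` and `tests_enabled` are assumptions about prow.sh's configuration mechanism): a component that does not list "sanity" among its tests simply skips that phase instead of failing the job.

```shell
# tests_enabled: succeed when the given test kind appears in the
# space-separated CSI_PROW_TESTS list (names are assumptions).
tests_enabled () {
    case " ${CSI_PROW_TESTS} " in
        *" $1 "*) return 0;;
        *) return 1;;
    esac
}

# run_sanity: skip gracefully when the component doesn't support sanity
# testing, keeping the test-infra job definitions simple.
run_sanity () {
    if ! tests_enabled sanity; then
        echo "sanity testing not enabled for this component, skipping"
        return 0
    fi
    echo "would run csi-sanity here"   # placeholder for the real invocation
}
```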
…cy-openshift-4.12-csi-node-driver-registrar Updating csi-node-driver-registrar images to be consistent with ART