Continuously test Kubernetes against Go tip #1399
Hi,

I'm trying to upgrade the Go version for ARM in PR kubernetes/kubernetes#38926.

It would be very good to test the current master of golang/go against the k8s codebase. We could have a CI job that fetches the latest HEAD of Go, builds a slightly modified version of the cross-image locally, compiles k8s, spins up a lightweight cluster (maybe just with hack/local-up-cluster if we're lazy), and runs the conformance test suite on it — see the sketch below. This way we would catch breaking changes in Go very early on, rather than when we're trying to upgrade from 1.x to 1.(x+1).

Is this something we can do soon?

@ixdy @spxtr @rmmh @jessfraz
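A minimal sketch of what such a job's main script could look like, assuming Go tip is built from source with make.bash and the cluster comes from hack/local-up-cluster.sh; all paths, flags, and the conformance invocation are illustrative assumptions, not an existing job definition:

```sh
#!/usr/bin/env bash
# Sketch only: build Go from tip, build Kubernetes with it, bring up a
# local cluster, and run the conformance suite. Paths and flags here are
# illustrative assumptions, not an existing job configuration.
set -euo pipefail

# 1. Fetch and build the latest HEAD of Go (make.bash needs a bootstrap
#    toolchain, e.g. via GOROOT_BOOTSTRAP).
git clone --depth=1 https://go.googlesource.com/go /usr/local/go-tip
(cd /usr/local/go-tip/src && ./make.bash)
export GOROOT=/usr/local/go-tip
export PATH="${GOROOT}/bin:${PATH}"

# 2. Compile Kubernetes with the freshly built toolchain.
cd "${GOPATH}/src/k8s.io/kubernetes"
make

# 3. Spin up a lightweight cluster; a real job would poll for readiness
#    instead of sleeping.
hack/local-up-cluster.sh -O &
export KUBECONFIG=/var/run/kubernetes/admin.kubeconfig
sleep 60

# 4. Run the conformance subset of the e2e suite against it.
go run hack/e2e.go -- --test --test_args='--ginkgo.focus=\[Conformance\]'
```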
At what phases of the release process are we likely to accept a new Go version? Having a CI job track Go tip might not be worth the flakiness of testing bleeding-edge features; would tracking its release tags (1.8-beta1, 1.8-beta2, etc.) make more sense?
@rmmh Against HEAD. This becomes so much more critical when issues like kubernetes/kubernetes#45216 occur. We could just have a special […] and then just build bins with […]
Yes, please, against HEAD. The point is to help find Go bugs the same day they're introduced, rather than eight months later when it's too late to do anything about them.
Which Kubernetes tests do we run against Go tip? We certainly can't duplicate all of our testing.
@ixdy: These labels do not exist in this repository: […] In response to this: […]

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. I understand the commands that are listed here.
oh, also, this should probably be an issue in the kubernetes/kubernetes repo, not test-infra.
@ixdy Why kubernetes/kubernetes? I think the scalability team can be the owner of this, and we can run kubemark against Go tip as a non-blocking test suite.
It's a fine line, but this issue is more about testing Kubernetes than about the test-infra supporting Kubernetes. It might get more visibility in the kubernetes repo. @bradfitz are there any binary artifacts of the toolchain from Go's CI that we could use, rather than continuously rebuilding the Go toolchain ourselves? We have Bazel 98% working, and it'd be really handy if I could just point […]
There are binary artifacts we keep around from our CI, but we don't yet(?) have a supported/stable interface for making them available to others, despite their URLs currently being public:

```sh
curl --silent https://storage.googleapis.com/go-build-snap/go/linux-amd64/2d429f01bd917c42e66e1991eab9c2e33d813d16.tar.gz | tar -zt
```

But running […]
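For illustration, a job could consume one of those snapshots roughly like this; a minimal sketch, assuming the (unstable) go-build-snap URL scheme above, with GO_COMMIT as a hypothetical placeholder for a golang/go commit SHA:

```sh
#!/usr/bin/env bash
# Sketch only: install a Go toolchain snapshot from the Go project's CI
# artifacts. The go-build-snap bucket is not a stable interface, and
# GO_COMMIT is a placeholder, not something an existing job provides.
set -euo pipefail

GO_COMMIT="${GO_COMMIT:?set to a golang/go commit SHA}"
SNAP_URL="https://storage.googleapis.com/go-build-snap/go/linux-amd64/${GO_COMMIT}.tar.gz"

mkdir -p /usr/local/go-tip
# Depending on the tarball layout, --strip-components may be needed here.
curl --silent --fail "${SNAP_URL}" | tar -xz -C /usr/local/go-tip

export GOROOT=/usr/local/go-tip
export PATH="${GOROOT}/bin:${PATH}"
go version
```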
go1.9beta1 is out, let's start testing...
Anyone interested in creating a job for this?
@ixdy any bazel magic we could use here?
Issues go stale after 90d of inactivity. Prevent issues from auto-closing with an /lifecycle frozen comment. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
@luxas rules_go has some logic for selecting the toolchain and registering it, so we could make this job start by patching WORKSPACE to use a toolchain from head: https://github.com/bazelbuild/rules_go/blob/master/go/toolchains.rst
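As a rough sketch of that patching step, assuming a rules_go version that exposes go_local_sdk from go/deps.bzl (the rule is documented in toolchains.rst) and a Go-tip toolchain already built at /usr/local/go-tip; the fragment and file names are illustrative:

```sh
#!/usr/bin/env bash
# Sketch only: generate a WORKSPACE fragment that points rules_go at a
# locally built Go-tip SDK. go_local_sdk must be declared before the
# existing go_register_toolchains() call, so a real job would splice this
# fragment into WORKSPACE at the right spot rather than blindly appending.
cat > go_tip_sdk.fragment <<'EOF'
load("@io_bazel_rules_go//go:deps.bzl", "go_local_sdk")

go_local_sdk(
    name = "go_sdk",
    path = "/usr/local/go-tip",
)
EOF
```

With the SDK registered that way, the regular bazel build/test targets would then run against the tip toolchain.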
/remove-lifecycle stale
Issues go stale after 90d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/remove-lifecycle stale
Issues go stale after 90d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/remove-lifecycle stale
I would say that if it's supposed to be useful, only a copy of kubemark-gce-scale makes sense.
IMO it would be better if we could somehow leverage our large-scale (optional) presubmits, e.g. trigger them against a test PR changing the Go version in k8s. Adding new dimensions to scalability CI testing (unless it's very much needed) may not be too scalable. An alternative approach I'd suggest is to set up kubemark CI testing in a project owned by golang (this would be more convenient and scalable too, IMO). Wdyt?
I don't fully agree. It does add an additional dimension, but that should be implicitly owned by the golang team, so it should be a regular continuous-testing job (though run only once per week or so).
I don't think it should be managed by the golang team; unit testing maybe, but kubemark / end-to-end testing etc. require a lot more resources and complexity to run. We can report issues back to them, though. I'll hold off on scalability for now while there is still discussion, but sig-testing/sig-release can own something like the conformance suite against golang tip in the meantime to get things started.
Issues go stale after 90d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
Stale issues rot after 30d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
Rotten issues close after 30d of inactivity. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
@fejta-bot: Closing this issue. In response to this: […]

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.