Silent error getting kube config in-cluster #331
Weird that you are not getting error information from the call. It should return an `Error` (and clearly it's crashing). What produces the `[server]` and `[k8s EVENT]` output? Dependency-wise: if you are not using rustls, you can take out the default features.
Hi @clux, thanks for getting back to me so quickly. Sorry, the output is from Tilt, which I'm using to deploy and pipe back the logs. But
Yeah, it's the lack of logs that's really getting me. I haven't created an explicit service account for it yet; will try giving it a service account and see if that was throwing it for a loop somehow. Ta for the pointer on default-features :)
Nope, explicit serviceaccount with explicit
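For anyone following along, a minimal sketch of what a ServiceAccount plus RBAC setup for this deployment might look like — the rule set here is hypothetical and would need to match whatever the controller actually touches:

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: auth-controller
  namespace: ponglehub
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: auth-controller
  namespace: ponglehub
rules:
# Hypothetical rules; adjust to the resources the controller watches
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: auth-controller
  namespace: ponglehub
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: auth-controller
subjects:
- kind: ServiceAccount
  name: auth-controller
  namespace: ponglehub
```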
Are you associating the deployment with the service account?
Yup:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: auth-controller
    app.kubernetes.io/managed-by: tilt
  name: auth-controller
  namespace: ponglehub
spec:
  progressDeadlineSeconds: 600
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app: auth-controller
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: auth-controller
        app.kubernetes.io/managed-by: tilt
        tilt.dev/pod-template-hash: ad41af2b3e7eaa93efd5
    spec:
      containers:
      - image: auth-controller:tilt-20f9932817b95f85
        imagePullPolicy: IfNotPresent
        name: server
        resources:
          limits:
            cpu: 100m
            memory: 32Mi
          requests:
            cpu: 100m
            memory: 32Mi
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      serviceAccount: auth-controller
      serviceAccountName: auth-controller
      terminationGracePeriodSeconds: 30
```
And k9s is saying that the service account tokens are mounted:
Ok, that should be fine then 🤔 I would try to use
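A quick way to take the Rust client out of the equation is to call the API server directly from inside the pod with the mounted token — a sketch, assuming the standard in-cluster mount path and service DNS name:

```shell
# Standard in-cluster service account mount
SA=/var/run/secrets/kubernetes.io/serviceaccount

# Confirm the token, CA cert and namespace files are actually there
ls "$SA"

# Call the API server directly, bypassing kube-rs entirely
TOKEN=$(cat "$SA/token")
curl --cacert "$SA/ca.crt" -H "Authorization: Bearer $TOKEN" \
  https://kubernetes.default.svc/version
```

If that returns a version object, the token and network path are fine and the problem is in the client or the build.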
Same from kubectl, last line is
If you could try to replicate, that would be amazing! Code above is literally everything; I deleted everything extraneous I could find :)
You can turn on traces by setting an env var;
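For anyone reading along, the env var in question is presumably `RUST_LOG` (kube-rs logs through the standard `log`/`tracing` facade); a sketch of wiring it into the deployment's container spec:

```yaml
        env:
        - name: RUST_LOG
          value: kube=trace
```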
Thanks, am still getting my head round a lot of how Rust works! Turning on the trace gives no more info, which I guess means it's falling over somewhere before it gets to the error path 😞 If I get a chance I'll try pulling the kube-rs source and dropping some more traces in to see if I can pinpoint where it's coming unstuck.
What exit code and reason does
FWIW it seems to run fine for me in K3d 0.8.0.6, but I couldn't get Tilt to cooperate. |
Thanks @teozkr, that confirms that there's no reason it shouldn't work in principle.
I was on brew's k3d v3.0.1, and v3.1.5 is available now. Will try rolling the version tomorrow and see what happens... 🤞 FYI: On the tilt front, there's a script of theirs that got it working with k3d for me: https://github.com/tilt-dev/k3d-local-registry/ |
139 indicates a segfault... weird. What's your stack size inside the pod?
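For reference, a couple of ways to check the stack limit from inside the container — standard Linux tooling, nothing kube-specific:

```shell
# Stack size limit for the current shell, in kB (or "unlimited")
ulimit -s

# The same information as the kernel reports it for this process
grep -i "stack" /proc/self/limits
```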
```
State:          Terminated
  Reason:       Error
  Exit Code:    139
  Started:      Fri, 23 Oct 2020 21:58:11 +0100
  Finished:     Fri, 23 Oct 2020 21:58:11 +0100
Last State:     Terminated
  Reason:       Error
  Exit Code:    139
  Started:      Fri, 23 Oct 2020 21:57:57 +0100
  Finished:     Fri, 23 Oct 2020 21:57:57 +0100
```
Found some more info from valgrind while trying to figure out how to get the stack size. It's a bit chonky but kinda interesting:
Looks like something weird going on in openssl; will carry on digging, but thought I'd post this up in case it's obvious to someone!
What's your Dockerfile and Cargo.lock? I was using https://gist.github.com/teozkr/00f2106ba4bd18d05cc5d5ada681e098
https://gist.github.com/benjamin-wright/4bdf55dc304fa82c388fa93f25df40c9 There's a Dockerfile that defines the build environment, the command in build.sh which creates the binary, then another Dockerfile which makes the image. (I've got this setup rather than a multi-stage build only because I was mucking about with making a simple multi-language monorepo build tool, and up until incorporating Rust it had been quite convenient to build the artefacts locally with this tool, then let Tilt deal with building the images.)
Okay, I've been able to narrow it down to the following:

`Cargo.toml`:

```toml
[package]
name = "kube-331"
version = "0.1.0"
authors = ["Teo Klestrup Röijezon <[email protected]>"]
edition = "2018"

# See more keys and their definitions at https://doc.rust-lang.org/cargo/reference/manifest.html

[dependencies]
openssl-sys = "0.9.58"

[profile.release]
opt-level = 0
debug = true
debug-assertions = false
overflow-checks = false
lto = false
panic = 'unwind'
incremental = true
codegen-units = 256
rpath = true
```

`main.rs`:

```rust
fn main() {
    openssl_sys::init();
}
```

Curiously, it doesn't seem to happen when I build and run it inside of the same container...
Oh, you can't use openssl in Alpine if you are not building openssl with musl. You need something like muslrust for that.
Although, you are grabbing openssl-dev from apk. In theory it looks okay, but I've never managed to get that combo to work, which is why muslrust cross-compiles from Ubuntu.
You might be able to make it work if you set a bunch of env vars and have pkg-config installed so that the openssl-sys build script can detect openssl-dev, but it's a bit of a rabbit hole. That's generally why people don't compile straight from Alpine if they have C dependencies.
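A rough sketch of what that might look like in an Alpine builder stage — untested, and the env vars (`OPENSSL_DIR`, `OPENSSL_STATIC`) are the knobs openssl-sys's build script documents, not something verified against this repro:

```dockerfile
FROM rust:alpine AS builder
# pkg-config is what openssl-sys's build script uses to locate openssl-dev
RUN apk add --no-cache musl-dev openssl-dev pkgconfig
# Hints for the openssl-sys build script (hypothetical values)
ENV OPENSSL_DIR=/usr \
    OPENSSL_STATIC=1
WORKDIR /app
COPY . .
RUN cargo build --release
```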
Oh hey, found the difference. Turns out, it seems to work if the builder installs
Oh wow, ok, that's a less horrible solution :D
Is there anything you'd like to do off the back of this ticket? Happy to close as resolved otherwise.
I might leave a caveat in the readme for when others come by trying to use musl. Will close after that. |
Hi, I'm having a really basic problem with getting `kube` working in-cluster with my k3s cluster (v1.18.9-k3s1). I have a very simple application which is basically just the following:

The output I'm getting is:

I'm hoping this is something obvious in my dependencies, but am suspicious that it's a k3s compatibility issue, since I tried using `rustls`
originally and had to switch to native openssl because the k3s in-cluster API server address is an IP address rather than a hostname...