Add Kubernetes Sidecar Networking Documentation. #7
base: main
Conversation
Awesome, thanks for the contribution 👍
Just a few comments to address 😉
- name: TZ
  value: "Europe/London"
- name: VPN_SERVICE_PROVIDER
  value: "ivpn"
- name: OPENVPN_USER
  value: ""
- name: OPENVPN_PASSWORD
  value: ""
# If you're having connection issues, try enabling these variables to help diagnose them.
# - name: FIREWALL_DEBUG
#   value: "on"
# - name: FIREWALL_INPUT_PORTS
#   value: "8080"
Can you set these to the same order/values + comments as the readme.md of gluetun:
# See https://github.com/qdm12/gluetun-wiki/tree/main/setup#setup
- VPN_SERVICE_PROVIDER=ivpn
- VPN_TYPE=openvpn
# OpenVPN:
- OPENVPN_USER=
- OPENVPN_PASSWORD=
# Wireguard:
# - WIREGUARD_PRIVATE_KEY=wOEI9rqqbDwnN8/Bpp22sVz48T71vJ4fYmFWujulwUU=
# - WIREGUARD_ADDRESSES=10.64.222.21/32
# Timezone for accurate log times
- TZ=
# Server list updater
# See https://github.com/qdm12/gluetun-wiki/blob/main/setup/servers.md#update-the-vpn-servers-list
- UPDATER_PERIOD=
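For the Kubernetes manifest in this PR, a rough translation of that compose-style snippet into env: entries could look like the sketch below (not part of the gluetun README; the values are placeholders to fill in):

env:
  # See https://github.com/qdm12/gluetun-wiki/tree/main/setup#setup
  - name: VPN_SERVICE_PROVIDER
    value: "ivpn"
  - name: VPN_TYPE
    value: "openvpn"
  # OpenVPN:
  - name: OPENVPN_USER
    value: ""
  - name: OPENVPN_PASSWORD
    value: ""
  # Wireguard:
  # - name: WIREGUARD_PRIVATE_KEY
  #   value: "wOEI9rqqbDwnN8/Bpp22sVz48T71vJ4fYmFWujulwUU="
  # - name: WIREGUARD_ADDRESSES
  #   value: "10.64.222.21/32"
  # Timezone for accurate log times
  - name: TZ
    value: ""
  # Server list updater
  # See https://github.com/qdm12/gluetun-wiki/blob/main/setup/servers.md#update-the-vpn-servers-list
  - name: UPDATER_PERIOD
    value: ""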
# - containerPort: 8080
# Adapt the following environment variables to suit your needs and VPN provider
env:
Don't you need DNS_KEEP_NAMESERVER=on for the gluetun sidecar to work?
There was a rather big fuss about it in qdm12/gluetun#1523 🤔
I personally don't use gluetun with Kubernetes so I can't really tell 😢
DNS_KEEP_NAMESERVER=on is absolutely needed if the container is expected to resolve or communicate with any cluster services; without it, cluster services cannot be resolved consistently, or at all, in many setups.
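A minimal sketch of what that would look like in the gluetun container's env list (where exactly you place it in the manifest is up to you):

- name: DNS_KEEP_NAMESERVER
  value: "on"   # keep the cluster nameserver so in-cluster services still resolve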
# -- Connecting Other Containers --
# Define other containers that you want to connected to a VPN.
# When using Gluetun in a sidecar configuration, all other containers will use Gluetun's VPN connection.
# For testing purposes, you can `kubectl exec` into this curl container and run `curl https://ipinfo.io` to test your connection!
- name: curl-container
  image: quay.io/curl/curl:latest
  command: ["sleep", "infinity"]
Changing this a bit:
- Testing is already documented in https://github.com/qdm12/gluetun-wiki/blob/main/setup/test-your-setup.md and I think K8s+Gluetun users would be aware of this. You could however add, after this yml, the equivalent `kubectl exec gluetun ...` command to test the VPN connection.
- Leaving a simplified `alpine` container as a commented example; it's better to not have an unneeded curl container the user forgets to comment out.
Suggested change (replacing the block quoted above):

# -- Connecting Other Containers --
# Define other containers you want connected to the VPN through Gluetun.
# An example container is commented below.
# - name: gluetun-alpine
#   image: alpine
#   command: ["sleep", "infinity"]
It will help to document how to actually test the connection from alpine's shell:

echo "wget -qO- https://ipinfo.io" | kubectl exec -i -t <pod-name> --container gluetun-alpine -- /bin/ash
    app: gluetun
  ports:
    - name: shadowsocks-02
      protocol: UDP
This mismatches the TCP above; I believe you want protocol: TCP instead of protocol: UDP, right? Although shadowsocks works over both TCP and UDP (not both simultaneously, it's 2 distinct servers under the hood).
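If you do want to expose both, a hedged sketch of the Service ports section follows; the names are illustrative and the port number assumes gluetun's default SHADOWSOCKS_LISTENING_ADDRESS of :8388, so adjust it to your configuration:

ports:
  - name: shadowsocks-tcp
    port: 8388
    targetPort: 8388
    protocol: TCP
  - name: shadowsocks-udp
    port: 8388
    targetPort: 8388
    protocol: UDP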
I would love to see this merged in! I followed these instructions and was able to easily set up gluetun on k8s.
Hey, do you mind sharing how you configured the other containers to use the sidecar?
Sure! Here's a Deployment where I put an application and gluetun onto the same Pod: https://github.com/shepherdjerred/servers/blob/main/cdk8s/dist/turing.k8s.yaml#L1862-L1967
Thanks for getting back to me so quickly. The solution worked for me too. However, I'm thinking that with this setup, I won't be able to share the gluetun container with networks that aren't in the same pod, right? I'm planning to check if there's a good way to deploy gluetun separately and then set up other pods to use it as an egress network using labels.
I'm not super experienced with Kubernetes, but that sounds correct. You could deploy one gluetun sidecar container per pod that needs the VPN, but maybe there's a better way.
@S0PEX @shepherdjerred I'm currently researching what you guys are looking for. As far as I understand it, it should be possible to run gluetun separately. I'm currently trying to figure it out for a Nomad deployment; for both Nomad and Kubernetes it should be the same, because in order to run it separately it must be using a CNI network. Here is the clue I'm working with: a lot of home labbers are using the macvlan CNI to create a special VPN network in their clusters, both for Kubernetes and Nomad, and use it to redirect their traffic through Tailscale. I'm currently thinking the same principle should work for gluetun. If you check blogs and repositories on GitHub you see people using the macvlan CNI driver to create a special cluster-wide network to route all traffic through the Tailscale VPN. Hope this helps; please ping me if you figure it out. I will do the same.
@gjrtimmer Thanks for the hint, I'll check it out.
containers:
- name: gluetun
The better way to do this in newer versions of Kubernetes is to use native sidecar containers with a readiness probe. This can ensure that the gluetun sidecar starts and is healthy before the container being proxied is started.
Isn't this a beta feature in v1.29 and not GA yet? It seems that it's still behind a feature gate, even in v1.30.
It is in beta, but the feature gate is on by default since v1.29
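To illustrate the native sidecar approach mentioned above, here is a minimal sketch (not taken from this PR). It assumes Kubernetes v1.29+ with the SidecarContainers feature gate enabled, and uses a startupProbe because, for native sidecars, the kubelet waits for the sidecar's startup probe to succeed before starting the remaining containers; the probe command mirrors the gluetun image's own Docker HEALTHCHECK, and the application name and values are illustrative:

spec:
  initContainers:
    - name: gluetun
      image: ghcr.io/qdm12/gluetun:latest
      restartPolicy: Always            # marks this init container as a native sidecar
      securityContext:
        capabilities:
          add: ["NET_ADMIN"]
      env:
        - name: VPN_SERVICE_PROVIDER
          value: "ivpn"                # illustrative; configure as in the examples above
      startupProbe:
        exec:
          # assumption: same command the gluetun image uses for its Docker HEALTHCHECK
          command: ["/gluetun-entrypoint", "healthcheck"]
        periodSeconds: 5
        failureThreshold: 60
  containers:
    - name: myapp                      # hypothetical application container
      image: myapp:latest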
I was able to access the UI using port forwarding. Here's the manifest I applied; if anyone sees why this won't work through the LoadBalancer I'd love to hear it. I hate giving up, but screw it, it works with port forwarding.
Thank you, this was really helpful. I was able to use Gluetun with browserless/chromium, which another container connects to with Puppeteer to run some routines.
Thanks for this pull, this helped me get everything together and working, albeit slightly differently. For anyone stumbling upon this to integrate with applications like qbittorrent, I have created a helm chart that creates an init-container based sidecar out of gluetun, to enable binding to the tunnel interface in the same pod: https://gitlab.com/GeorgeRaven/raven-helm-charts/-/tree/main/charts/qbittorrent?ref_type=heads (also available via the GitLab package registry).
The optional init container boils down to this: https://gitlab.com/GeorgeRaven/raven-helm-charts/-/blob/main/charts/qbittorrent/values.yaml?ref_type=heads#L28-L61
This will specifically enable a firewall rule to forward normal web traffic to the qbittorrent server in the standard ingress > svc > pod manner of k8s; otherwise the firewall blocks normal traffic, such as you trying to access qbittorrent (and liveness probes fail, etc.). This also uses envFrom, which allows one secret to populate lots of environment variables, which is useful if you encrypt your secrets with something like bitnami sealed-secrets as I do. Hope this helps the next person looking to do this.
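As a rough illustration of that envFrom approach (not taken from the chart; the Secret name and the port are hypothetical), the gluetun container could pull all its credentials from a single Secret whose keys are gluetun environment variable names, while opening the firewall input port for the proxied web UI:

containers:
  - name: gluetun
    image: ghcr.io/qdm12/gluetun:latest
    securityContext:
      capabilities:
        add: ["NET_ADMIN"]
    envFrom:
      - secretRef:
          name: gluetun-env            # hypothetical Secret, e.g. managed with sealed-secrets
    env:
      - name: FIREWALL_INPUT_PORTS     # lets normal ingress > svc > pod traffic reach the web UI
        value: "8080"                  # illustrative port for the proxied application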
Sharing some example config that ended up working for me, for those that would find it useful. Unsure why this MR isn't merged yet, but glad it was here for reference!

---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: myapp
  name: myapp
spec:
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
        - image: ghcr.io/qdm12/gluetun:latest
          name: gluetun
          imagePullPolicy: Always
          securityContext:
            capabilities:
              add: ["NET_ADMIN"]
          ports:
            - containerPort: 9091
          env:
            - name: TZ
              value: '<timezone val>'
            - name: VPN_SERVICE_PROVIDER
              value: "<my provider>"
            - name: VPN_TYPE
              value: "wireguard"
            - name: WIREGUARD_PRIVATE_KEY
              value: "<priv key val>"
            - name: WIREGUARD_ADDRESSES
              value: "<IP val>"
            - name: FIREWALL_INPUT_PORTS
              value: "9091"
        - image: myapp:latest
          name: myapp
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: myapp
  name: myapp
spec:
  selector:
    app: myapp
  type: NodePort
  ports:
    - name: webserver
      port: 9091
      targetPort: 9091
      protocol: TCP
  externalIPs:
    - 192.168.1.99
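For completeness, a hypothetical way to apply and sanity-check the manifest above (the file name is an assumption; the address and port come from the Service definition):

# Save the manifest above as, say, myapp.yaml, then:
kubectl apply -f myapp.yaml
# The web UI should then be reachable via the Service's external IP and port:
curl http://192.168.1.99:9091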
I love this project and hope this documentation helps! Please let me know if anything needs to be tweaked/adjusted :)