Add Kubernetes Sidecar Networking Documentation. #7

Open
wants to merge 1 commit into main

Conversation

MicahBird

I love this project and hope this documentation helps! Please let me know if anything needs to be tweaked/adjusted :)

Owner

@qdm12 left a comment


Awesome, thanks for the contribution 👍
Just a few comments to address 😉

Comment on lines +40 to +56
- name: TZ
  value: "Europe/London"

- name: VPN_SERVICE_PROVIDER
  value: "ivpn"

- name: OPENVPN_USER
  value: ""

- name: OPENVPN_PASSWORD
  value: ""

# If you're having connection issues, try enabling these variables to help diagnose them.
# - name: FIREWALL_DEBUG
#   value: "on"
# - name: FIREWALL_INPUT_PORTS
#   value: "8080"
Owner

Can you set these to the same order/values + comments as the readme.md of gluetun:

      # See https://github.com/qdm12/gluetun-wiki/tree/main/setup#setup
      - VPN_SERVICE_PROVIDER=ivpn
      - VPN_TYPE=openvpn
      # OpenVPN:
      - OPENVPN_USER=
      - OPENVPN_PASSWORD=
      # Wireguard:
      # - WIREGUARD_PRIVATE_KEY=wOEI9rqqbDwnN8/Bpp22sVz48T71vJ4fYmFWujulwUU=
      # - WIREGUARD_ADDRESSES=10.64.222.21/32
      # Timezone for accurate log times
      - TZ=
      # Server list updater
      # See https://github.com/qdm12/gluetun-wiki/blob/main/setup/servers.md#update-the-vpn-servers-list
      - UPDATER_PERIOD=

# - containerPort: 8080

# Adapt the following environment variables to suit your needs and VPN provider
env:
Owner

Don't you need DNS_KEEP_NAMESERVER=on for the gluetun sidecar to work?

There was a rather big fuss about it in qdm12/gluetun#1523 🤔

I personally don't use gluetun with Kubernetes so I can't really tell 😢


DNS_KEEP_NAMESERVER=on is absolutely needed if the container is expected to resolve or communicate with any cluster services; without it, cluster services cannot be resolved consistently, or at all in many setups.
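
For what it's worth, a minimal sketch of how this could be added alongside the other gluetun environment variables in the sidecar, in the same style as the example above:

env:
  # ... other gluetun variables ...
  # Keep the cluster DNS nameserver so in-cluster services
  # (e.g. *.svc.cluster.local) can still be resolved from this pod.
  - name: DNS_KEEP_NAMESERVER
    value: "on"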

Comment on lines +63 to +70
# -- Connecting Other Containers --
# Define other containers that you want to connected to a VPN.
# When using Gluetun in a sidecar configuration, all other containers will use Gluetun's VPN connection.
# For testing purposes, you can `kubectl exec` into this curl container and run `curl https://ipinfo.io` to test your connection!

- name: curl-container
  image: quay.io/curl/curl:latest
  command: ["sleep", "infinity"]
Owner

Changing this a bit:

Suggested change
- # -- Connecting Other Containers --
- # Define other containers that you want to connected to a VPN.
- # When using Gluetun in a sidecar configuration, all other containers will use Gluetun's VPN connection.
- # For testing purposes, you can `kubectl exec` into this curl container and run `curl https://ipinfo.io` to test your connection!
- - name: curl-container
-   image: quay.io/curl/curl:latest
-   command: ["sleep", "infinity"]
+ # -- Connecting Other Containers --
+ # Define other containers you want connected to the VPN through Gluetun.
+ # An example container is commented below.
+ # - name: gluetun-alpine
+ #   image: alpine
+ #   command: ["sleep", "infinity"]


@husmen commented Mar 19, 2024


It would help to document how to actually test the connection from alpine's shell:
echo "wget -qO- https://ipinfo.io" | kubectl exec -i -t <pod-name> --container gluetun-alpine -- /bin/ash

app: gluetun
ports:
- name: shadowsocks-02
protocol: UDP
Owner

This mismatches the TCP above, I believe you want:

Suggested change
- protocol: UDP
+ protocol: TCP

right? Although Shadowsocks works over both TCP and UDP (not both simultaneously; it's 2 distinct servers under the hood).
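
If both protocols are wanted, a sketch of what the two Service port entries could look like (the port number matches gluetun's Shadowsocks default of 8388; the names are illustrative, not from the pull request):

ports:
  - name: shadowsocks-tcp
    port: 8388
    targetPort: 8388
    protocol: TCP
  - name: shadowsocks-udp
    port: 8388
    targetPort: 8388
    protocol: UDP

A Service can expose the same port number for both TCP and UDP as long as each entry has a unique name, though some LoadBalancer implementations restrict mixed protocols.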

@shepherdjerred

I would love to see this merged in! I followed these instructions and was able to easily setup gluetun on k8s.

@S0PEX

S0PEX commented Jan 4, 2024

> I would love to see this merged in! I followed these instructions and was able to easily setup gluetun on k8s.

Hey, do you mind sharing how you configured the other containers to use the sidecar?

@shepherdjerred

> I would love to see this merged in! I followed these instructions and was able to easily setup gluetun on k8s.
>
> Hey, do you mind sharing how you configured the other containers to use the sidecar?

Sure! Here's a Deployment where I put an application and gluetun onto the same Pod: https://github.com/shepherdjerred/servers/blob/main/cdk8s/dist/turing.k8s.yaml#L1862-L1967

With cdk8s: https://github.com/shepherdjerred/servers/blob/main/cdk8s/src/services/torrents/qbittorrent.ts#L39-L86

@S0PEX

S0PEX commented Jan 5, 2024

> I would love to see this merged in! I followed these instructions and was able to easily setup gluetun on k8s.
>
> Hey, do you mind sharing how you configured the other containers to use the sidecar?
>
> Sure! Here's a Deployment where I put an application and gluetun onto the same Pod: https://github.com/shepherdjerred/servers/blob/main/cdk8s/dist/turing.k8s.yaml#L1862-L1967
>
> With cdk8s: https://github.com/shepherdjerred/servers/blob/main/cdk8s/src/services/torrents/qbittorrent.ts#L39-L86

Thanks for getting back to me so quickly. The solution worked for me too. However, I'm thinking that with this setup, I won't be able to share the gluetun container with networks that aren't in the same pod, right? I'm planning to check if there's a good way to deploy gluetun separately and then set up other pods to use it as an egress network using labels.

@shepherdjerred

I'm not super experienced with Kubernetes, but that sounds correct. You could deploy one gluetun sidecar container per pod that needs the VPN, but maybe there's a better way.

@gjrtimmer

gjrtimmer commented Jan 20, 2024

@S0PEX @shepherdjerred I'm currently researching what you are both looking for. As far as I understand it, it should be possible to run gluetun separately. I'm currently trying to figure it out for a Nomad deployment. It should be the same for both Nomad and Kubernetes, because to run gluetun separately it must use a CNI macvlan network, as I understand it now. I don't have it working yet, but at least I have figured out that people are using the macvlan CNI driver for this and are creating a separate network for their VPN. Hope this helps. Because both Nomad and Kubernetes can use CNI plugins, it should work for both.

Here is the clue I'm working with: a lot of home labbers use the macvlan CNI to create a dedicated VPN network in their clusters, both for Kubernetes and Nomad, and use it to redirect their traffic through Tailscale. I'm currently thinking the same principle should work for gluetun.

If you check blogs and repositories on GitHub, you'll see people using the macvlan CNI driver to create a special cluster-wide network that routes all traffic through a Tailscale VPN.

Hope this helps, please ping me if you figure it out. I will do the same.
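
For anyone exploring this route, a rough sketch of what such a macvlan network could look like as a Multus NetworkAttachmentDefinition; this is an untested illustration assuming Multus is installed, and the name, interface, and addresses are placeholders rather than anything from this pull request:

# Hypothetical definition; "master" must be the node's physical interface.
apiVersion: "k8s.cni.cncf.io/v1"
kind: NetworkAttachmentDefinition
metadata:
  name: vpn-net
spec:
  config: '{
    "cniVersion": "0.3.1",
    "type": "macvlan",
    "master": "eth0",
    "mode": "bridge",
    "ipam": {
      "type": "host-local",
      "subnet": "192.168.1.0/24",
      "rangeStart": "192.168.1.200",
      "rangeEnd": "192.168.1.216",
      "gateway": "192.168.1.1"
    }
  }'

A pod would then opt into this network with the k8s.v1.cni.cncf.io/networks: vpn-net annotation (again assuming Multus).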

@S0PEX

S0PEX commented Jan 26, 2024

@gjrtimmer Thanks for the hint, I'll check out macvlan and see if I can get it working.

Comment on lines +27 to +28
containers:
- name: gluetun

@Kab1r commented Apr 18, 2024


The better way to do this in newer versions of Kubernetes is to use native sidecar containers with a readiness probe. This can ensure that the gluetun sidecar starts and is healthy before the container being proxied is started.


Isn't this a beta feature in v1.29 and not GA yet? It seems that it's still behind a feature gate, even in v1.30.


It is in beta, but the feature gate is on by default since v1.29
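
For illustration, a rough sketch of the native sidecar pattern discussed above, under the SidecarContainers feature gate (beta and enabled by default since v1.29); the myapp container is hypothetical and the gluetun environment is abbreviated:

spec:
  initContainers:
    - name: gluetun
      image: ghcr.io/qdm12/gluetun:latest
      # restartPolicy: Always turns this init container into a native
      # sidecar: it keeps running for the pod's lifetime and must have
      # started (and passed a startup probe, if one is defined) before
      # the regular containers below are started.
      restartPolicy: Always
      securityContext:
        capabilities:
          add: ["NET_ADMIN"]
      env:
        - name: VPN_SERVICE_PROVIDER
          value: "ivpn"
        # ...remaining gluetun variables as in the examples above
  containers:
    - name: myapp            # hypothetical application container
      image: myapp:latest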

@v1nsai

v1nsai commented May 20, 2024

I was able to access the UI using kubectl port-forward, but a LoadBalancer service never worked for me, and I know it's not user error on my part, as I was able to access the other container's UI just fine when I got rid of gluetun.

Here's the manifest I applied; if anyone sees why this won't work through the LoadBalancer, I'd love to hear it. I hate giving up, but screw it, it works with port forwarding.

@banana-soldier

Thank you, this was really helpful. I was able to use Gluetun with browserless/chromium, which another container connects to with Puppeteer to run some routines.

@DreamingRaven

DreamingRaven commented Jun 25, 2024

Thanks for this pull, this helped me get everything together and working, albeit slightly differently.

For anyone stumbling upon this to integrate with applications like qbittorrent, I have created a helm chart that creates an init-container based side-car out of gluetun, to enable binding to the tunnel interface in the same pod.

https://gitlab.com/GeorgeRaven/raven-helm-charts/-/tree/main/charts/qbittorrent?ref_type=heads

or using the gitlab package registry:

helm repo add raven https://gitlab.com/api/v4/projects/55284972/packages/helm/stable

The optional init container boils down to this: https://gitlab.com/GeorgeRaven/raven-helm-charts/-/blob/main/charts/qbittorrent/values.yaml?ref_type=heads#L28-L61

  initContainers:
  # optional gluetun VPN client sidecar
  # https://github.com/qdm12/gluetun
  # https://github.com/qdm12/gluetun-wiki/pull/7
  - name: gluetun # init sidecar for VPN connection
    image: "ghcr.io/qdm12/gluetun:latest" # <- you probably want this to be a set version
    restartPolicy: Always # makes this init into a sidecar container k8s 1.29
    imagePullPolicy: Always
    ports:
    - name: http-proxy
      containerPort: 8888
      protocol: TCP
    - name: tcp-shadowsocks
      containerPort: 8388
      protocol: TCP
    - name: udp-shadowsocks
      containerPort: 8388
      protocol: UDP
    envFrom:
    - secretRef:
        name: gluetun
        optional: false
    env:
    - name: TZ
      value: "Europe/London"
    - name: FIREWALL_DEBUG
      value: "on"
    - name: FIREWALL_INPUT_PORTS
      value: "8080" # <- the port for qbittorrent container otherwise blocked by gluetun firewall in same pod
    securityContext:
      capabilities:
        add:
        - NET_ADMIN

This specifically enables a firewall rule so that normal web traffic can reach the qbittorrent server in the standard ingress > svc > pod manner of k8s; otherwise gluetun's firewall blocks that traffic, for example you trying to access qbittorrent (and liveness probes fail, etc.). This also uses envFrom, which allows one secret to populate many environment variables; that is useful if you encrypt your secrets with something like Bitnami sealed-secrets, as I do.
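
In case it is unclear what that secret might hold, a sketch of a Secret the envFrom above could reference; the keys become gluetun environment variables and the values are placeholders for your provider's credentials, not real settings:

apiVersion: v1
kind: Secret
metadata:
  name: gluetun          # must match the secretRef name above
type: Opaque
stringData:
  VPN_SERVICE_PROVIDER: "ivpn"
  VPN_TYPE: "wireguard"
  WIREGUARD_PRIVATE_KEY: "<private key>"
  WIREGUARD_ADDRESSES: "<address/32>"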

Hope this helps the next person looking to do this.

@qdm12 force-pushed the main branch 3 times, most recently from 7894b72 to 440e806 on July 30, 2024 at 06:51
@holysoles
Contributor

holysoles commented Aug 9, 2024

Sharing some example config that ended up working for me, for those who would find it useful. Unsure why this PR isn't merged yet, but glad it was here for reference!

---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: myapp
  name: myapp
spec:
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
        - image: ghcr.io/qdm12/gluetun:latest
          name: gluetun
          imagePullPolicy: Always
          securityContext:
            capabilities:
              add: ["NET_ADMIN"]
          ports:
            - containerPort: 9091
          env:
            - name: TZ
              value: '<timezone val>'
            - name: VPN_SERVICE_PROVIDER
              value: "<my provider>"
            - name: VPN_TYPE
              value: "wireguard"
            - name: WIREGUARD_PRIVATE_KEY
              value: "<priv key val>"
            - name: WIREGUARD_ADDRESSES
              value: "<IP val>"
            - name: FIREWALL_INPUT_PORTS
              value: "9091"
        - image: myapp:latest
          name: myapp

---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: myapp
  name: myapp
spec:
  selector:
    app: myapp
  type: NodePort
  ports:
    - name: webserver
      port: 9091
      targetPort: 9091
      protocol: TCP
  externalIPs:
    - 192.168.1.99
