
Ingress is going on all the nodes #10434

Closed

simonfoilen opened this issue Jun 29, 2024 · 4 comments

@simonfoilen

Environmental Info:
K3s Version:

k3s version v1.29.6+k3s1 (83ae095a)
go version go1.21.11

Node(s) CPU architecture, OS, and Version:

Cluster Configuration:

  • 1 server
  • 1 agent

In 2 different datacenters.

Describe the bug:
I am setting up k3s on 2 nodes (just for testing; I will add more later). I can deploy a test website on a specific node (using labels), but when I create an Ingress, it ends up on all the nodes: the Ingress lists all the external addresses instead of only the one I want.

Steps To Reproduce:

Common variables:

export VOLUME_PATH=/kube/volumes
export K3S_MAIN=do1.k.foilen.com

My K3S server config:

cat > /etc/rancher/k3s/config.yaml << _EOF
data-dir: $VOLUME_PATH/k3s

tls-san:
  - "$K3S_MAIN"

node-external-ip: $(curl https://checkip.foilen.com)
flannel-backend: wireguard-native
flannel-external-ip: true

embedded-registry: true
cluster-init: true
secrets-encryption: true

node-label:
  - "cloud=digitalocean"
  - "datacenter=tor1"
  - "svccontroller.k3s.cattle.io/lbpool=tor1_1"
  - "svccontroller.k3s.cattle.io/enablelb=true"
_EOF

My K3S agent config:

cat > /etc/rancher/k3s/config.yaml << _EOF
data-dir: $VOLUME_PATH/k3s

node-external-ip: $(curl https://checkip.foilen.com)

node-label:
  - "cloud=digitalocean"
  - "datacenter=fra1"
  - "svccontroller.k3s.cattle.io/lbpool=fra1_1"
  - "svccontroller.k3s.cattle.io/enablelb=true"
_EOF

I tried following https://docs.k3s.io/networking/networking-services#controlling-servicelb-node-selection.

My nodes have different labels:

  • 1.tor1.do1.k.foilen.com
    • svccontroller.k3s.cattle.io/enablelb: true
    • svccontroller.k3s.cattle.io/lbpool: tor1_1
  • 1.fra1.do1.k.foilen.com
    • svccontroller.k3s.cattle.io/enablelb: true
    • svccontroller.k3s.cattle.io/lbpool: fra1_1

Then, since I wasn't sure which label to use and on which kind of resource to place it, I applied both labels:

  • svccontroller.k3s.cattle.io/lbpool: tor1_1
  • lbpool: tor1_1

on resources:

  • Deployment, template (so the pods)
  • Service
  • Ingress

Expected behavior:

Checking the ingresses should show only 1 address serving it, like:

kubectl get ingress -o wide --all-namespaces

NAMESPACE   NAME                                  CLASS     HOSTS                 ADDRESS           PORTS     AGE
kubetest    kubetest-foilen-org-service-ingress   traefik   kubetest.foilen.org   143.110.208.100   80, 443   62m

Actual behavior:

Checking the ingresses, the 2 nodes are still serving it:

kubectl get ingress -o wide --all-namespaces

NAMESPACE   NAME                                  CLASS     HOSTS                 ADDRESS                          PORTS     AGE
kubetest    kubetest-foilen-org-service-ingress   traefik   kubetest.foilen.org   143.110.208.100,46.101.106.102   80, 443   62m

Additional context / logs:
Here is my full test resources:

apiVersion: v1
kind: Namespace
metadata:
  name: kubetest

---

apiVersion: apps/v1
kind: Deployment
metadata:
  namespace: kubetest
  name: kubetest-foilen-org
spec:
  replicas: 2
  selector:
    matchLabels:
      app.kubernetes.io/name: kubetest-foilen-org-service
      app.kubernetes.io/part-of: kubetest
  template:
    metadata:
      labels:
        app.kubernetes.io/name: kubetest-foilen-org-service
        app.kubernetes.io/part-of: kubetest
        svccontroller.k3s.cattle.io/lbpool: tor1_1
        lbpool: tor1_1
    spec:
      nodeSelector:
        datacenter: fra1
      containers:
        - name: service
          image: nginx
          ports:
            - containerPort: 80
              name: web

---

apiVersion: v1
kind: Service
metadata:
  namespace: kubetest
  name: kubetest-foilen-org-service
  labels:
    svccontroller.k3s.cattle.io/lbpool: tor1_1
    lbpool: tor1_1
spec:
  selector:
    app.kubernetes.io/name: kubetest-foilen-org-service
    app.kubernetes.io/part-of: kubetest
  ports:
    - protocol: TCP
      port: 80
      targetPort: web

---

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: kubetest-foilen-org-service-ingress
  namespace: kubetest
  annotations:
    traefik.ingress.kubernetes.io/router.entrypoints: web,websecure
    cert-manager.io/cluster-issuer: letsencrypt-digitalocean-dns-issuer
  labels:
    svccontroller.k3s.cattle.io/lbpool: tor1_1
    lbpool: tor1_1
spec:
  rules:
    - host: kubetest.foilen.org
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: kubetest-foilen-org-service
                port:
                  number: 80
  tls:
    - secretName: kubetest-foilen-org-service-cert
      hosts:
        - kubetest.foilen.org

@brandond
Member

brandond commented Jun 29, 2024

The ingress is exposed via a loadbalancer. The default loadbalancer for k3s is servicelb, which is documented here: https://docs.k3s.io/networking/networking-services#service-load-balancer

The docs cover limiting which nodes the loadbalancer uses.
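
In other words, the ADDRESS column on the Ingress mirrors the status of the packaged traefik LoadBalancer Service in kube-system, which servicelb fills in with the external IP of every node it is allowed to use. A rough sketch of what that status likely looks like in this cluster (illustrative, not copied from it):

# Sketch only: the packaged traefik Service after servicelb has published both nodes
apiVersion: v1
kind: Service
metadata:
  name: traefik
  namespace: kube-system
spec:
  type: LoadBalancer
status:
  loadBalancer:
    ingress:
      - ip: 143.110.208.100   # tor1 node external IP
      - ip: 46.101.106.102    # fra1 node external IP

Limiting which nodes servicelb uses shrinks this list, and the Ingress address list with it.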

@simonfoilen
Author

Yes, that is the documentation I said I was following.

I tried putting the labels everywhere without success (as I explained).

I have no clue what is missing right now. So the problem is one of the following:

  • the documentation is incomplete/unclear
  • the documentation has a bug
  • or the implementation has a bug

Could you give me some pointers if that is just a documentation issue?

thanks

@simonfoilen
Author

I downloaded k3s's code and it looks like the problem is that the k3s code only selects nodes for Services of type LoadBalancer, and the "traefik" Service that k3s installs does not have a svccontroller.k3s.cattle.io/lbpool label (so it picks all nodes).

So, in other words, my setup cannot work as-is.

It seems I would need to (see the sketch after this list):

  • disable traefik
  • install traefik myself with
    • 2 services:
      • traefik-tor1_1 : with svccontroller.k3s.cattle.io/lbpool=tor1_1
      • traefik-fra1_1 : with svccontroller.k3s.cattle.io/lbpool=fra1_1
    • 2 Ingress classes
      • traefik-tor1_1
      • traefik-fra1_1
  • Then, I will be able to set on my Ingress the ingressClassName for the right load-balancer pool to use.
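
A rough sketch of the first pieces, assuming the packaged traefik is disabled in the k3s server config and a self-managed traefik exposes one LoadBalancer Service per pool (the Service name and selector below are illustrative, not a full traefik install):

# Added to /etc/rancher/k3s/config.yaml on the server: stop deploying the packaged traefik
disable:
  - traefik

# One of the two per-pool Services in front of the self-managed traefik (illustrative)
apiVersion: v1
kind: Service
metadata:
  name: traefik-tor1-1
  namespace: kube-system
  labels:
    svccontroller.k3s.cattle.io/lbpool: tor1_1   # matches nodes labeled with the same lbpool value
spec:
  type: LoadBalancer
  selector:
    app.kubernetes.io/name: traefik
  ports:
    - name: web
      port: 80
      targetPort: web
    - name: websecure
      port: 443
      targetPort: websecure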

You might consider doing that automatically. For instance, when svccontroller.k3s.cattle.io/enablelb=true is in use, you could split traefik like that.
What do you think?

@brandond
Member

brandond commented Jun 30, 2024

if we check the "traefik" service that k3s installs, its LoadBalancer part doesn't have a svccontroller.k3s.cattle.io/lbpool

It is easy enough to do that for yourself, if that's something you want. Most people don't.

If you just wanted the ingress to use only a single node, you could label only that node with svccontroller.k3s.cattle.io/enablelb=true and do nothing else; nodes that don't have that label will not be used by servicelb.
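
Applied to the configs in the original report, that would mean keeping the svccontroller labels only in the tor1 server config and dropping them from the fra1 agent config, roughly like this (a sketch of the agent heredoc content; on an already-registered node the existing labels would also need to be removed with kubectl label):

# /etc/rancher/k3s/config.yaml on the fra1 agent, without the servicelb labels
data-dir: $VOLUME_PATH/k3s

node-external-ip: $(curl https://checkip.foilen.com)

node-label:
  - "cloud=digitalocean"
  - "datacenter=fra1"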

If you want more fine grained control, you could:

  1. Label the node you want the ingress to use with svccontroller.k3s.cattle.io/lbpool=traefik and svccontroller.k3s.cattle.io/enablelb=true (see the node-label sketch after this list)
  2. Add a HelmChartConfig to add this same label to the traefik service, as described at https://docs.k3s.io/helm#customizing-packaged-components-with-helmchartconfig
    apiVersion: helm.cattle.io/v1
    kind: HelmChartConfig
    metadata:
      name: traefik
      namespace: kube-system
    spec:
      valuesContent: |-
        service:
          labels:
            svccontroller.k3s.cattle.io/lbpool: traefik
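
For step 1, translated back into the node-label list from the original server config, only the lbpool value changes (a sketch):

node-label:
  - "cloud=digitalocean"
  - "datacenter=tor1"
  - "svccontroller.k3s.cattle.io/lbpool=traefik"
  - "svccontroller.k3s.cattle.io/enablelb=true"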

If you want to be able to use different ingress classes for different nodes then yes, that will take much more customization and likely multiple installations of the traefik ingress controller, each bound to its own service and lbpool.

@k3s-io locked and limited conversation to collaborators Jun 30, 2024
@brandond converted this issue into discussion #10437 Jun 30, 2024
