flagger-linkerd

There are many service mesh implementations on k8s; we will focus on Linkerd here. Istio and Linkerd are the two leading solutions. This is a good article on the comparison of Istio and Linkerd. There is also AWS AppMesh; you can see my testing of AppMesh with Flagger. Contour is another solution, sponsored by VMware and focused on north/south flows (e.g. Ingress/Load Balancer); here is the FAQ on it.

Here is a performance comparison of Istio versus Linkerd that was done in Dec 2018 for KubeCon. Since then there have been many releases, so that analysis is dated and would probably stand to be redone with newer versions of Istio and Linkerd.

This project is based on this WeaveWorks Tutorial.

Flagger Linkerd Traffic Split

The following Linkerd canary guide gives a good description of the Linkerd operational elements and follows the same example.

References

Flagger Docs

Run demo

**** CAUTION THE STEPS BELOW WILL DELETE YOUR CURRENT MINIKUBE INSTALLATION ****

make flagger
make test

At this point you should have the demo running, and you can open the minikube address for the ingress to see the application.

curl http://$(minikube ip)
{
  "hostname": "podinfo-primary-69dbbfcf8f-wzvsz",
  "version": "3.1.0",
  "revision": "7b6f11780ab1ce8c7399da32ec6966215b8e43aa",
  "color": "#34577c",
  "logo": "https://eks.handson.flagger.dev/cuddle_clap.gif",
  "message": "greetings from podinfo v3.1.0",
  "goos": "linux",
  "goarch": "amd64",
  "runtime": "go1.13.1",
  "num_goroutine": "8",
  "num_cpu": "3"
}

There are some overlays in the kustomization.yaml that you can comment out:

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
bases:
  - base
patchesStrategicMerge:
#  - overlays/podinfo.yaml
#  - overlays/canary.yaml

For example, comment out overlays/podinfo.yaml and then run

make test

to apply the changes. You can watch the canary progression:

kubectl -n test get --watch canaries

See the traffic during the cycle:

watch curl http://$(minikube ip)

The rest of this writeup consists of more detailed notes that I took as I went through the various stages, along with any issues I found.

Setup cluster

**** CAUTION THE STEPS BELOW WILL DELETE YOUR MINIKUBE INSTALLATION ****

If you are new to Linkerd you can follow their setup instructions.

The Linkerd guide has its own manifests for Flagger and a test app based on a Buoyant image, buoyantio/slow_cooker:1.2.0. We will follow the Flagger setup, but the basic operation is the same.

At the time of running my tests, these were the versions in use:

kubectl version --short
Client Version: v1.18.0
Server Version: v1.18.3

The minikube k8s cluster with Linkerd and Flagger is created using:

make flagger

In another session you can run the Linkerd and Grafana dashboards:

make dashboard
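
The make dashboard target is a convenience wrapper; assuming the linkerd CLI is installed, the equivalent is roughly the following (a sketch, not the exact Makefile contents):

# Port-forwards and opens the Linkerd web UI; Grafana is linked from it
linkerd dashboard &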

To get a high level view of what Flagger resources are being created:

> kubectl apply --dry-run=client -k github.com/weaveworks/flagger//kustomize/linkerd
customresourcedefinition.apiextensions.k8s.io/alertproviders.flagger.app created (dry run)
customresourcedefinition.apiextensions.k8s.io/canaries.flagger.app created (dry run)
customresourcedefinition.apiextensions.k8s.io/metrictemplates.flagger.app created (dry run)
serviceaccount/flagger created (dry run)
clusterrole.rbac.authorization.k8s.io/flagger created (dry run)
clusterrolebinding.rbac.authorization.k8s.io/flagger created (dry run)
deployment.apps/flagger created (dry run)

To create the test application there is a kustomize config to build it:

kubectl apply -k .

You can run it using:

make test

You can check the status using:

make status

Here are the detailed steps to create the test application from the Flagger guide ....

Create a test namespace and enable Linkerd proxy injection:

kubectl create ns test
kubectl annotate namespace test linkerd.io/inject=enabled
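
Before deploying anything you can confirm the annotation is in place:

kubectl get ns test -o yaml | grep linkerd.io/inject
# expect: linkerd.io/inject: enabled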

Install the load testing service to generate traffic during the canary analysis:

kubectl apply -k github.com/weaveworks/flagger//kustomize/tester

Create a deployment and a horizontal pod autoscaler:

kubectl apply -k github.com/weaveworks/flagger//kustomize/podinfo

Create a canary custom resource for the podinfo deployment:

apiVersion: flagger.app/v1beta1
kind: Canary
metadata:
  name: podinfo
  namespace: test
spec:
  # deployment reference
  targetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: podinfo
  # HPA reference (optional)
  autoscalerRef:
    apiVersion: autoscaling/v2beta1
    kind: HorizontalPodAutoscaler
    name: podinfo
  # the maximum time in seconds for the canary deployment
  # to make progress before it is rolled back (default 600s)
  progressDeadlineSeconds: 60
  service:
    # ClusterIP port number
    port: 9898
    # container port number or name (optional)
    targetPort: 9898
  analysis:
    # schedule interval (default 60s)
    interval: 30s
    # max number of failed metric checks before rollback
    threshold: 5
    # max traffic percentage routed to canary
    # percentage (0-100)
    maxWeight: 50
    # canary increment step
    # percentage (0-100)
    stepWeight: 5
    # Linkerd Prometheus checks
    metrics:
    - name: request-success-rate
      # minimum req success rate (non 5xx responses)
      # percentage (0-100)
      thresholdRange:
        min: 99
      interval: 1m
    - name: request-duration
      # maximum req duration P99
      # milliseconds
      thresholdRange:
        max: 500
      interval: 30s
    # testing (optional)
    webhooks:
      - name: acceptance-test
        type: pre-rollout
        url: http://flagger-loadtester.test/
        timeout: 30s
        metadata:
          type: bash
          cmd: "curl -sd 'test' http://podinfo-canary.test:9898/token | grep token"
      - name: load-test
        type: rollout
        url: http://flagger-loadtester.test/
        metadata:
          cmd: "hey -z 2m -q 10 -c 2 http://podinfo-canary.test:9898/"

Save the above resource as base/test/podinfo/canary.yaml and then apply it:

kubectl apply -f ./base/test/podinfo/canary.yaml

The Flagger controller is watching these definitions and will create some new resources on your cluster. To watch as this happens, run:

kubectl -n test get ev --watch

A new deployment named podinfo-primary will be created with the same number of replicas that podinfo has. Once the new pods are ready, the original deployment is scaled down to zero. This provides a deployment that is managed by Flagger as an implementation detail and maintains your original configuration files and workflows.
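
You can verify the handover by comparing the two deployments, for example:

kubectl -n test get deploy podinfo podinfo-primary
# podinfo eventually reports 0/0 replicas while podinfo-primary carries the pods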

When the canary analysis starts, Flagger will call the pre-rollout webhooks before routing traffic to the canary. The canary analysis will run for about five minutes (stepWeight: 5 up to maxWeight: 50 is ten steps at a 30s interval) while validating the HTTP metrics and rollout hooks every half a minute.

After a couple of seconds Flagger will create the canary objects:

# applied
deployment.apps/podinfo
horizontalpodautoscaler.autoscaling/podinfo
ingresses.extensions/podinfo
canary.flagger.app/podinfo

# generated
deployment.apps/podinfo-primary
horizontalpodautoscaler.autoscaling/podinfo-primary
service/podinfo
service/podinfo-canary
service/podinfo-primary
trafficsplits.split.smi-spec.io/podinfo

The traffic split CRD is interesting to look at:

k -n test get TrafficSplit -o yaml
apiVersion: v1
items:
- apiVersion: split.smi-spec.io/v1alpha1
  kind: TrafficSplit
  metadata:
    creationTimestamp: "2020-06-12T02:01:38Z"
    generation: 1
    managedFields:
    - apiVersion: split.smi-spec.io/v1alpha1
      fieldsType: FieldsV1
      fieldsV1:
        f:metadata:
          f:ownerReferences: {}
        f:spec:
          .: {}
          f:backends: {}
          f:service: {}
      manager: flagger
      operation: Update
      time: "2020-06-12T02:01:38Z"
    name: podinfo
    namespace: test
    ownerReferences:
    - apiVersion: flagger.app/v1beta1
      blockOwnerDeletion: true
      controller: true
      kind: Canary
      name: podinfo
      uid: fbeb7e3d-f203-4548-be58-7f2f5ae7923d
    resourceVersion: "60735"
    selfLink: /apis/split.smi-spec.io/v1alpha1/namespaces/test/trafficsplits/podinfo
    uid: 9724d59a-fbf7-4913-93d5-71d1d585a057
  spec:
    backends:
    - service: podinfo-canary
      weight: "0"
    - service: podinfo-primary
      weight: "100"
    service: podinfo
kind: List
metadata:
  resourceVersion: ""
  selfLink: ""

You should be able to observe the backends change as Flagger updates the canary and shifts traffic based on the analysis (a convenience command for watching the weights follows the snippet below):

  spec:
    backends:
    - service: podinfo-canary
      weight: "0"
    - service: podinfo-primary
      weight: "100"
    service: podinfo
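
One way to watch just the weights change during a rollout (a helper command, not from the guide):

watch -n 5 "kubectl -n test get ts podinfo -o jsonpath='{.spec.backends}'"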

Verify that everything has started up successfully by running:

kubectl -n test rollout status deploy podinfo-primary

After the bootstrap, the podinfo deployment will be scaled to zero and the traffic to podinfo.test will be routed to the primary pods. During the canary analysis, the podinfo-canary.test address can be used to target the canary pods directly.
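
For example, during the analysis you can hit the canary service directly with a port-forward:

kubectl -n test port-forward svc/podinfo-canary 9898:9898 &
curl -s http://localhost:9898/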

NOTE

Traffic splitting occurs on the client side of the connection and not the server side. Any requests coming from outside the mesh will not be split and will always be directed to the primary. A service of type LoadBalancer will exhibit this behavior as the source is not part of the mesh. To split external traffic, add your ingress controller to the mesh.

This is what I did in this demo by creating an ingress for podinfo; more detail on this later in this guide.

Flux Installation

The two guides stop at testing the canary at this point, but I decided to run Flux and test the canary promotion with it installed.

make flux
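
make flux is the repo's wrapper; a typical Flux v1 install for this setup would look roughly like the following (a sketch with placeholder values; the exact flags in the Makefile may differ):

kubectl create ns flux
fluxctl install \
  --git-user=<github-user> \
  --git-email=<github-email> \
  --git-url=git@github.com:seizadi/flagger-linkerd \
  --git-path=workloads \
  --namespace=flux | kubectl apply -f -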

After installation you can display the Flux SSH public key:

fluxctl --k8s-fwd-ns flux identity
ssh-rsa ..................  

Copy the public key 'ssh-rsa ....' and create a deploy key with write access on your GitHub repository. Go to Settings > Deploy keys, click on Add deploy key, check Allow write access, paste the Flux public key and click Add key.

Once that is done, Flux will pick up the changes in the repository and deploy them to the cluster. You can speed up the process by forcing a sync:

fluxctl sync --k8s-fwd-ns flux

Now you should have the test namespace and the canary deployment:

❯ k -n test get deploy
NAME                 READY   UP-TO-DATE   AVAILABLE   AGE
flagger-loadtester   1/1     1            1           2m7s
podinfo              0/0     0            0           2m7s
podinfo-primary      2/2     2            2           119s

Now we can run through the standard Flagger canary tests with this tester/podinfo setup. I will do one of them; you can follow the rest from the Flagger AWS AppMesh demo, where the behavior should be the same. You can watch the behavior like this:

kubectl -n test get --watch canaries
❯ kubectl -n test describe canary/podinfo
...
Events:
  Type     Reason  Age                  From     Message
  ----     ------  ----                 ----     -------
  Warning  Synced  25m                  flagger  podinfo-primary.test not ready: waiting for rollout to finish: observed deployment generation less then desired generation
  Normal   Synced  25m                  flagger  Initialization done! podinfo.test
  Normal   Synced  3m56s                flagger  New revision detected! Scaling up podinfo.test
  Warning  Synced  26s (x7 over 3m26s)  flagger  canary deployment podinfo.test not ready: waiting for rollout to finish: 1 of 2 updated replicas are available

FluxCloud

I found FluxCloud an interesting project for building an event subsystem; it should be compared to ArgoEvents.

ArgoCD

I set this up using ArgoCD as well:

  • Click "New application"
Field Value
Application name: seizadi-canary
Project: default
Sync policy: Manual
Repository: https://github.com/seizadi/flagger-linkerd
Revision: HEAD
Path: workloads
Cluster: https://kubernetes.default.svc
Namespace: test

The UI is smart enough to fill in some fields.

  • Click "Sync".
  • Click "Synchronize" in the Sliding panel.

Now you should have a green application and pod running.
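
The same application can also be created from the argocd CLI instead of the UI (a sketch, assuming you are already logged in with argocd login):

argocd app create seizadi-canary \
  --project default \
  --repo https://github.com/seizadi/flagger-linkerd \
  --revision HEAD \
  --path workloads \
  --dest-server https://kubernetes.default.svc \
  --dest-namespace test
argocd app sync seizadi-canary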

Debug

Metrics-Server & HPA Problem

I found that we could not get metrics:

❯ kubectl -n test get events --watch
LAST SEEN   TYPE      REASON                    OBJECT                                    MESSAGE
20s         Warning   FailedGetResourceMetric   horizontalpodautoscaler/podinfo-primary   unable to get metrics for resource cpu: unable to fetch metrics from resource metrics API: the server could not find the requested resource (get pods.metrics.k8s.io)
0s          Warning   FailedComputeMetricsReplicas   horizontalpodautoscaler/podinfo-primary   invalid metrics (1 invalid out of 1), first error is: failed to get cpu utilization: unable to get metrics for resource cpu: unable to fetch metrics from resource metrics API: the server could not find the requested resource (get pods.metrics.k8s.io)

I think the Metrics Server is missing from my installation, but I don't see where in the guides they highlight that I am supposed to install it:

kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/download/v0.3.6/components.yaml

or for Minikube you can enable the metrics-server addon:

minikube addons enable metrics-server
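
Either way, you can confirm the metrics pipeline itself is serving data before revisiting the HPA:

kubectl top nodes
kubectl -n test top pods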

Now I have a different problem with HPA:

> k -n test get events --watch
...
0s          Warning   FailedGetResourceMetric        horizontalpodautoscaler/podinfo-primary   unable to get metrics for resource cpu: no metrics returned from resource metrics API

> k -n test get hpa
NAME              REFERENCE                    TARGETS         MINPODS   MAXPODS   REPLICAS   AGE
podinfo           Deployment/podinfo           <unknown>/99%   2         4         0          12m
podinfo-primary   Deployment/podinfo-primary   <unknown>/99%   2         4         2          11m

I followed this issue about Metrics Server, which got me to the root cause in Linkerd.

HPA requires resource requests to work. By default Linkerd doesn't add those, to make sure everything works in constrained environments (such as minikube). Here is the Linkerd documentation on proxy config. To fix this, add the following annotations to the deployment:

spec:
  template:
    metadata:
      annotations:
        config.linkerd.io/proxy-cpu-limit: "1.5"
        config.linkerd.io/proxy-cpu-request: "0.2"
        config.linkerd.io/proxy-memory-limit: 2Gi
        config.linkerd.io/proxy-memory-request: 128Mi

Now it is working....

❯ k get hpa
NAME              REFERENCE                    TARGETS         MINPODS   MAXPODS   REPLICAS   AGE
podinfo           Deployment/podinfo           <unknown>/99%   2         4         0          11m
podinfo-primary   Deployment/podinfo-primary   1%/99%          2         4         2          9m59s

The <unknown> on podinfo is normal; it has no metrics since Flagger scales it down and moves all resources to podinfo-primary (note that its replica count is zero).

Flux duplicate resource problem

I enabled Flux for my project and ran into this problem:

ts=2020-06-13T07:35:55.204905682Z caller=loop.go:107 component=sync-loop err="loading resources from repo: duplicate definition of '<cluster>:kustomization/' (in base/kustomization.yaml and kustomization.yaml)"
ts=2020-06-13T07:35:55.206125209Z caller=loop.go:133 component=sync-loop event=refreshed url=ssh://[email protected]/seizadi/flagger-linkerd branch=master HEAD=0d7bc2165dff0f261751d290b83501308dbb2de3

This is a known Flux issue; there are a lot of mixed concerns, such as whether a Kustomization kind should be treated like a k8s resource, which is how fluxd treats it, even though in my case they are at different hierarchy levels and do very different things. The other problem is that I didn't specify a git_path, thinking it would default to the top level, pick up the .flux.yaml and do the right thing, e.g. 'command: kustomize build' (a sketch of such a .flux.yaml follows the tree below). This is what my tree looks like with regular files removed:

   ├── .flux.yaml
   ├── base
   │   ├── kustomization.yaml
   │   └── test
   │       ├── namespace.yaml
   │       ├── podinfo
   │       │   ├── canary.yaml
   │       │   ├── deployment.yaml
   │       │   └── hpa.yaml
   │       └── tester
   │           ├── deployment.yaml
   │           └── service.yaml
   └── kustomization.yaml
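
For reference, a .flux.yaml for this layout can be as simple as the following (a sketch of the Flux v1 config file format; the repo's actual file may differ):

cat > .flux.yaml <<'EOF'
version: 1
patchUpdated:
  generators:
    - command: kustomize build .
  patchFile: flux-patch.yaml
EOF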

One test was to move kustomization.yaml lower in the hierarchy to see whether, by finding .flux.yaml first, Flux would do the right thing:

├── .flux.yaml
└── workloads
    ├── base
    │   ├── kustomization.yaml
    │   └── test
    │       ├── namespace.yaml
    │       ├── podinfo
    │       │   ├── canary.yaml
    │       │   ├── deployment.yaml
    │       │   └── hpa.yaml
    │       └── tester
    │           ├── deployment.yaml
    │           └── service.yaml
    └── kustomization.yaml

Looks like it now picks up the .flux.yaml and is happy, but nothing happens!

ts=2020-06-13T13:42:15.334315796Z caller=loop.go:141 component=sync-loop jobID=04497018-87aa-01a5-bd3d-66e86b2a9033 state=in-progress
ts=2020-06-13T13:42:16.029634555Z caller=loop.go:153 component=sync-loop jobID=04497018-87aa-01a5-bd3d-66e86b2a9033 state=done success=true
ts=2020-06-13T13:42:16.741161086Z caller=loop.go:133 component=sync-loop event=refreshed url=ssh://[email protected]/seizadi/flagger-linkerd branch=master HEAD=ad66399c28b11ecd5806911888fef59998766549

Looks like I am missing the option "--manifest-generation=true" for .flux.yaml to work; see the .flux.yaml config guide.
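
If Flux is already running, the flag can be appended to the daemon's args in place (assuming the standard deployment name flux in the flux namespace):

kubectl -n flux patch deploy flux --type=json \
  -p='[{"op":"add","path":"/spec/template/spec/containers/0/args/-","value":"--manifest-generation=true"}]'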

Added that option and now it pulls the deployment and creates the test namespace:

❯ k get namespaces
NAME              STATUS   AGE
...
test              Active   69s

The canary is working:

❯ k -n test get deploy
NAME                 READY   UP-TO-DATE   AVAILABLE   AGE
flagger-loadtester   1/1     1            1           2m7s
podinfo              0/0     0            0           2m7s
podinfo-primary      2/2     2            2           119s

Minikube Out of Resources

Tried to run the canary demo and found that it could not complete.

I debugged this to an out-of-CPU resource problem (this is a good resource for debugging k8s resource issues); interestingly, I hit this before the memory limits:

❯ k describe canary/podinfo
....
Events:
  Type     Reason  Age                From     Message
  ----     ------  ----               ----     -------
  Warning  Synced  47m                flagger  podinfo-primary.test not ready: waiting for rollout to finish: observed deployment generation less then desired generation
  Normal   Synced  47m                flagger  Initialization done! podinfo.test
  Normal   Synced  25m                flagger  New revision detected! Scaling up podinfo.test
  Warning  Synced  8s (x51 over 25m)  flagger  canary deployment podinfo.test not ready: waiting for rollout to finish: 1 of 2 updated replicas are available

❯ k get pods
NAME                                 READY   STATUS    RESTARTS   AGE
flagger-loadtester-bd6b9c69f-kw2wv   2/2     Running   0          43m
podinfo-df4c95d5d-kv66m              0/2     Pending   0          21m
podinfo-df4c95d5d-pndk6              2/2     Running   0          21m
podinfo-primary-58bf47f9dd-hq744     2/2     Running   0          43m
podinfo-primary-58bf47f9dd-kbxvd     2/2     Running   0          42m
❯ k describe pod podinfo-df4c95d5d-kv66m
Name:           podinfo-df4c95d5d-kv66m
Namespace:      test
....
Events:
  Type     Reason            Age                 From               Message
  ----     ------            ----                ----               -------
  Warning  FailedScheduling  53s (x16 over 21m)  default-scheduler  0/1 nodes are available: 1 Insufficient cpu.

The Minikube defaults on my system are 2 CPUs and 4GB, so I will bump it to 3 CPUs, which will also increase memory to 6GB.
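
For example (note this recreates the minikube cluster):

minikube delete
minikube start --cpus=3 --memory=6g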

Don't see the traffic split working

I decided to add an Ingress and test whether traffic distribution was working properly. I found that although the control plane looked like it was working, the data plane did not show the traffic mix you would expect in the browser.

Here is the guide on setting up Ingress with Linkerd. I have been meaning to test Contour, but it doesn't look like it works well with Linkerd.

The ingress is set up to point to service/podinfo; note the special annotations needed for Linkerd to route traffic properly:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  labels:
    app: podinfo
  name: podinfo
  namespace: test
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
    nginx.ingress.kubernetes.io/configuration-snippet: |
      proxy_set_header l5d-dst-override $service_name.$namespace.svc.cluster.local:$service_port;
      grpc_set_header l5d-dst-override $service_name.$namespace.svc.cluster.local:$service_port;
spec:
  rules:
    - host: minikube
      http:
        paths:
          - path: /
            backend:
              serviceName: podinfo
              servicePort: 9898

Here is the view of the control plane going through; note that both the Flagger Canary and Linkerd TrafficSplit CRDs show the proper traffic mix. In two windows you see the following running:

kubectl -n test get --watch canaries
watch kubectl -n test describe ts podinfo

(screenshot: traffic)

When I look at the browser, I only see traffic that would be coming from the primary. While the primary/canary mix in both windows goes through its progression, I only see the image change when promotion happens and traffic is shifted from canary to primary.

Look at this post for a similar ingress problem with traffic split.

Setup Kind cluster with NGINX Ingress

In the discussion around fixing this, it sounds like this worked on Kind, so I set up a similar environment using kind:

cat <<EOF | kind create cluster --config=-
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
  kubeadmConfigPatches:
  - |
    kind: InitConfiguration
    nodeRegistration:
      kubeletExtraArgs:
        node-labels: "ingress-ready=true"
  extraPortMappings:
  - containerPort: 80
    hostPort: 80
    protocol: TCP
  - containerPort: 443
    hostPort: 443
    protocol: TCP
EOF

Then follow the instructions for setting up ingress using NGINX on kind:

kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/static/provider/kind/deploy.yaml
kubectl wait --namespace ingress-nginx \
  --for=condition=ready pod \
  --selector=app.kubernetes.io/component=controller \
  --timeout=90s

Debug Linkerd

I wrote a new manifest to test this without Flagger. It sets up two deployments, two services, and a TrafficSplit CRD that splits the traffic between them, with an Ingress that points at the root service of the traffic splitter:

k apply -k echo-test
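
For context, the primary half of echo-test looks roughly like the following; the canary deployment/service mirror it with -text=canary, and the image and labels here are my best guess rather than the exact manifests (the echo namespace is assumed to exist and be annotated for Linkerd injection):

cat <<EOF | kubectl -n echo apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: echo-prim-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: echo-prim
  template:
    metadata:
      labels:
        app: echo-prim
    spec:
      containers:
      - name: echo
        image: hashicorp/http-echo
        args: ["-text=primary", "-listen=:5678"]
        ports:
        - containerPort: 5678
---
apiVersion: v1
kind: Service
metadata:
  name: echo-prim-service
spec:
  selector:
    app: echo-prim
  ports:
  - port: 5678
    targetPort: 5678
EOF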

I found that this did not work with the Ingress on either the minikube or kind clusters. I set up Linkerd at this version:

❯ linkerd version
Client version: stable-2.8.0
Server version: stable-2.8.0

Linkerd was installed with these commands:

linkerd check --pre                     # validate that Linkerd can be installed
linkerd install | kubectl apply -f -    # install the control plane into the 'linkerd' namespace
linkerd check 

The test is installed in the echo namespace:

❯ kn echo
Context "kind-kind" modified.
❯ k get deploy
NAME            READY   UP-TO-DATE   AVAILABLE   AGE
echo-can-app    1/1     1            1           59s
echo-prim-app   1/1     1            1           59s
❯ k get svc
NAME                TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)    AGE
echo-can-service    ClusterIP   10.100.114.238   <none>        5678/TCP   65s
echo-prim-service   ClusterIP   10.99.94.253     <none>        5678/TCP   65s
❯ k get ts -o yaml
apiVersion: v1
items:
- apiVersion: split.smi-spec.io/v1alpha1
  kind: TrafficSplit
  metadata:
    annotations:
      kubectl.kubernetes.io/last-applied-configuration: |
        {"apiVersion":"split.smi-spec.io/v1alpha1","kind":"TrafficSplit","metadata":{"annotations":{},"name":"service-split","namespace":"echo"},"spec":{"backends":[{"service":"echo-prim-service","weight":"500m"},{"service":"echo-can-service","weight":"500m"}],"service":"echo-prim-service"}}
    creationTimestamp: "2020-07-20T02:36:15Z"
    generation: 1
    managedFields:
    - apiVersion: split.smi-spec.io/v1alpha1
      fieldsType: FieldsV1
      fieldsV1:
        f:metadata:
          f:annotations:
            .: {}
            f:kubectl.kubernetes.io/last-applied-configuration: {}
        f:spec:
          .: {}
          f:backends: {}
          f:service: {}
      manager: kubectl
      operation: Update
      time: "2020-07-20T02:36:15Z"
    name: service-split
    namespace: echo
    resourceVersion: "6684"
    selfLink: /apis/split.smi-spec.io/v1alpha1/namespaces/echo/trafficsplits/service-split
    uid: 8603a643-1e6e-4aea-a307-fd9c49bbe9e8
  spec:
    backends:
    - service: echo-prim-service
      weight: 500m
    - service: echo-can-service
      weight: 500m
    service: echo-prim-service
kind: List
metadata:
  resourceVersion: ""
  selfLink: ""

It looks like the traffic split is working on the Kind cluster:

❯ kn echo
Context "kind-kind" modified.
❯ kubectl run -it --rm --image=infoblox/dnstools api-test
If you don't see a command prompt, try pressing enter.
dnstools# while true; do curl http://echo-prim-service.echo:5678; sleep 1; done
primary
canary
canary
primary
primary
primary
primary
canary
canary
...

Same on minikube:

dnstools# while true; do curl http://echo-prim-service.echo:5678; sleep 1; done
canary
canary
primary
canary
primary
primary
canary
primary
primary
canary
primary
canary
canary
canary
canary
canary
...

Minikube Ingress does not work...

while true; do curl minikube/echo; sleep 1; done
primary
primary
primary
primary
primary
primary
primary

It doesn't work on Kind either:

while true; do curl localhost/echo; sleep 1; done
primary
primary
primary
primary
primary
primary
primary

I had similar behavior when I used a container not in the same namespace:

❯ kn default
Context "minikube" modified.
❯ kubectl run -it --rm --image=infoblox/dnstools api-test
If you don't see a command prompt, try pressing enter.
dnstools# while true; do curl http://echo-prim-service.echo:5678; sleep 1; done
primary
primary
primary
primary
primary
...
Fix for Linkerd / Ingress SMI problem

I injected the nginx controller so that it is part of the mesh:

kubectl -n kube-system get deploy ingress-nginx-controller -o yaml | \
   linkerd inject - | \
   kubectl apply -f -

Now ingress is working, so if you want ingress to work you have to include the Ingress Controller as part of the mesh. I validated with the Linkerd team that this is how it is supposed to work.

while true; do curl minikube/echo; sleep 1; done
primary
primary
canary
primary
canary
