This repository has been archived by the owner on Jun 20, 2024. It is now read-only.

weave-npc: error: assignment to entry in nil map #3407

Closed
kirikaza opened this issue Sep 21, 2018 · 4 comments

Comments

@kirikaza

BUG REPORT

What you expected to happen?

weave-npc logs shouldn't contain errors

What happened?

ERROR: logging before flag.Parse: E0918 14:07:47.578681    3471 runtime.go:66] Observed a panic: "assignment to entry in nil map" (assignment to entry in nil map)
/go/src/github.com/weaveworks/weave/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:72
/go/src/github.com/weaveworks/weave/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:65
/go/src/github.com/weaveworks/weave/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:51
/usr/local/go/src/runtime/asm_amd64.s:573
/usr/local/go/src/runtime/panic.go:502
/usr/local/go/src/runtime/hashmap.go:507
/go/src/github.com/weaveworks/weave/npc/selector.go:157
/go/src/github.com/weaveworks/weave/npc/namespace.go:396
/go/src/github.com/weaveworks/weave/npc/controller.go:161
/go/src/github.com/weaveworks/weave/npc/controller.go:78
/go/src/github.com/weaveworks/weave/npc/controller.go:160
/go/src/github.com/weaveworks/weave/prog/weave-npc/main.go:306
/go/src/github.com/weaveworks/weave/vendor/k8s.io/client-go/tools/cache/controller.go:209
/go/src/github.com/weaveworks/weave/vendor/k8s.io/client-go/tools/cache/controller.go:320
/go/src/github.com/weaveworks/weave/vendor/k8s.io/client-go/tools/cache/delta_fifo.go:451
/go/src/github.com/weaveworks/weave/vendor/k8s.io/client-go/tools/cache/delta_fifo.go:451
/go/src/github.com/weaveworks/weave/vendor/k8s.io/client-go/tools/cache/controller.go:150
/go/src/github.com/weaveworks/weave/vendor/k8s.io/client-go/tools/cache/controller.go:124
/go/src/github.com/weaveworks/weave/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:133
/go/src/github.com/weaveworks/weave/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:134
/go/src/github.com/weaveworks/weave/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:88
/go/src/github.com/weaveworks/weave/vendor/k8s.io/client-go/tools/cache/controller.go:124
/usr/local/go/src/runtime/asm_amd64.s:2361

How to reproduce it?

I couldn't reproduce it again. First time I played with network policies, creating and deleting them. These policies were simple: "deny all ingress", "deny all egress", "allow ingress to A from B", "allow egress from B to A".

Anything else we need to know?

Versions:

$ kubectl exec -n kube-system weave-net-… -c weave -- /home/weave/weave --local version
weave 2.4.1
$ kubectl version
Client Version: version.Info{Major:"1", Minor:"11", GitVersion:"v1.11.2", GitCommit:"bb9ffb1654d4a729bb4cec18ff088eacc153c239", GitTreeState:"clean", BuildDate:"2018-08-08T16:31:10Z", GoVersion:"go1.10.3", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.0", GitCommit:"fc32d2f3698e36b93322a3465f63a14e9f0eaead", GitTreeState:"clean", BuildDate:"2018-03-26T16:44:10Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/amd64"}
$ minikube version
minikube version: v0.28.2

minikube cluster has been created according to this note.

Logs:

$ kubectl logs -n kube-system weave-net-… weave-npc — see the gist.

@murali-reddy
Contributor

Thanks @kirikaza for reporting the issue.

It looks like the inner map in the selector set is not getting initialised in some cases.

targetSelectorsCount: make(map[string]map[policyType]int)}

Could you please share the network policies you used? Was policyTypes specified in the policy spec?
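The quoted line above initialises only the outer map; each inner map[policyType]int still has to be created per key before assignment. A minimal sketch of the guard that would avoid the panic (incrementCount and the key string are hypothetical names, not weave-npc's actual code):

```go
package main

import "fmt"

type policyType int

const (
	policyTypeIngress policyType = iota
	policyTypeEgress
)

// incrementCount bumps the per-policyType counter for a selector key,
// creating the inner map on first use. Without the nil check, the
// assignment would panic with "assignment to entry in nil map",
// exactly as in the reported stack trace.
func incrementCount(counts map[string]map[policyType]int, key string, pt policyType) {
	if counts[key] == nil {
		counts[key] = make(map[policyType]int)
	}
	counts[key][pt]++
}

func main() {
	// Outer map is initialised (as in selector.go), inner maps are not.
	targetSelectorsCount := make(map[string]map[policyType]int)
	incrementCount(targetSelectorsCount, "dst:app=nginx", policyTypeEgress)
	incrementCount(targetSelectorsCount, "dst:app=nginx", policyTypeEgress)
	fmt.Println(targetSelectorsCount["dst:app=nginx"][policyTypeEgress]) // 2
}
```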

@kirikaza
Author

deny-ingress.yaml

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-ingress
spec:
  podSelector: {}
  policyTypes:
  - Ingress

deny-egress.yaml

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-egress
spec:
  podSelector: {}
  policyTypes:
  - Egress

nginx-from-redis.yaml

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: nginx-from-redis
spec:
  podSelector:
    matchLabels:
      app: nginx
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: redis

redis-to-nginx.yaml

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: redis-to-nginx
spec:
  podSelector:
    matchLabels:
      app: redis
  egress:
  - to:
    - podSelector:
        matchLabels:
          app: nginx

@brb
Contributor

brb commented Sep 21, 2018

@kirikaza Thanks for the bug report.

The crash happens because an invalid policyTypes was reported in the DeleteNetworkPolicy event: it was supposed to be "Egress" instead of "Ingress". As a result, the wrong target selector was chosen, causing the panic.

INFO: 2018/09/18 14:07:47.556091 EVENT DeleteNetworkPolicy {"metadata":{"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"networking.k8s.io/v1\",\"kind\":\"NetworkPolicy\",\"metadata\":{\"annotations\":{},\"name\":\"redis-to-nginx\",\"namespace\":\"default\"},\"spec\":{\"egress\":[{\"to\":[{\"podSelector\":{\"matchLabels\":{\"app\":\"nginx\"}}}]}],\"podSelector\":{\"matchLabels\":{\"app\":\"redis\"}}}}\n"},"creationTimestamp":"2018-09-18T12:58:13Z","generation":2,"name":"redis-to-nginx","namespace":"default","resourceVersion":"36352","selfLink":"/apis/networking.k8s.io/v1/namespaces/default/networkpolicies/redis-to-nginx","uid":"80ca9f38-bb42-11e8-ada4-0800279f8730"},"spec":{"egress":[{"to":[{"podSelector":{"matchLabels":{"app":"nginx"}}}]}],"podSelector":{"matchLabels":{"app":"redis"}},"policyTypes":["Ingress"]}}

According to https://kubernetes.io/docs/concepts/services-networking/network-policies/:

policyTypes: Each NetworkPolicy includes a policyTypes list which may include either Ingress, Egress, or both. The policyTypes field indicates whether or not the given policy applies to ingress traffic to selected pod, egress traffic from selected pods, or both. If no policyTypes are specified on a NetworkPolicy then by default Ingress will always be set and Egress will be set if the NetworkPolicy has any egress rules.

This seems to be a bug in Kubernetes, since it should have set "Egress" anyway.

To prevent this from happening, you should set policyTypes: [ Egress ] for the redis-to-nginx netpol.
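The defaulting rule quoted above from the Kubernetes docs can be sketched as a small Go function (an illustrative model, not the actual API-server code):

```go
package main

import "fmt"

// defaultPolicyTypes mirrors the documented defaulting rule: if no
// policyTypes are specified, Ingress is always implied, and Egress is
// implied only when the policy has egress rules. Explicit values are
// kept as-is.
func defaultPolicyTypes(explicit []string, hasEgressRules bool) []string {
	if len(explicit) > 0 {
		return explicit
	}
	types := []string{"Ingress"}
	if hasEgressRules {
		types = append(types, "Egress")
	}
	return types
}

func main() {
	// redis-to-nginx: no explicit policyTypes, but it has egress rules,
	// so per the docs both types should have been implied.
	fmt.Println(defaultPolicyTypes(nil, true)) // [Ingress Egress]
	// A policy with no egress rules defaults to Ingress only.
	fmt.Println(defaultPolicyTypes(nil, false)) // [Ingress]
}
```

Under this rule, the event above reporting only ["Ingress"] for a policy with egress rules is inconsistent with the documentation.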

@brb brb removed the bug label Sep 21, 2018
@murali-reddy
Copy link
Contributor

On a recent version of Kubernetes (1.14), I see policyTypes is set appropriately for the network policy redis-to-nginx:

INFO: 2019/05/03 08:32:36.507805 EVENT DeleteNetworkPolicy {"metadata":{"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"networking.k8s.io/v1\",\"kind\":\"NetworkPolicy\",\"metadata\":{\"annotations\":{},\"name\":\"redis-to-nginx\",\"namespace\":\"default\"},\"spec\":{\"egress\":[{\"to\":[{\"podSelector\":{\"matchLabels\":{\"app\":\"nginx\"}}}]}],\"podSelector\":{\"matchLabels\":{\"app\":\"redis\"}}}}\n"},"creationTimestamp":"2019-05-03T08:30:53Z","generation":1,"name":"redis-to-nginx","namespace":"default","resourceVersion":"122820","selfLink":"/apis/networking.k8s.io/v1/namespaces/default/networkpolicies/redis-to-nginx","uid":"c4134c1e-6d7d-11e9-a816-08002737ffe1"},"spec":{"egress":[{"to":[{"podSelector":{"matchLabels":{"app":"nginx"}}}]}],"podSelector":{"matchLabels":{"app":"redis"}},"policyTypes":["Ingress","Egress"]}}

Closing this issue.

3 participants