
Host Security Policy not enforced #1765

Open · tesla59 opened this issue May 24, 2024 · 7 comments · May be fixed by #1786
Labels: bug (Something isn't working)

@tesla59 (Contributor) commented May 24, 2024

Bug Report

General Information

  • Environment description: k3s
  • Kernel version (run uname -a):
Linux thunderbird 6.9.1-arch1-1 #1 SMP PREEMPT_DYNAMIC Fri, 17 May 2024 16:56:38 +0000 x86_64 GNU/Linux
  • Orchestration system version in use (e.g. kubectl version, ...)
Client Version: v1.30.1
Kustomize Version: v5.0.4-0.20230601165947-6ce0bf390ce3
Server Version: v1.29.4+k3s1
  • Link to relevant artifacts (policies, deployment scripts, ...)
    Used KubeArmorHostPolicy:
apiVersion: security.kubearmor.com/v1
kind: KubeArmorHostPolicy
metadata:
  name: hsp-thunderbird
spec:
  nodeSelector:
    matchLabels:
      kubernetes.io/os: linux
  process:
    matchPaths:
    - path: /usr/bin/pacman
    - path: /usr/bin/sleep
  action: Block
  • Target containers/pods
    N/A

To Reproduce

  1. Started a k3s cluster using
curl -sfL https://get.k3s.io | K3S_KUBECONFIG_MODE="644" INSTALL_K3S_EXEC="--disable=traefik --docker --container-runtime-endpoint unix:///var/run/docker.sock --kubelet-arg cgroup-driver=systemd" sh -
  2. Installed KubeArmor using
helm upgrade --install kubearmor-operator kubearmor/kubearmor-operator -n kubearmor --create-namespace
helm upgrade --install kubearmor kubearmor/kubearmor -n kubearmor --create-namespace 
  3. Enabled Host Visibility by editing the kubearmor daemonset
kubectl edit daemonsets.apps -n kubearmor kubearmor
#####
    spec:
      containers:
      - args:
        - -gRPC=32767
+       - -enableKubeArmorHostPolicy
+       - -hostVisibility=process,file,network,capabilities
#####
  4. Annotated the node using
kubectl annotate node thunderbird "kubearmor-visibility=process,file,network,capabilities"
  5. Confirmed both (Step 3 and Step 4) using
$ kubectl logs -n kubearmor pod/kubearmor-jz644 | grep "Started to protect"
Defaulted container "kubearmor" out of: kubearmor, init (init)
2024-05-23 16:11:50.482130      INFO     Started to protect a host and containers

and

$ kubectl describe node thunderbird | grep kubearmor-visibility
                    kubearmor-visibility: process,file,network,capabilities
  6. Applied the above-mentioned policy using kubectl apply -f

  7. Ran the commands mentioned in the policy

$ sleep                           
sleep: missing operand
Try 'sleep --help' for more information.
$ pacman
error: no operation specified (use -h for help)

Expected behavior

The execution of the pacman and sleep commands should be blocked, failing with Permission denied and exit code 126.
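
For reference, this is what a blocked execution looks like on a host where enforcement works (format taken from the working reproduction later in this thread):

$ sleep 1
bash: /usr/bin/sleep: Permission denied
$ echo $?
126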

Additional Info

  1. When checking the logs after applying the policy using kubectl logs kubearmor-jz644 -n kubearmor, it shows
2024-05-24 00:28:03.149172      INFO     Detected a Host Security Policy (modified/hsp-thunderbird)

It detects the policy but never updates the host rules. In the case of container enforcement, we see the following logs:

2024-05-23 16:19:02.822584      INFO     Detected a Security Policy (added/default/ksp-nginx)
2024-05-23 16:19:02.822651      INFO     Updating container rules for 404efd73a323f7644c882ff037be71e17c9c05d7a65008f0d6a31f25791a14f3
  2. Output from karmor probe:
$ karmor probe                       

Found KubeArmor running in Kubernetes

Daemonset :
     kubearmor       Desired: 1      Ready: 1        Available: 1    
Deployments : 
     kubearmor-operator      Desired: 1      Ready: 1        Available: 1    
        kubearmor-relay         Desired: 1      Ready: 1        Available: 1    
        kubearmor-controller    Desired: 1      Ready: 1        Available: 1    
Containers : 
     kubearmor-operator-5878ff8b8b-gkqk8     Running: 1      Image Version: kubearmor/kubearmor-operator:v1.3.4      
        kubearmor-relay-85646db78c-zqnvj        Running: 1      Image Version: kubearmor/kubearmor-relay-server:latest  
        kubearmor-controller-78b5859c9f-8tjk4   Running: 2      Image Version: kubearmor/kubearmor-controller:latest    
        kubearmor-jz644                         Running: 1      Image Version: kubearmor/kubearmor:stable               
Node 1 : 
     OS Image:                       Arch Linux                               
        Kernel Version:                 6.9.1-arch1-1                            
        Kubelet Version:                v1.29.4+k3s1                             
        Container Runtime:              docker://26.1.3                          
        Active LSM:                     BPFLSM                                   
        Host Security:                  true                                     
        Container Security:             true                                     
        Container Default Posture:      audit(File)                          audit(Capabilities)  audit(Network)       
        Host Default Posture:           audit(File)                          audit(Capabilities)  audit(Network)       
        Host Visibility:                process,file,network,capabilities        
Armored Up pods : 
+-----------+--------------------------------+-----------------------------------+------------------------+-----------+
| NAMESPACE |        DEFAULT POSTURE         |            VISIBILITY             |          NAME          |  POLICY   |
+-----------+--------------------------------+-----------------------------------+------------------------+-----------+
| default   | File(audit),                   | process,file,network,capabilities | nginx-7854ff8877-7pzvx | ksp-nginx |
|           | Capabilities(audit), Network   |                                   |                        |           |
|           | (audit)                        |                                   |                        |           |
+-----------+--------------------------------+-----------------------------------+------------------------+-----------+
  3. Container enforcement is working. Running apt in the above-mentioned nginx pod gives
kubectl exec -it $POD -- bash -c "apt update && apt install masscan"     
bash: line 1: /usr/bin/apt: Permission denied
command terminated with exit code 126
  4. karmor logs --logFilter=all shows HostLogs
  5. The paths of both pacman and sleep are correct
tesla59 added the bug label on May 24, 2024
@navin772 (Contributor) commented:
Tried reproducing the error, but HSP enforcement works on my machine with:

navin@localhost:~> uname -a
Linux localhost.localdomain 6.9.1-1-default #1 SMP PREEMPT_DYNAMIC Fri May 17 11:59:46 UTC 2024 (0c0b0b5) x86_64 x86_64 x86_64 GNU/Linux

OS - openSUSE Tumbleweed
Running kubectl version outputs:

Client Version: v1.30.1
Kustomize Version: v5.0.4-0.20230601165947-6ce0bf390ce3
Server Version: v1.29.4+k3s1

Applied the policy:

apiVersion: security.kubearmor.com/v1
kind: KubeArmorHostPolicy
metadata:
  name: hsp-sleep
spec:
  nodeSelector:
    matchLabels:
      kubernetes.io/os: linux
  process:
    matchPaths:
#    - path: /usr/bin/pacman
    - path: /usr/bin/sleep
  action: Block

Result:
bash: /usr/bin/sleep: Permission denied

KubeArmor logs of the kubearmor-bpf-containerd pod:

2024-05-24 05:35:30.768729      INFO    Detected a Host Security Policy (added/hsp-sleep)
2024-05-24 05:35:30.768783      INFO    Updating host rules
2024-05-24 05:35:30.768801      INFO    Creating inner map for host
2024-05-24 05:35:42.621256      INFO    Detected a Host Security Policy (deleted/hsp-sleep)
2024-05-24 05:35:42.621295      INFO    Updating host rules
2024-05-24 05:35:42.621327      INFO    Deleting inner map for host

@tesla59 (Contributor, Author) commented May 24, 2024

I think that narrows the bug down to Arch Linux, since both runs are on the same kernel version and the same cluster version.

@harisudarsan1 commented:
@tesla59 Can I know in which container you ran sleep?

@tesla59 (Contributor, Author) commented Jun 18, 2024

@harisudarsan1 on the host itself

@harisudarsan1 commented Jun 18, 2024

KubeArmor isn't able to enforce if it is running inside a k8s cluster. So try deploying the wordpress example given in the examples directory in the KubeArmor repo and check whether it is enforcing or not, and also check karmor probe to see whether the policy is listed.
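
A minimal way to follow this suggestion, assuming the wordpress-mysql example still sits at its usual path in the KubeArmor repo (adjust the URL if it has moved):

# Deploy the sample workload from the KubeArmor examples directory
kubectl apply -f https://raw.githubusercontent.com/kubearmor/KubeArmor/main/examples/wordpress-mysql/wordpress-mysql-deployment.yaml

# Verify KubeArmor state and check that the policy shows up under "Armored Up pods"
karmor probe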

@DelusionalOptimist (Member) commented Jun 20, 2024

We discussed this further on Slack. This can be reproduced consistently with a non-operator Kubernetes installation in an environment which has neither apparmor nor selinux in the list of LSMs, i.e. the output of the LSM file looks like below:

$ cat /sys/kernel/security/lsm
capability,landlock,lockdown,yama,bpf

For context: the kubearmor-policy annotation is set at the node level to record whether the node has an LSM present and whether enforcement is possible using it.

In KubeArmor we check for this condition, and you'll notice that there is no check for the BPF LSM. So on systems which don't have apparmor, the annotation gets set to audited and policy application is skipped by KubeArmor based on this condition (a rough sketch of the logic follows below).
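
To make the failure mode concrete, here is a shell approximation of the decision just described. This is illustrative only: the real check lives in KubeArmor's Go source, and the annotation values are the ones mentioned in this thread:

# What KubeArmor effectively decides today -- note that "bpf" is
# missing from the pattern, so a BPF-LSM-only host is downgraded:
if grep -qE 'apparmor|selinux' /sys/kernel/security/lsm; then
  echo "kubearmor-policy=enabled   # enforcement possible"
else
  echo "kubearmor-policy=audited   # policy application skipped"
fi

# A fixed check would also accept the BPF LSM, e.g.:
# grep -qE 'apparmor|selinux|bpf' /sys/kernel/security/lsm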

Things we need to fix are:

  • Incorrect conclusion about which enforcer is present, drawn from the LSM file.
  • Inability to override the above behavior by manually applying the kubearmor-policy annotation when KubeArmor is not run by the operator.
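
On the second point, the override that currently does not take effect can be attempted with standard kubectl commands (node name taken from this thread):

# Inspect what KubeArmor concluded for the node
kubectl describe node thunderbird | grep kubearmor-policy

# Manually force enforcement mode -- per the bullet above, this is
# currently ignored when KubeArmor is not run by the operator
kubectl annotate node thunderbird kubearmor-policy=enabled --overwrite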

@tesla59 (Contributor, Author) commented Jun 27, 2024

Can confirm. The machine that KubeArmor is running on has bpf active but not apparmor:

$ cat /sys/kernel/security/lsm
capability,landlock,lockdown,yama,bpf
