DietPi OS cannot install #742

Open

nikoxp opened this issue Dec 12, 2024 · 28 comments

nikoxp commented Dec 12, 2024

I wish DietPi could be supported.

eball (Collaborator) commented Dec 13, 2024

Hi @nikoxp, we would appreciate it if you could provide some logs from the failed installation on DietPi, including your SBC device information. We'll run some tests later. Thanks!

nikoxp (Author) commented Dec 13, 2024

the KUBE_TYPE env var is not set, defaulting to "k3s"

olares-cli already installed and is the expected version

downloading installation wizard...

current: root
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x0 pc=0x7488a0]

goroutine 1 [running]:
go.uber.org/zap.(*SugaredLogger).log(0x0, 0x5, {0xc000058a20?, 0x0?}, {0xc00083fc58?, 0x670f4efe00000000?, 0xc00083fb68?}, {0x0, 0x0, 0x0})
/home/runner/go/pkg/mod/go.uber.org/[email protected]/sugar.go:354 +0xa0
go.uber.org/zap.(*SugaredLogger).Fatalf(...)
/home/runner/go/pkg/mod/go.uber.org/[email protected]/sugar.go:235
bytetrade.io/web3os/installer/pkg/core/logger.Fatalf(...)
/home/runner/work/Installer/Installer/pkg/core/logger/logger.go:214
bytetrade.io/web3os/installer/cmd/ctl/os.NewCmdDownloadWizard.func1(0xc000141d00?, {0x1fd7ac9?, 0x4?, 0x1fd7acd?})
/home/runner/work/Installer/Installer/cmd/ctl/os/download.go:48 +0x99
github.com/spf13/cobra.(*Command).execute(0xc00065b808, {0xc00087fa80, 0x8, 0x8})
/home/runner/go/pkg/mod/github.com/spf13/[email protected]/command.go:987 +0xab1
github.com/spf13/cobra.(*Command).ExecuteC(0xc0001bdb08)
/home/runner/go/pkg/mod/github.com/spf13/[email protected]/command.go:1115 +0x3ff
github.com/spf13/cobra.(*Command).Execute(0xc0007ab760?)
/home/runner/go/pkg/mod/github.com/spf13/[email protected]/command.go:1039 +0x13
main.main()
/home/runner/work/Installer/Installer/cmd/main.go:15 +0xd8

eball (Collaborator) commented Dec 13, 2024

(quoting the panic stack trace from the previous comment)

It seems DietPi does not have sudo installed. Could you please try running

apt install sudo

and then reinstall Olares?
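
For reference, a quick way to confirm that sudo is actually present and usable before re-running the installer (a minimal check of my own, not an official step):

# check whether sudo is on the PATH, and install it if missing
command -v sudo || apt install -y sudo
# verify it can be invoked by the current user
sudo -v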

nikoxp (Author) commented Dec 13, 2024

Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
sudo is already the newest version (1.9.5p2-3+deb11u1).
0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.

nikoxp (Author) commented Dec 13, 2024

(quoting the same stack trace and the apt install sudo suggestion from above)

Same error.

eball (Collaborator) commented Dec 13, 2024

We'll run some tests on DietPi later. Could you please provide more information, like the DietPi version, the SBC device info, etc.?
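
In case it helps, these are standard Linux commands for collecting that kind of information (nothing Olares-specific; adjust as needed):

# kernel, architecture, and distribution details
uname -a
cat /etc/os-release
# CPU and memory overview
lscpu
free -h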

nikoxp (Author) commented Dec 13, 2024

DietPi v9.8.0 : 18:52 - Fri 12/13/24
─────────────────────────────────────────────────────

  • Device model : Virtual Machine (x86_64)

nikoxp (Author) commented Dec 19, 2024

0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
W: Target Packages (non-free/binary-amd64/Packages) is configured multiple times in /etc/apt/sources.list:1 and /etc/apt/sources.list:5
W: Target Packages (non-free/binary-all/Packages) is configured multiple times in /etc/apt/sources.list:1 and /etc/apt/sources.list:5
W: Target Packages (contrib/binary-amd64/Packages) is configured multiple times in /etc/apt/sources.list:1 and /etc/apt/sources.list:5
W: Target Packages (contrib/binary-all/Packages) is configured multiple times in /etc/apt/sources.list:1 and /etc/apt/sources.list:5
/bin/bash: line 1: update-pciids: command not found
2024-12-19T15:02:41.218+0800 dp-olares78 failed to update-pciids: Failed to exec command: /bin/sh -c sudo -E /bin/bash -c "update-pciids"
/bin/bash: line 1: update-pciids: command not found: exit status 127
2024-12-19T15:02:41.218+0800 [A] dp-olares78: PatchOs failed (56.93040774s)
2024-12-19T15:02:41.218+0800 [Job] [Prepare the System Environment] execute failed
2024/12/19 15:02:41 error: Module[InstallDeps] exec failed:
failed - dp-olares78: [A] PatchOs: PatchOs exec failed after 3 retires: failed to update-pciids: Failed to exec command: /bin/sh -c sudo -E /bin/bash -c "update-pciids"
/bin/bash: line 1: update-pciids: command not found: exit status 127

eball (Collaborator) commented Dec 19, 2024

It looks like the pciutils installed on your system is too old. You can install the update-pciids script (https://github.com/pciutils/pciutils/blob/master/update-pciids.sh) manually as a workaround.

wget -O /usr/local/bin/update-pciids https://github.com/pciutils/pciutils/blob/master/update-pciids.sh && \
chmod +x /usr/local/bin/update-pciids

nikoxp (Author) commented Dec 19, 2024

unzip is already the newest version (6.0-26+deb11u1).
apache2-utils is already the newest version (2.4.62-1~deb11u2).
0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
W: Target Packages (non-free/binary-amd64/Packages) is configured multiple times in /etc/apt/sources.list:1 and /etc/apt/sources.list:5
W: Target Packages (non-free/binary-all/Packages) is configured multiple times in /etc/apt/sources.list:1 and /etc/apt/sources.list:5
W: Target Packages (contrib/binary-amd64/Packages) is configured multiple times in /etc/apt/sources.list:1 and /etc/apt/sources.list:5
W: Target Packages (contrib/binary-all/Packages) is configured multiple times in /etc/apt/sources.list:1 and /etc/apt/sources.list:5
/usr/local/bin/update-pciids: line 7: syntax error near unexpected token newline' /usr/local/bin/update-pciids: line 7: '
2024-12-19T15:58:05.281+0800 dp-olares78 failed to update-pciids: Failed to exec command: /bin/sh -c sudo -E /bin/bash -c "update-pciids"
/usr/local/bin/update-pciids: line 7: syntax error near unexpected token newline' /usr/local/bin/update-pciids: line 7: ': exit status 2
2024-12-19T15:58:05.281+0800 [A] dp-olares78: PatchOs failed (1m10.118861651s)
2024-12-19T15:58:05.281+0800 [Job] [Prepare the System Environment] execute failed
2024/12/19 15:58:05 error: Module[InstallDeps] exec failed:
failed - dp-olares78: [A] PatchOs: PatchOs exec failed after 3 retires: failed to update-pciids: Failed to exec command: /bin/sh -c sudo -E /bin/bash -c "update-pciids"
/usr/local/bin/update-pciids: line 7: syntax error near unexpected token newline' /usr/local/bin/update-pciids: line 7: ': exit status 2

eball (Collaborator) commented Dec 19, 2024

Sorry, it should be:

wget -O https://raw.githubusercontent.com/pciutils/pciutils/refs/heads/master/update-pciids.sh && \
chmod +x /usr/local/bin/update-pciids

nikoxp (Author) commented Dec 20, 2024

wget -O /usr/local/bin/update-pciids https://raw.githubusercontent.com/pciutils/pciutils/refs/heads/master/update-pciids.sh && chmod +x /usr/local/bin/update-pciids

nikoxp (Author) commented Dec 20, 2024

2024-12-20T08:31:36.208+0800 [A] dp-olares78: ApplyKsInstaller success (458.016306ms)
2024-12-20T08:31:36.208+0800 [Module] DeployKsPlugins
2024-12-20T08:31:36.442+0800 [A] dp-olares78: CheckNodeState success (233.487124ms)
Error from server (AlreadyExists): namespaces "kubesphere-controls-system" already exists
Error from server (AlreadyExists): namespaces "kubesphere-monitoring-federated" already exists
namespace/default not labeled
namespace/default not labeled
namespace/kube-node-lease not labeled
namespace/kube-node-lease not labeled
namespace/kube-public not labeled
namespace/kube-public not labeled
namespace/kube-system not labeled
namespace/kube-system not labeled
namespace/kubekey-system not labeled
namespace/kubekey-system not labeled
namespace/kubesphere-controls-system not labeled
namespace/kubesphere-controls-system not labeled
namespace/kubesphere-monitoring-federated not labeled
namespace/kubesphere-monitoring-federated not labeled
namespace/kubesphere-monitoring-system not labeled
namespace/kubesphere-monitoring-system not labeled
namespace/kubesphere-system not labeled
namespace/kubesphere-system not labeled
2024-12-20T08:31:41.650+0800 [A] dp-olares78: InitKsNamespace success (5.208485386s)
2024-12-20T08:31:41.650+0800 [Module] DeploySnapshotController
customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io configured
customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io configured
customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io configured
2024-12-20T08:31:42.107+0800 dp-olares78 cannot re-use a name that is still in use
customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io configured
customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io configured
customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io configured
2024-12-20T08:31:47.658+0800 dp-olares78 cannot re-use a name that is still in use
2024-12-20T08:31:47.658+0800 [A] dp-olares78: CreateSnapshotController failed (6.006961715s)
2024-12-20T08:31:47.658+0800 [Job] [Install the System] execute failed
2024/12/20 08:31:47 error: Module[DeploySnapshotController] exec failed:
failed - dp-olares78: [A] CreateSnapshotController: CreateSnapshotController exec failed after 2 retires: cannot re-use a name that is still in use

dkeven (Contributor) commented Dec 20, 2024

Hey @nikoxp, the logs you pasted say the failure is caused by trying to create some K8s resources that already exist, so conflicts occurred. This might be because you have executed the install script multiple times. Do you have the logs of your last installation?

If not, you can execute sudo olares-cli olares uninstall -b $HOME/.olares --phase install to roll back the partially installed cluster, and then execute the install script again.

And don't worry, the --phase install option means only the K8s-related components are removed; the other dependencies installed in the previous steps are not affected.
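
For example, assuming the default base directory $HOME/.olares and the standard install script, the rollback-and-retry sequence would look roughly like this (a sketch of the suggestion above, not an official procedure):

# remove only the partially installed Kubernetes-related components
sudo olares-cli olares uninstall -b $HOME/.olares --phase install
# then run the install script again
curl -fsSL https://olares.sh | bash -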

nikoxp (Author) commented Dec 20, 2024

2024-12-20T10:28:16.553+0800 [A] dp-olares78: CreateKubeMonitor success (576.130039ms)
alertmanager.monitoring.coreos.com/main created
Warning: policy/v1beta1 PodDisruptionBudget is deprecated in v1.21+, unavailable in v1.25+; use policy/v1 PodDisruptionBudget
poddisruptionbudget.policy/alertmanager-main created
prometheusrule.monitoring.coreos.com/alertmanager-main-rules created
secret/alertmanager-main created
service/alertmanager-main created
serviceaccount/alertmanager-main created
servicemonitor.monitoring.coreos.com/alertmanager-main created
2024-12-20T10:28:17.077+0800 [A] dp-olares78: CreateAlertManager success (523.769879ms)
2024-12-20T10:28:17.077+0800 [Module] DeployKsCore
2024-12-20T10:28:17.348+0800 dp-olares78 Redis State Pending
2024-12-20T10:28:27.616+0800 dp-olares78 Redis State Pending
2024-12-20T10:28:37.870+0800 dp-olares78 Redis State Pending
2024-12-20T10:28:48.150+0800 dp-olares78 Redis State Pending
2024-12-20T10:28:58.409+0800 dp-olares78 Redis State Pending
2024-12-20T10:29:08.679+0800 dp-olares78 Redis State Pending
2024-12-20T10:29:18.955+0800 dp-olares78 Redis State Pending
2024-12-20T10:29:29.258+0800 dp-olares78 Redis State Pending
2024-12-20T10:29:39.528+0800 dp-olares78 Redis State Pending
2024-12-20T10:29:49.845+0800 dp-olares78 Redis State Pending
2024-12-20T10:29:49.845+0800 [A] dp-olares78: CreateKsCore failed (1m32.767388064s)
2024-12-20T10:29:49.845+0800 [Job] [Install the System] execute failed
2024/12/20 10:29:49 error: Module[DeployKsCore] exec failed:
failed - dp-olares78: [A] CreateKsCore: CreateKsCore exec failed after 10 retires: Redis State Pending

dkeven (Contributor) commented Dec 20, 2024

(quoting the CreateKsCore / Redis State Pending log from the previous comment)

The Redis pod is not starting normally. You can check its state with kubectl -n kubesphere-system describe pod -l app=redis
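
A couple of related checks can help narrow this down (standard kubectl usage on my part, not specific to Olares):

# quick view of the pod's phase and restart count
kubectl -n kubesphere-system get pod -l app=redis
# recent events in the namespace often show scheduling or volume errors
kubectl -n kubesphere-system get events --sort-by=.lastTimestamp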

nikoxp (Author) commented Dec 20, 2024

root@dp-olares78:~# kubectl -n kubesphere-system describe pod -l app=redis
Name: redis-d744b7468-pvs6w
Namespace: kubesphere-system
Priority: 0
Node:
Labels: app=redis
pod-template-hash=d744b7468
tier=database
version=redis-4.0
Annotations:
Status: Pending
IP:
IPs:
Controlled By: ReplicaSet/redis-d744b7468
Init Containers:
init:
Image: redis:5.0.14-alpine
Port:
Host Port:
Command:
sh
-c
cat /tmp/redis/redis.conf | sed "s/REDIS_PASSWORD/$KUBESPHERE_REDIS_PASSWORD/" > /data/redis.conf
Environment:
KUBESPHERE_REDIS_PASSWORD: <set to the key 'auth' in secret 'redis-secret'> Optional: false
Mounts:
/data from redis-pvc (rw,path="redis-data")
/tmp/redis from redis-config (ro)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-5wm8k (ro)
Containers:
redis:
Image: redis:5.0.14-alpine
Port: 6379/TCP
Host Port: 0/TCP
Args:
/data/redis.conf
Limits:
cpu: 1
memory: 1000Mi
Requests:
cpu: 20m
memory: 100Mi
Environment:
Mounts:
/data from redis-pvc (rw,path="redis-data")
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-5wm8k (ro)
Volumes:
redis-pvc:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: redis-pvc
ReadOnly: false
redis-config:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: redis-configmap
Optional: false
kube-api-access-5wm8k:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional:
DownwardAPI: true
QoS Class: Burstable
Node-Selectors:
Tolerations: CriticalAddonsOnly op=Exists
node-role.kubernetes.io/master:NoSchedule
node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:

dkeven (Contributor) commented Dec 20, 2024

(quoting the kubectl describe output from the previous comment)

It seems like the pod is stuck in the Pending state, which means it has not been scheduled the whole time. This might be an issue with the K3s core service.

You can execute sudo journalctl -u k3s -S -20m to view the K3s logs and look for suspicious entries, especially the ones that seem related to scheduling.
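
If the output is long, filtering it can make the relevant lines easier to spot (a simple filter on top of the command above; adjust the patterns as needed):

# show only error- or scheduling-related lines from the last 20 minutes
sudo journalctl -u k3s -S -20m | grep -iE "error|schedul|failed"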

nikoxp (Author) commented Dec 20, 2024

log.txt
Dec 20 10:32:17 dp-olares78 k3s[56983]: E1220 10:32:17.990298 56983 remote_runtime.go:228] "CreateContainer in sandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd container: get apparmor_parser version: exec: "apparmor_parser": executable file not found in $PATH" podSandboxID="fd92e9a958459bd3c5d4488074edfe374411ae4e4701da7c07a0a28e90eee619"
Dec 20 10:32:17 dp-olares78 k3s[56983]: E1220 10:32:17.990833 56983 kuberuntime_manager.go:864] container &Container{Name:openebs-provisioner-hostpath,Image:openebs/provisioner-localpv:3.3.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:OPENEBS_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:OPENEBS_SERVICE_ACCOUNT,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.serviceAccountName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:OPENEBS_IO_ENABLE_ANALYTICS,Value:true,ValueFrom:nil,},EnvVar{Name:OPENEBS_IO_INSTALLER_TYPE,Value:openebs-operator-lite,ValueFrom:nil,},EnvVar{Name:OPENEBS_IO_HELPER_IMAGE,Value:openebs/linux-utils:3.3.0,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-27dp8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:&Probe{Handler:Handler{Exec:&ExecAction{Command:[sh -c test $(pgrep -c "^provisioner-loc.*") = 1],},HTTPGet:nil,TCPSocket:nil,},InitialDelaySeconds:30,TimeoutSeconds:1,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod openebs-localpv-provisioner-7f95444755-5znhm_kube-system(48b67202-6838-47e8-80c0-64cd885e9ea4): CreateContainerError: failed to create containerd container: get apparmor_parser version: exec: "apparmor_parser": executable file not found in $PATH
Dec 20 10:32:17 dp-olares78 k3s[56983]: E1220 10:32:17.991020 56983 pod_workers.go:190] "Error syncing pod, skipping" err="failed to "StartContainer" for "openebs-provisioner-hostpath" with CreateContainerError: "failed to create containerd container: get apparmor_parser version: exec: \"apparmor_parser\": executable file not found in $PATH"" pod="kube-system/openebs-localpv-provisioner-7f95444755-5znhm" podUID=48b67202-6838-47e8-80c0-64cd885e9ea4
Dec 20 10:32:20 dp-olares78 k3s[56983]: E1220 10:32:20.993775 56983 remote_runtime.go:228] "CreateContainer in sandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd container: get apparmor_parser version: exec: "apparmor_parser": executable file not found in $PATH" podSandboxID="c44f3e1df634692a68c523440b85c2c47ad583bdbc614cb21465a1b45f6d470c"
Dec 20 10:32:20 dp-olares78 k3s[56983]: E1220 10:32:20.994769 56983 kuberuntime_manager.go:864] container &Container{Name:prometheus-operator,Image:kubesphere/prometheus-operator:v0.55.1,Command:[],Args:[--kubelet-service=kube-system/kubelet --prometheus-config-reloader=kubesphere/prometheus-config-reloader:v0.55.1],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:http,HostPort:0,ContainerPort:8080,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{209715200 0} {} BinarySI},},Requests:ResourceList{cpu: {{100 -3} {} 100m DecimalSI},memory: {{104857600 0} {} 100Mi BinarySI},},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-k2rw4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod prometheus-operator-5cb4879d77-kb5xj_kubesphere-monitoring-system(d1cbb08a-8886-473a-84ca-5ebd68846d73): CreateContainerError: failed to create containerd container: get apparmor_parser version: exec: "apparmor_parser": executable file not found in $PATH
Dec 20 10:32:21 dp-olares78 k3s[56983]: E1220 10:32:21.003317 56983 remote_runtime.go:228] "CreateContainer in sandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd container: get apparmor_parser version: exec: "apparmor_parser": executable file not found in $PATH" podSandboxID="c44f3e1df634692a68c523440b85c2c47ad583bdbc614cb21465a1b45f6d470c"

dkeven (Contributor) commented Dec 20, 2024

That's the cause of it. AppArmor is a Linux security module needed by containerd to run containers, but your system does not have it. Normally we install the missing apparmor_parser program on tested distros like Ubuntu; you can check for the package under $HOME/.olares/pkg/components/apparmor_4.0.1-0ubuntu1.deb.

But I'm not sure whether this package alone is compatible with and sufficient for your system, even though both are based on Debian. You can give dpkg -i $HOME/.olares/pkg/components/apparmor_4.0.1-0ubuntu1.deb a shot, for what it's worth.

If that fails, maybe try sudo apt install apparmor apparmor-utils.
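
Either way, a quick way to verify AppArmor afterwards might be (a rough check; whether K3s needs a restart here is my assumption):

# confirm apparmor_parser is now on the PATH
command -v apparmor_parser
# optionally restart K3s so containerd picks it up (assumes the k3s systemd unit used above)
sudo systemctl restart k3s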

nikoxp (Author) commented Dec 20, 2024

Install the System execute successfully!!!

dkeven (Contributor) commented Dec 20, 2024

Install the System execute successfully!!!

Very glad to hear that! Are you able to log in to the activation page with the credentials printed in the console?

nikoxp (Author) commented Dec 20, 2024

udev 10M 0 10M 0% /dev
/dev/sda1 119G 44G 70G 39% /
tmpfs 5.9G 0 5.9G 0% /dev/shm
tmpfs 2.4G 96M 2.3G 5% /run
tmpfs 5.0M 0 5.0M 0% /run/lock
tmpfs 1.0G 75M 950M 8% /tmp
tmpfs 50M 7.6M 43M 16% /var/log
tmpfs 5.9G 12K 5.9G 1% /var/lib/kubele

nikoxp (Author) commented Dec 20, 2024

Install the System execute successfully!!!

Very glad to hear that! Are you able to log in to the activation page with the credentials printed in the console?

yes

nikoxp (Author) commented Jan 7, 2025

Installing version 1.11 gives an error:

curl -fsSL https://olares.sh | bash -
the KUBE_TYPE env var is not set, defaulting to "k3s"

olares-cli already installed and is the expected version

file /root/.olares/.prepared detected, skip preparing phase

installing Olares...

current: root
2025-01-07T11:31:16.956+0800 unable to read manifest, open /root/.olares/versions/v1.11.0/installation.manifest: no such file or directory
2025-01-07T11:31:16.956+0800 [FATAL] open /root/.olares/versions/v1.11.0/installation.manifest: no such file or directory
bytetrade.io/web3os/installer/pkg/phase/cluster.InstallSystemPhase
/home/runner/work/Installer/Installer/pkg/phase/cluster/install.go:18
bytetrade.io/web3os/installer/pkg/pipelines.CliInstallTerminusPipeline
/home/runner/work/Installer/Installer/pkg/pipelines/install_terminus.go:45
bytetrade.io/web3os/installer/cmd/ctl/os.NewCmdInstallOs.func1
/home/runner/work/Installer/Installer/cmd/ctl/os/install.go:26
github.com/spf13/cobra.(*Command).execute
/home/runner/go/pkg/mod/github.com/spf13/[email protected]/command.go:987
github.com/spf13/cobra.(*Command).ExecuteC
/home/runner/go/pkg/mod/github.com/spf13/[email protected]/command.go:1115
github.com/spf13/cobra.(*Command).Execute
/home/runner/go/pkg/mod/github.com/spf13/[email protected]/command.go:1039
main.main
/home/runner/work/Installer/Installer/cmd/main.go:15
runtime.main
/opt/hostedtoolcache/go/1.22.4/x64/src/runtime/proc.go:271

eball (Collaborator) commented Jan 7, 2025

@nikoxp It seems you did not uninstall the previous version of Olares before installing the new version.

You can execute

olares-uninstall.sh

and then rerun

curl -fsSL https://olares.sh/ | bash -
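
For reference, the full clean-reinstall pass would then look roughly like this (a sketch based on the two commands above; run as root or adjust with sudo as needed):

# remove the previous installation
olares-uninstall.sh
# then reinstall the latest version
curl -fsSL https://olares.sh/ | bash -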

nikoxp (Author) commented Jan 7, 2025

serviceaccount/alertmanager-main created
servicemonitor.monitoring.coreos.com/alertmanager-main created
2025-01-07T12:44:40.262+0800 [A] dp-olares78: CreateAlertManager success (671.92779ms)
2025-01-07T12:44:40.262+0800 [Module] DeployKsCore
2025-01-07T12:44:40.766+0800 dp-olares78 Redis State Pending
2025-01-07T12:44:51.396+0800 dp-olares78 Redis State Pending
2025-01-07T12:45:01.929+0800 dp-olares78 Redis State Pending
2025-01-07T12:45:12.264+0800 dp-olares78 Redis State Pending
2025-01-07T12:45:22.659+0800 dp-olares78 Redis State Pending
2025-01-07T12:45:32.955+0800 dp-olares78 Redis State Pending
2025-01-07T12:45:43.294+0800 dp-olares78 Redis State Pending
2025-01-07T12:45:53.567+0800 dp-olares78 Redis State Pending
2025-01-07T12:46:03.859+0800 dp-olares78 Redis State Pending
2025-01-07T12:46:14.144+0800 dp-olares78 Redis State Pending
2025-01-07T12:46:14.144+0800 [A] dp-olares78: CreateKsCore failed (1m33.881698681s)
2025-01-07T12:46:14.145+0800 [Job] [Install the System] execute failed
2025/01/07 12:46:14 error: Module[DeployKsCore] exec failed:
failed - dp-olares78: [A] CreateKsCore: CreateKsCore exec failed after 10 retires: Redis State Pending

dkeven (Contributor) commented Jan 7, 2025

(quoting the CreateKsCore / Redis State Pending log from the previous comment)

Hey, this seems like a familiar issue in this thread: the Redis pod is not starting normally. You can check its state with kubectl -n kubesphere-system describe pod -l app=redis
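
Since the earlier occurrence of this in the thread was traced to the OpenEBS provisioner failing for lack of apparmor_parser, it may also be worth re-checking the same things after the reinstall (a suggestion based on the earlier findings here, not a confirmed diagnosis):

# is the Redis PVC bound? (claim name taken from the earlier describe output)
kubectl -n kubesphere-system get pvc redis-pvc
# is apparmor_parser still available after the reinstall?
command -v apparmor_parser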
