timescale can't start up due to /var/lib/postgresql/data not created #72
Well, when I changed to using ceph as the storageClass it ran successfully. Before that I used hostPath (not a local volume, on k8s 1.14) as a static PV, with StorageClasses like below:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: timescaledb-data-local-volume
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
reclaimPolicy: Retain
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: timescaledb-wal-local-volume
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
reclaimPolicy: Retain
```

So does this mean hostPath is not supported, and that I must use a local NVMe SSD on the host?
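For reference, a static PV that would bind to the data StorageClass above might look roughly like the sketch below; the PV name, path and capacity are placeholders I have made up, not values from this issue:

```yaml
# Hypothetical static hostPath PV for the timescaledb-data-local-volume class.
# Name, capacity and path are placeholders, not values taken from this issue.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: timescaledb-data-pv-0
spec:
  capacity:
    storage: 100Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: timescaledb-data-local-volume
  hostPath:
    path: /mnt/timescaledb/data
```

Whatever path such a PV points at is subject to the ownership and permission caveats discussed in the replies below.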
I'm not sure why this is happening. The error message occurs because the user ... For the failing deploy, could you share the following output?
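The exact output requested here was not captured in this extract; the kubectl checks below are only my assumption about the kind of diagnostics that would help with a PVC/pod problem like this one (pod name and namespace are placeholders):

```bash
# Placeholder diagnostics for a failing TimescaleDB pod; adjust names and namespace.
kubectl get pods,pvc,pv -o wide
kubectl describe pod timescaledb-0
kubectl logs timescaledb-0
```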
While thinking about the above, I think the problem is with the permissions on the directory of the HostPath. The Docker image is run as a non-root user, which means the container cannot change the ownership of directories. For the dynamically provisioned volumes (like your ceph ones), this is handled by the securityContext:

```yaml
securityContext:
  # The postgres user inside the TimescaleDB image has uid=1000.
  # This configuration ensures the permissions of the mounts are suitable.
  fsGroup: 1000
```

For troubleshooting, could you set the permissions of your HostPath very liberally (even 0777)? If that solves the problem, the only thing you may need to change is the owner.
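On the node that backs the hostPath, that troubleshooting suggestion would look something like the sketch below; the directory paths are placeholders, and uid/gid 1000 comes from the comment above:

```bash
# Run on the Kubernetes node that hosts the hostPath directories (paths are placeholders).
# Step 1: wide-open permissions, only to confirm that this is a permission problem.
sudo chmod 0777 /mnt/timescaledb/data /mnt/timescaledb/wal

# Step 2: the longer-term fix, matching the postgres uid inside the image.
sudo chown -R 1000:1000 /mnt/timescaledb/data /mnt/timescaledb/wal
```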
The install command should not fail if permissions are correct or if the directories already exist. As a failure on the data directory or the wal directory will also cause Patroni and PostgreSQL to fail, it seems better to fail fast with the error message rather than to continue, which would only clutter the logs with more error messages. For example, in issue #72 the output would have made it very clear that it is a permission problem.
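As an illustration of that fail-fast behaviour, the check could look like the sketch below. This is only an illustration of the idea, not the chart's actual install script; the data path comes from the issue title, while the wal path is a guess:

```bash
# Sketch of a fail-fast permission check; not the chart's actual code.
DATA_DIR=/var/lib/postgresql/data
WAL_DIR=/var/lib/postgresql/wal   # assumed wal location, for illustration only

for dir in "$DATA_DIR" "$WAL_DIR"; do
  if ! mkdir -p "$dir" 2>/dev/null || [ ! -w "$dir" ]; then
    echo "FATAL: cannot create or write to $dir (check ownership/permissions of the mount)" >&2
    exit 1
  fi
done
```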
Thanks for your reply. Changing the permissions to 0777 did not help, but after using a command like ...
> The install command should not fail if permissions are correct or if the directories already exist. As a failure on the data directory or the wal directory will also cause Patroni and PostgreSQL to fail, it seems better to fail fast with the error message rather than to continue, which would only clutter the logs with more error messages. For example, in issue #72 the output would have made it very clear that it is a permission problem.

Hi mate, where did you put this code line? I have the same error.
Same here... where can you run that cmd? I tried to exec into the container, but it's not running.
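Since the pod never reaches a running state, `kubectl exec` won't work; one option (an assumption on my part, not something confirmed in this thread) is to fix the ownership from a one-off pod that mounts the same hostPath, or directly on the node that provides it. A sketch of the pod approach, with a placeholder path:

```yaml
# Hypothetical one-off pod that fixes ownership of the hostPath directory
# when the TimescaleDB pod itself will not start. The path is a placeholder,
# and the pod must be scheduled on the node that actually holds the directory
# (e.g. via nodeName or nodeSelector).
apiVersion: v1
kind: Pod
metadata:
  name: fix-timescaledb-perms
spec:
  restartPolicy: Never
  containers:
    - name: fix-perms
      image: busybox
      command: ["sh", "-c", "chown -R 1000:1000 /fix && chmod 0700 /fix"]
      volumeMounts:
        - name: data
          mountPath: /fix
  volumes:
    - name: data
      hostPath:
        path: /mnt/timescaledb/data
```

Delete the pod once it has completed; after that the TimescaleDB pod should be able to write to the directory as uid 1000.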
I'm using this helm chart to create an HA TimescaleDB cluster, but the timescale pod can't run.
Here are my helm values and pod log.