Custom Storage Class Definition not working #102
Hello @renatowow14! I'm here to help you with any bugs, questions, or contributions you might have. Let's work together to solve the issue you're facing. If the custom storage class is not being applied to PostgreSQL, Redis, and ClickHouse as expected, there are a few things you can check:
If these steps do not resolve the issue, it might be helpful to look into the Helm chart's documentation or any known issues related to storage class configurations. If the cluster's default storage class is being used instead, it could indicate that the specified storage class is not being recognized or applied correctly.
In fact it was the indentation, but S3 does not get the storageClass. Can you help me? I uploaded my values.yaml.
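For context on why indentation matters here: Helm deep-merges the user's `values.yaml` over the chart defaults, so a key indented one level off lands at a different path and the templates silently fall back to the default. A minimal sketch of that behavior (plain Python, not Helm itself; the key paths are illustrative):

```python
# Sketch of Helm-style value merging: a mis-indented key ends up at the
# wrong path, so the template lookup only ever sees the chart default.
def deep_merge(base, override):
    merged = dict(base)
    for key, value in override.items():
        if isinstance(value, dict) and isinstance(merged.get(key), dict):
            merged[key] = deep_merge(merged[key], value)
        else:
            merged[key] = value
    return merged

defaults = {"s3": {"persistence": {"storageClass": ""}}}

# Correctly indented: storageClass sits under s3.persistence
good = {"s3": {"persistence": {"storageClass": "rook-ceph-block"}}}
# One indent level off: storageClass lands directly under s3
bad = {"s3": {"storageClass": "rook-ceph-block"}}

print(deep_merge(defaults, good)["s3"]["persistence"]["storageClass"])
# -> rook-ceph-block
print(deep_merge(defaults, bad)["s3"]["persistence"]["storageClass"])
# -> "" (the mis-indented value is never seen at this path)
```

The mis-indented value is not rejected; it is simply merged in at a path no template reads, which is why the symptom is "the default storage class gets used" rather than an error.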
To configure the storage class for S3 in the Langfuse Helm chart, set it under the `s3` section of your `values.yaml`:

```yaml
s3:
  persistence:
    storageClass: "minio-storage-class"
  nameOverride: minio
  bucket: <bucket>
  region: <region>
  endpoint: <endpoint>
  forcePathStyle: true
  accessKeyId:
    value: "<access-key-id>"
  secretAccessKey:
    value: "<secret-access-key>"
  eventUpload:
    prefix: "events/"
  batchExport:
    prefix: "exports/"
  mediaUpload:
    prefix: "media/"
```

Alternatively, you can configure the buckets, endpoints, etc. per upload type:

```yaml
s3:
  nameOverride: minio
  eventUpload:
    bucket: <bucket>
    endpoint: <endpoint>
    region: <region>
    accessKeyId:
      value: "<access-key-id>"
    secretAccessKey:
      value: "<secret-access-key>"
    prefix: "events/"
    forcePathStyle: true
  batchExport:
    bucket: <bucket>
    endpoint: <endpoint>
    region: <region>
    accessKeyId:
      value: "<access-key-id>"
    secretAccessKey:
      value: "<secret-access-key>"
    prefix: "exports/"
    forcePathStyle: true
  mediaUpload:
    bucket: <bucket>
    endpoint: <endpoint>
    region: <region>
    accessKeyId:
      value: "<access-key-id>"
    secretAccessKey:
      value: "<secret-access-key>"
    prefix: "media/"
    forcePathStyle: true
```

Make sure the indentation and syntax are correct in your `values.yaml`.
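Since a single wrong indent level silently changes the key path, it can help to sanity-check the parsed structure before installing. A small sketch (not part of the chart; the required keys are taken from the per-upload-type example above):

```python
# Sketch: check that each S3 upload type in an already-parsed values
# mapping carries the keys used in the per-upload-type example above.
REQUIRED_KEYS = ("bucket", "endpoint", "region", "prefix")
UPLOAD_TYPES = ("eventUpload", "batchExport", "mediaUpload")

def missing_s3_keys(values):
    """Return dotted paths for required keys that are absent."""
    missing = []
    s3 = values.get("s3", {})
    for section in UPLOAD_TYPES:
        config = s3.get(section, {})
        for key in REQUIRED_KEYS:
            if key not in config:
                missing.append(f"s3.{section}.{key}")
    return missing

values = {
    "s3": {
        "eventUpload": {"bucket": "b", "endpoint": "e", "region": "r", "prefix": "events/"},
        "batchExport": {"bucket": "b", "endpoint": "e", "region": "r"},  # prefix omitted
        "mediaUpload": {"bucket": "b", "endpoint": "e", "region": "r", "prefix": "media/"},
    }
}
print(missing_s3_keys(values))  # -> ['s3.batchExport.prefix']
```

A key set at the wrong indentation level would show up here as "missing" at the path the chart expects, which is exactly the failure mode described in this thread.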
@renatowow14 Just to understand the current state: Everything works correctly aside from the S3 deployment?
@Steffen911 Just now I managed to change the S3 storageClass, but now I have another error:

In ClickHouse I'm using it the same way, just changing the `storageClass` under `clickhouse:`. I don't want to use an external S3; I want to use the default one, but change its storage class to one of my own.
@renatowow14 This indicates missing configuration in the Helm chart values. Even if you use the included MinIO, most of those values need to be supplied.
This works, but only the MinIO deployment picks it up:

```yaml
global:
  defaultStorageClass: "rook-ceph-block"
```

It doesn't work like this:

```yaml
postgresql:
  primary:
    persistence:
      storageClass: "rook-ceph-block"
redis:
  primary:
    persistence:
      storageClass: "rook-ceph-block"
clickhouse:
  persistence:
    storageClass: "rook-ceph-block"
```
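One thing worth double-checking here (an assumption on my side, not confirmed against this chart): Bitnami-style Redis charts nest persistence under `master`, not `primary` (unlike PostgreSQL, which does use `primary`), so a value at `redis.primary.persistence.storageClass` would simply never be read. A quick path-lookup sketch of why an unread key has no effect:

```python
# Sketch: a value set at a path the templates never read has no effect.
# The "master" vs. "primary" key name is an assumption about the Redis
# subchart and should be checked against its own values.yaml.
def get_path(values, dotted, default=None):
    node = values
    for part in dotted.split("."):
        if not isinstance(node, dict) or part not in node:
            return default
        node = node[part]
    return node

user_values = {"redis": {"primary": {"persistence": {"storageClass": "rook-ceph-block"}}}}

# The path the user set vs. the path a Bitnami-style Redis template would read:
print(get_path(user_values, "redis.primary.persistence.storageClass"))  # -> rook-ceph-block
print(get_path(user_values, "redis.master.persistence.storageClass"))   # -> None (never read)
```

Also note that `storageClassName` on an existing PersistentVolumeClaim is immutable in Kubernetes, so values changed after the first install only affect newly created PVCs; existing ones keep the class they were created with.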