
Custom Storage Class Definition not working #102

Open
renatowow14 opened this issue Mar 14, 2025 · 6 comments
Labels: bug (Something isn't working), self-hosting

@renatowow14

This works, but only MinIO picks it up:

global:
  defaultStorageClass: "rook-ceph-block"

It doesn't work like this:

postgresql:
  primary:
    persistence:
      storageClass: "rook-ceph-block"

redis:
  primary:
    persistence:
      storageClass: "rook-ceph-block"

clickhouse:
  persistence:
    storageClass: "rook-ceph-block"

(screenshot attached)


dosubot bot commented Mar 14, 2025

Hello @renatowow14! I'm here to help you with any bugs, questions, or contributions you might have. Let's work together to solve the issue you're facing.

If the custom storage class is not being applied to PostgreSQL, Redis, and ClickHouse as expected, there are a few things you can check:

  1. Ensure Correct Indentation and Syntax: YAML is sensitive to indentation. Make sure the indentation and syntax in your configuration file are correct.

  2. Verify Helm Chart Values: Double-check that the values are being correctly overridden in your Helm chart values file. Ensure that the paths and keys match the expected structure of the Helm chart.

  3. Check for Overrides: Ensure there are no other configurations or overrides elsewhere in your Helm chart that might be affecting the storage class settings.

  4. Helm Upgrade: After making changes to your values file, ensure you run helm upgrade to apply the changes to your Kubernetes cluster.

  5. Inspect Helm Templates: Use helm template to render the templates locally and verify that the storage class settings are being applied as expected in the generated Kubernetes manifests.

If these steps do not resolve the issue, it might be helpful to look into the Helm chart's documentation or any known issues related to storage class configurations. If the cluster's default storage class is being used instead, it could indicate that the specified storage class is not being recognized or applied correctly.
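To illustrate point 1, here is a hypothetical before/after. In the first document the `storageClass` key sits one level too high, so Helm silently ignores it; in the second it is nested where the chart expects it:

```yaml
# Ignored: storageClass is a sibling of primary, not under persistence
postgresql:
  primary:
    persistence: {}
  storageClass: "rook-ceph-block"
---
# Applied: storageClass nested under primary.persistence
postgresql:
  primary:
    persistence:
      storageClass: "rook-ceph-block"
```

Rendering both with `helm template` makes the difference visible in the generated PersistentVolumeClaims.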


dosubot added the bug label on Mar 14, 2025
@renatowow14 (Author)

In fact, it was the indentation. But S3 still doesn't get the storage class. Can you help me? I uploaded my values.yaml:

values.txt


dosubot bot commented Mar 14, 2025

To configure the storage class for S3 in the Langfuse Helm chart, ensure your values.yaml file follows the correct structure. Here's the expected format for S3 configuration:

s3:
  persistence:
    storageClass: "minio-storage-class"
  nameOverride: minio
  bucket: <bucket>
  region: <region>
  endpoint: <endpoint>
  forcePathStyle: true
  accessKeyId:
    value: "<access-key-id>"
  secretAccessKey:
    value: "<secret-access-key>"
  eventUpload:
    prefix: "events/"
  batchExport:
    prefix: "exports/"
  mediaUpload:
    prefix: "media/"

Alternatively, you can configure the buckets, endpoints, etc., per upload type:

s3:
  nameOverride: minio
  eventUpload:
    bucket: <bucket>
    endpoint: <endpoint>
    region: <region>
    accessKeyId:
      value: "<access-key-id>"
    secretAccessKey:
      value: "<secret-access-key>"
    prefix: "events/"
    forcePathStyle: true
  batchExport:
    bucket: <bucket>
    endpoint: <endpoint>
    region: <region>
    accessKeyId:
      value: "<access-key-id>"
    secretAccessKey:
      value: "<secret-access-key>"
    prefix: "exports/"
    forcePathStyle: true
  mediaUpload:
    bucket: <bucket>
    endpoint: <endpoint>
    region: <region>
    accessKeyId:
      value: "<access-key-id>"
    secretAccessKey:
      value: "<secret-access-key>"
    prefix: "media/"
    forcePathStyle: true

Make sure the indentation and syntax are correct in your values.yaml file. If the storage class is still not being applied, verify that there are no conflicting configurations elsewhere in your Helm chart that might override these settings.


@Steffen911 (Contributor)

@renatowow14 Just to understand the current state: Everything works correctly aside from the S3 deployment?

Steffen911 self-assigned this on Mar 14, 2025
@renatowow14 (Author)

renatowow14 commented Mar 14, 2025

@Steffen911 I just managed to change the S3 storage class, but now I get another error:

/app/node_modules/.pnpm/[email protected]/node_modules/zod/lib/types.js:160
        throw result.error;
        ^

ZodError: [
  {
    "code": "invalid_type",
    "expected": "string",
    "received": "undefined",
    "path": [
      "LANGFUSE_S3_EVENT_UPLOAD_BUCKET"
    ],
    "message": "Langfuse requires a bucket name for S3 Event Uploads."
  },
  {
    "code": "invalid_type",
    "expected": "string",
    "received": "undefined",
    "path": [
      "CLICKHOUSE_URL"
    ],
    "message": "Required"
  },
  {
    "code": "invalid_type",
    "expected": "string",
    "received": "undefined",
    "path": [
      "CLICKHOUSE_USER"
    ],
    "message": "Required"
  },
  {
    "code": "invalid_type",
    "expected": "string",
    "received": "undefined",
    "path": [
      "CLICKHOUSE_PASSWORD"
    ],
    "message": "Required"
  }
]
    at get error [as error] (/app/node_modules/.pnpm/[email protected]/node_modules/zod/lib/types.js:55:31)
    at ZodObject.parse (/app/node_modules/.pnpm/[email protected]/node_modules/zod/lib/types.js:160:22)
    at Object.<anonymous> (/app/worker/dist/env.js:201:17)
    at Module._compile (node:internal/modules/cjs/loader:1469:14)
    at Module._extensions..js (node:internal/modules/cjs/loader:1548:10)
    at Module.load (node:internal/modules/cjs/loader:1288:32)
    at Module._load (node:internal/modules/cjs/loader:1104:12)
    at Module.require (node:internal/modules/cjs/loader:1311:19)
    at Hook.Module.require (/app/node_modules/.pnpm/[email protected]/node_modules/dd-trace/packages/dd-trace/src/ritm.js:97:33)
    at require (node:internal/modules/helpers:179:18) {
  issues: [
    {
      code: 'invalid_type',
      expected: 'string',
      received: 'undefined',
      path: [ 'LANGFUSE_S3_EVENT_UPLOAD_BUCKET' ],
      message: 'Langfuse requires a bucket name for S3 Event Uploads.'
    },
    {
      code: 'invalid_type',
      expected: 'string',
      received: 'undefined',
      path: [ 'CLICKHOUSE_URL' ],
      message: 'Required'
    },
    {
      code: 'invalid_type',
      expected: 'string',
      received: 'undefined',
      path: [ 'CLICKHOUSE_USER' ],
      message: 'Required'
    },
    {
      code: 'invalid_type',
      expected: 'string',
      received: 'undefined',
      path: [ 'CLICKHOUSE_PASSWORD' ],
      message: 'Required'
    }
  ],
  addIssue: [Function (anonymous)],
  addIssues: [Function (anonymous)],
  errors: [
    {
      code: 'invalid_type',
      expected: 'string',
      received: 'undefined',
      path: [ 'LANGFUSE_S3_EVENT_UPLOAD_BUCKET' ],
      message: 'Langfuse requires a bucket name for S3 Event Uploads.'
    },
    {
      code: 'invalid_type',
      expected: 'string',
      received: 'undefined',
      path: [ 'CLICKHOUSE_URL' ],
      message: 'Required'
    },
    {
      code: 'invalid_type',
      expected: 'string',
      received: 'undefined',
      path: [ 'CLICKHOUSE_USER' ],
      message: 'Required'
    },
    {
      code: 'invalid_type',
      expected: 'string',
      received: 'undefined',
      path: [ 'CLICKHOUSE_PASSWORD' ],
      message: 'Required'
    }
  ]
}

Node.js v20.18.3

For ClickHouse I'm using it like this, only changing the storage class:

clickhouse:
  deploy: true
  persistence:
    storageClass: "rook-ceph-block"
  image:
    pullPolicy: IfNotPresent

I don't want to use an external S3; I want to use the bundled one, but change its storage class to one of my own.
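Given the `s3.persistence.storageClass` structure dosubot quoted earlier in the thread, keeping the bundled MinIO while overriding only its storage class would presumably reduce to the following; the `deploy` key is an assumption, mirrored from the ClickHouse snippet:

```yaml
s3:
  deploy: true  # assumption: keep the chart's bundled MinIO, as with clickhouse.deploy
  persistence:
    storageClass: "rook-ceph-block"
```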

@Steffen911 (Contributor)

@renatowow14 This indicates missing configuration in the Helm chart values. Even if you use the bundled MinIO, most of those values still need to be supplied.
Can you share the full values.yaml you deploy and the version of the Helm chart you use? With one of the recent 1.0.0 release candidates this should be significantly simpler.
