Helm: Incorrect compactor address with deploymentMode: Distributed #12646

Closed
sentoz opened this issue Apr 17, 2024 · 0 comments · Fixed by #12748
Labels
3.0 area/helm type/bug Something is not working as expected


sentoz commented Apr 17, 2024

Describe the bug
Incorrect compactor address in the rendered configuration when using deploymentMode: Distributed.
The compactor service is named loki-compactor, but the configuration sets common.compactor_address to http://loki:3100.
In the chart code that generates loki.compactorAddress there is no condition for deploymentMode: Distributed.
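
For illustration only, a minimal sketch of the kind of branch that appears to be missing; the helper and value names used below (loki.fullname, the <fullname>-compactor service pattern, loki.server.http_listen_port) are assumptions for the sketch, not the chart's actual code:

{{- define "loki.compactorAddress" -}}
{{- $address := include "loki.fullname" . -}}
{{- if eq .Values.deploymentMode "Distributed" -}}
{{- /* assumed service name pattern: <fullname>-compactor */ -}}
{{- $address = printf "%s-compactor" (include "loki.fullname" .) -}}
{{- end -}}
{{- printf "http://%s:%s" $address (.Values.loki.server.http_listen_port | toString) -}}
{{- end -}}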

To Reproduce
Steps to reproduce the behavior:

  1. Create values.yaml
values.yaml
# fullnameOverride: loki

deploymentMode: Distributed

loki:
  image:
    registry: docker.io
    repository: grafana/loki
    tag: 3.0.0
  revisionHistoryLimit: 5
  schemaConfig: 
    configs:
      - from: 2024-04-01
        store: tsdb
        object_store: s3
        schema: v13
        index:
          prefix: loki_index_
          period: 24h
  storage:
    type: s3
    s3:
      endpoint: s3.ltd
      secretAccessKey: ${LOKI_STORAGE_SECRET_ACCESS_KEY}
      accessKeyId: ${LOKI_STORAGE_ACCESS_KEY_ID}
    bucketNames:
      chunks: loki
  ingester:
    chunk_encoding: snappy
  tracing:
    enabled: false
  querier:
    # Default is 4, if you have enough memory and CPU you can increase, reduce if OOMing
    max_concurrent: 4

serviceAccount:
  create: true
  name: loki

minio:
  enabled: false

gateway:
  enabled: true

ingester:
  replicas: 3
  persistence:
    enabled: true
  zoneAwareReplication:
    enabled: false

distributor:
  replicas: 3
  maxUnavailable: 2

querier:
  replicas: 3
  maxUnavailable: 2
  persistence:
    enabled: true

queryFrontend:
  replicas: 2
  maxUnavailable: 1

queryScheduler:
  replicas: 2
  maxUnavailable: 1

indexGateway:
  replicas: 2
  maxUnavailable: 1

compactor:
  replicas: 1
  persistence:
    enabled: true

ruler:
  replicas: 1
  enabled: false

bloomCompactor:
  replicas: 0
bloomGateway:
  replicas: 0

memcachedExporter:
  enabled: false

resultsCache:
  enabled: false

chunksCache:
  enabled: false

monitoring:
  serviceMonitor:
    enabled: true

tableManager:
  enabled: false

backend:
  replicas: 0
read:
  replicas: 0
write:
  replicas: 0

singleBinary:
  replicas: 0
  2. Render the chart with helm template:
helm template --release-name loki grafana/loki \
  --namespace observability --version 6.2.0 --values ./values.yaml
  3. View the loki ConfigMap; in config.yaml, check the value of the key common.compactor_address (a quick grep check is shown below).
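
As a quick check, the rendered value can also be pulled straight out of the helm template output (plain grep here, purely for convenience):

helm template --release-name loki grafana/loki \
  --namespace observability --version 6.2.0 --values ./values.yaml \
  | grep compactor_address
# with the values above, this prints the line: compactor_address: 'http://loki:3100'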

Expected behavior
The loki.compactorAddress value should be generated correctly, pointing at the compactor service, the same way it already is in the compactor template itself.
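
Presumably, following the pattern of the other component addresses in the rendered config (query scheduler, querier, index gateway), the value would be expected to point at the loki-compactor service, i.e. something like:

common:
  compactor_address: 'http://loki-compactor.observability.svc.cluster.local:3100'

(the exact host form, short service name vs. cluster FQDN, is an assumption here; the point is that it should target the loki-compactor Service rather than http://loki:3100)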

Environment:

  • Infrastructure: Kubernetes
  • Deployment tool: helm

Screenshots, Promtail config, or terminal output

Current configuration produced by helm template:

Configmap-loki.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: loki
  namespace: observability
  labels:
    helm.sh/chart: loki-6.2.0
    app.kubernetes.io/name: loki
    app.kubernetes.io/orig-instance: loki
    app.kubernetes.io/version: "3.0.0"
    app.kubernetes.io/managed-by: Helm
data:
  config.yaml: |
    
    auth_enabled: true
    common:
      compactor_address: 'http://loki:3100'
      path_prefix: /var/loki
      replication_factor: 3
      storage:
        s3:
          access_key_id: ${LOKI_STORAGE_ACCESS_KEY_ID}
          bucketnames: loki
          endpoint: s3.ltd
          insecure: false
          s3forcepathstyle: false
          secret_access_key: ${LOKI_STORAGE_SECRET_ACCESS_KEY}
    frontend:
      scheduler_address: loki-query-scheduler.observability.svc.cluster.local:9095
      tail_proxy_url: http://loki-querier.observability.svc.cluster.local:3100
    frontend_worker:
      scheduler_address: loki-query-scheduler.observability.svc.cluster.local:9095
    index_gateway:
      mode: simple
    ingester:
      chunk_encoding: snappy
    limits_config:
      max_cache_freshness_per_query: 10m
      query_timeout: 300s
      reject_old_samples: true
      reject_old_samples_max_age: 168h
      split_queries_by_interval: 15m
    memberlist:
      join_members:
      - loki-memberlist
    querier:
      max_concurrent: 4
    query_range:
      align_queries_with_step: true
    ruler:
      storage:
        s3:
          access_key_id: ${LOKI_STORAGE_ACCESS_KEY_ID}
          bucketnames: null
          endpoint: s3.ltd
          insecure: false
          s3forcepathstyle: false
          secret_access_key: ${LOKI_STORAGE_SECRET_ACCESS_KEY}
        type: s3
    runtime_config:
      file: /etc/loki/runtime-config/runtime-config.yaml
    schema_config:
      configs:
      - from: "2024-04-01"
        index:
          period: 24h
          prefix: loki_index_
        object_store: s3
        schema: v13
        store: tsdb
    server:
      grpc_listen_port: 9095
      http_listen_port: 3100
      http_server_read_timeout: 600s
      http_server_write_timeout: 600s
    storage_config:
      boltdb_shipper:
        index_gateway_client:
          server_address: dns+loki-index-gateway-headless.observability.svc.cluster.local:9095
      hedging:
        at: 250ms
        max_per_second: 20
        up_to: 3
      tsdb_shipper:
        index_gateway_client:
          server_address: dns+loki-index-gateway-headless.observability.svc.cluster.local:9095
    tracing:
      enabled: false