```yaml
# https://github.com/grafana/loki/blob/main/production/helm/loki/values.yaml
loki:
  auth_enabled: false # "error from loki: no org id" - https://community.grafana.com/t/error-connecting-loki-data-source-to-kube-prometheus-stack/133296
  schemaConfig:
    configs:
      - from: "2024-04-01"
        store: tsdb
        object_store: s3
        schema: v13
        index:
          prefix: loki_index_
          period: 24h
  storage_config:
    aws:
      region: us-east-1 # will be overridden
      bucketnames: loki-aws-dev-chunks-1 # will be overridden
      s3forcepathstyle: false
  ingester:
    chunk_encoding: snappy
  pattern_ingester:
    enabled: true
  limits_config:
    allow_structured_metadata: true
    volume_enabled: true
    retention_period: 672h # 28 days retention
  compactor:
    retention_enabled: true
    delete_request_store: s3
  ruler:
    enable_api: true
    storage:
      type: s3
      s3:
        region: us-east-1 # will be overridden
        bucketnames: loki-aws-dev-chunks-2 # will be overridden
        s3forcepathstyle: false
    alertmanager_url: http://prom:9093 # The URL of the Alertmanager to send alerts to (Prometheus, Mimir, etc.)
  querier:
    max_concurrent: 4
  storage:
    type: s3
    bucketNames:
      chunks: "loki-aws-dev-chunks-3" # will be overridden
      ruler: "loki-aws-dev-chunks-4" # will be overridden
      region: "us-east-1" # will be overridden
      s3forcepathstyle: false
      # admin: "<Insert s3 bucket name>" # Your actual S3 bucket name (loki-aws-dev-admin) - GEL customers only
    s3:
      region: us-east-1 # will be overridden
      #insecure: false
      # s3forcepathstyle: false

serviceAccount:
  create: true
  name: loki-sa
  annotations:
    "eks.amazonaws.com/role-arn": "arn:aws:iam::<Account ID>:role/LokiServiceAccountRole" # will be overridden

deploymentMode: Distributed

ingester:
  replicas: 1 # 3
  persistence:
    storageClass: gp3
    accessModes:
      - ReadWriteOnce
    size: 10Gi
querier:
  replicas: 1 # 3
  # maxUnavailable: 2
  persistence:
    storageClass: gp3
    accessModes:
      - ReadWriteOnce
    size: 10Gi
queryFrontend:
  replicas: 1 # 2
  maxUnavailable: 1
queryScheduler:
  replicas: 1 # 2
distributor:
  replicas: 1 # 3
  # maxUnavailable: 2
compactor:
  replicas: 1
  persistence:
    storageClass: gp3
    accessModes:
      - ReadWriteOnce
    size: 10Gi
  shared_store: s3
  compaction_interval: 10m
  retention_enabled: true
  retention_delete_delay: 2h
  retention_delete_worker_count: 150
  working_directory: /var/loki/compactor
indexGateway:
  replicas: 1 # 2
  # maxUnavailable: 1
  persistence:
    storageClass: gp3
    accessModes:
      - ReadWriteOnce
    size: 10Gi
ruler:
  replicas: 1
  # maxUnavailable: 1
  persistence:
    storageClass: gp3
    accessModes:
      - ReadWriteOnce
    size: 10Gi

backend:
  replicas: 0
read:
  replicas: 0
write:
  replicas: 0
singleBinary:
  replicas: 0

# https://github.com/grafana/loki/issues/9849
test:
  enabled: false
lokiCanary:
  enabled: false
```
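For reference, a minimal sketch of how the values marked `# will be overridden` above can be supplied at deploy time. The release name, namespace, and override values here are assumptions, not taken from the report:

```bash
# Hypothetical deployment; release name, namespace, and bucket names are assumptions.
helm upgrade --install loki grafana/loki \
  --version 6.23.0 \
  --namespace loki --create-namespace \
  -f values.yaml \
  --set loki.storage_config.aws.region=cn-northwest-1 \
  --set loki.storage_config.aws.bucketnames=<chunks-bucket> \
  --set loki.ruler.storage.s3.region=cn-northwest-1 \
  --set loki.ruler.storage.s3.bucketnames=<ruler-bucket>
```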
Our EKS cluster does not have an active OIDC URL, as we are using Pod Identity.
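Since the `eks.amazonaws.com/role-arn` annotation is the IRSA mechanism (which requires the OIDC provider), with Pod Identity the role is attached through an association instead. A minimal sketch, assuming the `loki-sa` service account from the values above and a hypothetical cluster name; note the `aws-cn` ARN partition in China regions:

```bash
# Hypothetical association; cluster name and role are assumptions.
aws eks create-pod-identity-association \
  --cluster-name <cluster-name> \
  --namespace loki \
  --service-account loki-sa \
  --role-arn "arn:aws-cn:iam::<Account ID>:role/LokiServiceAccountRole"
```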
I notice that `loki.storage.bucketNames.region` is currently `us-east-1`. Could that be related to the compactor's connection to the bucket? For the record, I override `loki.storage_config.aws.region` and `loki.ruler.storage.s3.region` to `cn-northwest-1`, which is the correct region.
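If that mismatch matters, a minimal sketch of the corrected block, assuming the region belongs under `loki.storage.s3` rather than `loki.storage.bucketNames` (as in the upstream values.yaml), with the correct region applied directly:

```yaml
loki:
  storage:
    type: s3
    bucketNames:
      chunks: "loki-aws-dev-chunks-3"
      ruler: "loki-aws-dev-chunks-4"
    s3:
      region: cn-northwest-1
      # endpoint: "https://s3.cn-northwest-1.amazonaws.com.cn"  # may be needed in the China partition
```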
gn-hiro-v changed the title from "[BUG] Loki compactor in AWS CN EKS crashes with InvalidToken: The provided token is malformed or otherwise invalid" to "[BUG] Compactor in AWS CN EKS with S3 - crashes with InvalidToken: The provided token is malformed or otherwise invalid" on Dec 12, 2024.
**Describe the bug**
The `compactor` pod crashes with the error `InvalidToken: The provided token is malformed or otherwise invalid`.

**To Reproduce**
Steps to reproduce the behavior:
1. Install the `loki` chart, version `6.23.0`, from https://grafana.github.io/helm-charts with the values above.

**Expected behavior**
The `compactor` pod works fine.

**Environment:**
- Infrastructure: EKS in AWS China (region `cn-northwest-1`)

**Screenshots, Promtail config, or terminal output**
If applicable, add any output to help explain your problem.
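For completeness, a sketch of how the compactor output could be captured, assuming the default distributed-mode workload names for a release called `loki`:

```bash
# Pod and namespace names are assumptions based on a release named "loki".
kubectl -n loki get pods -l app.kubernetes.io/component=compactor
kubectl -n loki logs statefulset/loki-compactor --previous
```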