
"Query does not fit in a single sharding configuration" after changing schema to TSDB #10771

Closed
nektarios-d opened this issue Oct 3, 2023 · 7 comments · Fixed by #13029

Comments

@nektarios-d

Describe the bug
After changing the schema config from boltdb-shipper to TSDB, we get the following errors in the logs when we run a query.
ts=2023-10-03T07:45:36.818777014Z caller=spanlogger.go:86 user=fake level=error msg="failed to get schema config, not applying querySizeLimit" err="Query does not fit in a single sharding configuration"
ts=2023-10-03T07:45:36.818998318Z caller=spanlogger.go:86 middleware=QueryShard.astMapperware org_id=fake traceID=4a4b53d9c04c37e4 org_id=fake traceID=4a4b53d9c04c37e4 level=warn err="Query does not fit in a single sharding configuration" msg="skipped AST mapper for request"

To Reproduce
Steps to reproduce the behavior:

  1. Add config for TSDB according to https://grafana.com/docs/loki/latest/operations/storage/tsdb/
  2. Restart the Loki container
  3. After the from date in the new schema config has passed, issue a query to Loki (see the sketch below)
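
A minimal sketch of step 3, assuming a local Loki instance on port 3100 and a hypothetical job label; the point is that the query's time range crosses the 2023-10-03 boundary between the two schema periods:

// Hypothetical reproduction of step 3: query a range that crosses the
// 2023-10-03 schema boundary. Address, label selector and timestamps are assumptions.
package main

import (
	"fmt"
	"io"
	"net/http"
	"net/url"
)

func main() {
	params := url.Values{}
	params.Set("query", `{job="example"}`)      // hypothetical label selector
	params.Set("start", "2023-10-02T20:00:00Z") // before the TSDB "from" date
	params.Set("end", "2023-10-03T04:00:00Z")   // after it, so the range spans both schema periods

	resp, err := http.Get("http://localhost:3100/loki/api/v1/query_range?" + params.Encode())
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	body, _ := io.ReadAll(resp.Body)
	fmt.Println(resp.Status)
	fmt.Println(string(body)) // results come back fine; the error only shows up in Loki's own logs
}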

Expected behavior
Loki to run without errors

Environment:

  • Infrastructure: docker
  • Loki version 2.9.1

Relevant Config:

limits_config:
  split_queries_by_interval: 10h
  ingestion_rate_mb: 128
  ingestion_burst_size_mb: 256
  max_streams_per_user: 0
  max_global_streams_per_user: 0
  per_stream_rate_limit: 80MB
  per_stream_rate_limit_burst: 100MB
  enforce_metric_name: false
  reject_old_samples: true
  reject_old_samples_max_age: 168h
  max_query_series: 100000
  max_query_parallelism: 6
  tsdb_max_query_parallelism: 512 # default

query_scheduler:
  max_outstanding_requests_per_tenant: 32768

querier:
  query_ingesters_within: 7h
  max_concurrent: 16

schema_config:
  configs:
    - from: 2023-01-01
      store: boltdb-shipper
      object_store: aws
      schema: v11
      index:
        prefix: loki_index_
        period: 24h
    - from: "2023-10-03" # <---- A date in the future. The date we switch to TSDB from UTC 00:00:0
      index:
        period: 24h
        prefix: loki_index_
      object_store: aws
      schema: v12 # Current recommended schema version
      store: tsdb
@marcusteixeira (Contributor) commented Oct 3, 2023

I think I can add some comments on this.

I updated from version 2.8 to 2.9.1 to use the multi-store index support.

Previously I was already using the TSDB shipper and this error did not occur.

So I believe the problem is not tied to the introduction of TSDB itself, but rather to the update to version 2.9.

[screenshots]

level.Error(log).Log("msg", "failed to get schema config, not applying querySizeLimit", "err", err)

@marcusteixeira (Contributor)

@nektarios-d

What does your storage_config look like? In particular, what are your shared_store assignments?

@nektarios-d (Author) commented Oct 3, 2023

@marcusteixeira

storage_config:
  aws:
    s3: "s3://<redacted>:<redacted>@eu-west-2/<bucket_name>
  boltdb_shipper:
    active_index_directory: /data/loki/index
    shared_store: s3
    cache_ttl: 24h
    cache_location: /data/loki/boltdb-cache
  tsdb_shipper:
    active_index_directory: /data/loki/tsdb-index
    cache_location: /data/loki/tsdb-cache
    cache_ttl: 24h                           # default
    shared_store: s3

@kadhamecha-conga

I'm also facing the same issue after migrating to TSDB. I'm using Loki 2.8.2.

@darxriggs (Contributor)

I am also seeing the reported error message with Loki 2.9.3 and a changed schema.

Apart from this error message, the query results seem to be fine nevertheless.

Looking into the code and the description in pull request #9050, which introduced it, helps to understand it better.
This scenario can happen, for example, when querying a time range that spans different schemas.
When querying a time range that falls entirely within either the old or the new schema, no error is logged.
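
As a rough illustration of that behavior, here is a simplified sketch (assumed logic, not the actual Loki implementation): a query only fits a single sharding configuration if its whole time range falls inside one schema period.

// Assumed, simplified logic: resolve a single schema period for [start, end],
// returning the error from the logs when the range crosses a period boundary.
package main

import (
	"errors"
	"fmt"
	"time"
)

// periodConfig is a stand-in for one entry of schema_config.configs.
type periodConfig struct {
	from  time.Time // start of this schema period
	store string    // e.g. "boltdb-shipper" or "tsdb"
}

var errSpansPeriods = errors.New("Query does not fit in a single sharding configuration")

// schemaFor returns the single period covering [start, end], or an error
// when the range crosses a period boundary.
func schemaFor(periods []periodConfig, start, end time.Time) (periodConfig, error) {
	for i, p := range periods {
		periodEnd := time.Now() // the last period is open-ended
		if i+1 < len(periods) {
			periodEnd = periods[i+1].from
		}
		if !start.Before(p.from) && !end.After(periodEnd) {
			return p, nil
		}
	}
	return periodConfig{}, errSpansPeriods
}

func main() {
	periods := []periodConfig{
		{from: time.Date(2023, 1, 1, 0, 0, 0, 0, time.UTC), store: "boltdb-shipper"},
		{from: time.Date(2023, 10, 3, 0, 0, 0, 0, time.UTC), store: "tsdb"},
	}

	// Entirely inside the TSDB period: resolves to a single schema, no error.
	_, err := schemaFor(periods,
		time.Date(2023, 10, 4, 0, 0, 0, 0, time.UTC),
		time.Date(2023, 10, 4, 6, 0, 0, 0, time.UTC))
	fmt.Println(err) // <nil>

	// Spans the 2023-10-03 boundary: this is the case that produces the logged error.
	_, err = schemaFor(periods,
		time.Date(2023, 10, 2, 20, 0, 0, 0, time.UTC),
		time.Date(2023, 10, 3, 4, 0, 0, 0, time.UTC))
	fmt.Println(err) // Query does not fit in a single sharding configuration
}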

@trevorwhitney can you confirm that this is not an issue in the described scenario?

@lindeskar (Contributor)

We saw the same error today (should it be a warning?) after changing from v12 to v13 in preparation for Loki 3.0:

- from: 2024-04-25
  store: tsdb
  object_store: gcs
  schema: v13
  index:
    prefix: loki_index_
    period: 24h

@trevorwhitney (Collaborator)

This should probably be a warning-level log message.
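
A sketch of what that change would look like, mirroring the line quoted earlier (the eventual fix is in #13029 and may differ):

level.Warn(log).Log("msg", "failed to get schema config, not applying querySizeLimit", "err", err)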
