1 change: 1 addition & 0 deletions CHANGELOG.md
@@ -73,6 +73,7 @@
* [CHANGE] Query Frontend: instant queries now honor the `-querier.max-retries-per-request` flag. #630
* [CHANGE] Alertmanager: removed `-alertmanager.storage.*` configuration options, with the exception of the CLI flags `-alertmanager.storage.path` and `-alertmanager.storage.retention`. Use `-alertmanager-storage.*` instead. #632
* [CHANGE] Ingester: active series metrics `cortex_ingester_active_series` and `cortex_ingester_active_series_custom_tracker` are now removed when their value is zero. #672 #690
* [CHANGE] Querier / ruler: removed the `-store.query-chunk-limit` flag (and its respective YAML config option `max_chunks_per_query`). `-querier.max-fetched-chunks-per-query` (and its respective YAML config option `max_fetched_chunks_per_query`) should be used instead. #705
* [FEATURE] Query Frontend: Add `cortex_query_fetched_chunks_total` per-user counter to expose the number of chunks fetched as part of queries. This metric can be enabled with the `-frontend.query-stats-enabled` flag (or its respective YAML config option `query_stats_enabled`). #31
* [FEATURE] Query Frontend: Add experimental querysharding for the blocks storage (instant and range queries). You can now enable querysharding for blocks storage (`-store.engine=blocks`) by setting `-query-frontend.parallelize-shardable-queries` to `true`. The following additional config and exported metrics have been added. #79 #80 #100 #124 #140 #148 #150 #151 #153 #154 #155 #156 #157 #158 #159 #160 #163 #169 #172 #196 #205 #225 #226 #227 #228 #230 #235 #240 #239 #246 #244 #319 #330 #371 #385 #400 #458 #586 #630 #660
* New config options:
14 changes: 2 additions & 12 deletions docs/configuration/config-file-reference.md
@@ -3335,21 +3335,11 @@ The `limits_config` configures default and per-tenant limits imposed by services
# CLI flag: -ingester.max-global-exemplars-per-user
[max_global_exemplars_per_user: <int> | default = 0]

# Deprecated. Use -querier.max-fetched-chunks-per-query CLI flag and its
# respective YAML config option instead. Maximum number of chunks that can be
# fetched in a single query. This limit is enforced when fetching chunks from
# the long-term storage only. When using chunks storage, this limit is enforced
# in the querier and ruler, while when using blocks storage this limit is
# enforced in the querier, ruler and store-gateway. 0 to disable.
# CLI flag: -store.query-chunk-limit
[max_chunks_per_query: <int> | default = 2000000]

# Maximum number of chunks that can be fetched in a single query from ingesters
# and long-term storage. This limit is enforced in the querier, ruler and
# store-gateway. Takes precedence over the deprecated -store.query-chunk-limit.
# 0 to disable.
# store-gateway. 0 to disable.
# CLI flag: -querier.max-fetched-chunks-per-query
[max_fetched_chunks_per_query: <int> | default = 0]
[max_fetched_chunks_per_query: <int> | default = 2000000]
@andyasp (Contributor, Author) commented on Jan 6, 2022:
I wasn't sure if changing this default was correct or not. max_fetched_chunks_per_query previously could not have a default, since setting it would override max_chunks_per_query. With that entanglement now gone, I thought it made sense to inherit the old default rather than be left unlimited.

A Contributor replied:
I think setting this to the 2M limit is correct for how this limit behaves today. In blocks_store_queryable we call MaxChunksPerQueryFromStore, which runs the limits.MaxChunksPerQueryFromStore code below. Since max_fetched_chunks_per_query defaults to 0 (disabled), the logic in MaxChunksPerQueryFromStore falls back to the max_chunks_per_query limit of 2M, because max_fetched_chunks_per_query is not greater than 0.

However, I think the original intention was for this limit to be disabled by default, so maybe 0 is what we want here. 2M chunks is a large amount, so I don't see us hitting that limit often, if ever, today; disabling this limit shouldn't affect anything.

I'm also only looking at the blocks path here, since the intention is to remove the chunks storage path.
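
To make the precedence described above concrete, here is a minimal sketch of the pre-PR resolution, mirroring the Overrides.MaxChunksPerQueryFromStore method removed further down in this diff (the helper name is illustrative, not Cortex code):

```go
// Sketch of the old behavior: the new option, when set, won over the
// deprecated one, which is why it could not carry its own default.
func effectiveStoreChunkLimit(maxFetched, deprecatedMax int) int {
	if maxFetched > 0 {
		return maxFetched // -querier.max-fetched-chunks-per-query wins
	}
	return deprecatedMax // fall back to -store.query-chunk-limit
}
```

With the shipped defaults, effectiveStoreChunkLimit(0, 2e6) returns 2e6, so carrying 2e6 over as the new default preserves the current behavior.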

A Contributor added:
Actually, looking at the original PR, I see Marco say:

(in my opinion having the limit applied to ingesters too is better than having a true global limit, considering ingesters are currently unbounded, but I'm open to feedback).

So maybe the intention here was to carry that limit through so it also applies to the ingesters and store-gateway, in which case 2M would be correct as the default.


# The maximum number of unique series for which a query can fetch samples from
# each ingesters and blocks storage. This limit is enforced in the querier only
2 changes: 1 addition & 1 deletion pkg/chunk/chunk_store.go
@@ -360,7 +360,7 @@ func (c *store) getMetricNameChunks(ctx context.Context, userID string, from, th
filtered := filterChunksByTime(from, through, chunks)
level.Debug(log).Log("Chunks post filtering", len(chunks))

maxChunksPerQuery := c.limits.MaxChunksPerQueryFromStore(userID)
maxChunksPerQuery := c.limits.MaxChunksPerQuery(userID)
A Contributor commented:
Rather than fixing this code (and other files in this package), we should remove it. But until then, this is fine.

@andyasp (Contributor, Author) replied:
Yeah I agree, it felt a little fruitless making some of these edits with the removal pending. I did it in this order to help myself understand it in isolation. I've been looking at the query path removal story, so if that goes well hopefully most of the package will be deleted soon.

if maxChunksPerQuery > 0 && len(filtered) > maxChunksPerQuery {
err := QueryError(fmt.Sprintf("Query %v fetched too many chunks (%d > %d)", allMatchers, len(filtered), maxChunksPerQuery))
level.Error(log).Log("err", err)
2 changes: 1 addition & 1 deletion pkg/chunk/composite_store.go
@@ -19,7 +19,7 @@ import (

// StoreLimits helps get Limits specific to Queries for Stores
type StoreLimits interface {
MaxChunksPerQueryFromStore(userID string) int
MaxChunksPerQuery(userID string) int
MaxQueryLength(userID string) time.Duration
}

2 changes: 1 addition & 1 deletion pkg/chunk/series_store.go
@@ -117,7 +117,7 @@ func (c *seriesStore) Get(ctx context.Context, userID string, from, through mode
chunks := chks[0]
fetcher := fetchers[0]
// Protect ourselves against OOMing.
maxChunksPerQuery := c.limits.MaxChunksPerQueryFromStore(userID)
maxChunksPerQuery := c.limits.MaxChunksPerQuery(userID)
if maxChunksPerQuery > 0 && len(chunks) > maxChunksPerQuery {
err := QueryError(fmt.Sprintf("Query %v fetched too many chunks (%d > %d)", allMatchers, len(chunks), maxChunksPerQuery))
level.Error(log).Log("err", err)
2 changes: 1 addition & 1 deletion pkg/chunk/storage/factory.go
@@ -78,7 +78,7 @@ func RegisterIndexStore(name string, indexClientFactory IndexClientFactoryFunc,
// StoreLimits helps get Limits specific to Queries for Stores
type StoreLimits interface {
CardinalityLimit(userID string) int
MaxChunksPerQueryFromStore(userID string) int
MaxChunksPerQuery(userID string) int
MaxQueryLength(userID string) time.Duration
}

4 changes: 2 additions & 2 deletions pkg/querier/blocks_store_queryable.go
@@ -100,7 +100,7 @@ type BlocksStoreLimits interface {
bucket.TenantConfigProvider

MaxLabelsQueryLength(userID string) time.Duration
MaxChunksPerQueryFromStore(userID string) int
MaxChunksPerQuery(userID string) int
StoreGatewayTenantShardSize(userID string) int
}

@@ -452,7 +452,7 @@ func (q *blocksStoreQuerier) selectSorted(sp *storage.SelectHints, matchers ...*
resSeriesSets = []storage.SeriesSet(nil)
resWarnings = storage.Warnings(nil)

maxChunksLimit = q.limits.MaxChunksPerQueryFromStore(q.userID)
maxChunksLimit = q.limits.MaxChunksPerQuery(q.userID)
leftChunksLimit = maxChunksLimit

resultMtx sync.Mutex
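
The budget pattern visible in the hunk above can be summarized with a short sketch. This is assumed from the variable names shown in the diff; checkChunksBudget and its inputs are illustrative, not the actual querier code:

```go
import "fmt"

// Sketch: a fixed per-query chunk limit with a remaining budget that
// shrinks as chunk counts come back from each store-gateway batch.
// A limit of 0 disables the check entirely.
func checkChunksBudget(maxChunksLimit int, batchChunkCounts []int) error {
	leftChunksLimit := maxChunksLimit
	for _, numChunks := range batchChunkCounts {
		if maxChunksLimit > 0 {
			if numChunks > leftChunksLimit {
				return fmt.Errorf("max chunks per query limit exceeded (limit: %d)", maxChunksLimit)
			}
			leftChunksLimit -= numChunks
		}
	}
	return nil
}
```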
2 changes: 1 addition & 1 deletion pkg/querier/blocks_store_queryable_test.go
@@ -1761,7 +1761,7 @@ func (m *blocksStoreLimitsMock) MaxLabelsQueryLength(_ string) time.Duration {
return m.maxLabelsQueryLength
}

func (m *blocksStoreLimitsMock) MaxChunksPerQueryFromStore(_ string) int {
func (m *blocksStoreLimitsMock) MaxChunksPerQuery(_ string) int {
return m.maxChunksPerQuery
}

2 changes: 1 addition & 1 deletion pkg/storegateway/bucket_stores.go
@@ -622,7 +622,7 @@ func newChunksLimiterFactory(limits *validation.Overrides, userID string) Chunks
// Since limit overrides could be live reloaded, we have to get the current user's limit
// each time a new limiter is instantiated.
return &chunkLimiter{
limiter: NewLimiter(uint64(limits.MaxChunksPerQueryFromStore(userID)), failedCounter),
limiter: NewLimiter(uint64(limits.MaxChunksPerQuery(userID)), failedCounter),
}
}
}
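
For context on what newChunksLimiterFactory wires up: a minimal sketch of a Thanos-style chunks limiter, under the assumption (not verified against the store-gateway source) that Reserve accumulates a running count and fails once the total passes the limit, with 0 disabling the check. The type below is illustrative:

```go
import (
	"fmt"
	"sync/atomic"

	"github.com/prometheus/client_golang/prometheus"
)

// limiter tracks how many chunks a query has touched so far.
type limiter struct {
	limit    uint64             // 0 means the limit is disabled
	reserved atomic.Uint64      // running total, safe across goroutines
	failed   prometheus.Counter // e.g. the failedCounter passed above
}

// Reserve accounts for num more chunks and errors out once the
// running total exceeds the limit.
func (l *limiter) Reserve(num uint64) error {
	if l.limit == 0 {
		return nil
	}
	if l.reserved.Add(num) > l.limit {
		l.failed.Inc()
		return fmt.Errorf("exceeded chunks limit: %d", l.limit)
	}
	return nil
}
```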
2 changes: 1 addition & 1 deletion pkg/storegateway/gateway_test.go
@@ -1193,7 +1193,7 @@ func TestStoreGateway_SeriesQueryingShouldEnforceMaxChunksPerQueryLimit(t *testi
t.Run(testName, func(t *testing.T) {
// Customise the limits.
limits := defaultLimitsConfig()
limits.MaxChunksPerQueryFromStore = testData.limit
limits.MaxChunksPerQuery = testData.limit
overrides, err := validation.NewOverrides(limits, nil)
require.NoError(t, err)

16 changes: 1 addition & 15 deletions pkg/util/validation/limits.go
@@ -77,7 +77,6 @@ type Limits struct {
MaxGlobalExemplarsPerUser int `yaml:"max_global_exemplars_per_user" json:"max_global_exemplars_per_user"`

// Querier enforced limits.
MaxChunksPerQueryFromStore int `yaml:"max_chunks_per_query" json:"max_chunks_per_query"` // TODO Remove in Cortex 1.12.
MaxChunksPerQuery int `yaml:"max_fetched_chunks_per_query" json:"max_fetched_chunks_per_query"`
MaxFetchedSeriesPerQuery int `yaml:"max_fetched_series_per_query" json:"max_fetched_series_per_query"`
MaxFetchedChunkBytesPerQuery int `yaml:"max_fetched_chunk_bytes_per_query" json:"max_fetched_chunk_bytes_per_query"`
@@ -168,8 +167,7 @@ func (l *Limits) RegisterFlags(f *flag.FlagSet) {
f.IntVar(&l.MaxGlobalMetadataPerMetric, "ingester.max-global-metadata-per-metric", 0, "The maximum number of metadata per metric, across the cluster. 0 to disable.")
f.IntVar(&l.MaxGlobalExemplarsPerUser, "ingester.max-global-exemplars-per-user", 0, "The maximum number of exemplars in memory, across the cluster. 0 to disable exemplars ingestion.")

f.IntVar(&l.MaxChunksPerQueryFromStore, "store.query-chunk-limit", 2e6, "Deprecated. Use -querier.max-fetched-chunks-per-query CLI flag and its respective YAML config option instead. Maximum number of chunks that can be fetched in a single query. This limit is enforced when fetching chunks from the long-term storage only. When using chunks storage, this limit is enforced in the querier and ruler, while when using blocks storage this limit is enforced in the querier, ruler and store-gateway. 0 to disable.")
f.IntVar(&l.MaxChunksPerQuery, "querier.max-fetched-chunks-per-query", 0, "Maximum number of chunks that can be fetched in a single query from ingesters and long-term storage. This limit is enforced in the querier, ruler and store-gateway. Takes precedence over the deprecated -store.query-chunk-limit. 0 to disable.")
f.IntVar(&l.MaxChunksPerQuery, "querier.max-fetched-chunks-per-query", 2e6, "Maximum number of chunks that can be fetched in a single query from ingesters and long-term storage. This limit is enforced in the querier, ruler and store-gateway. 0 to disable.")
f.IntVar(&l.MaxFetchedSeriesPerQuery, "querier.max-fetched-series-per-query", 0, "The maximum number of unique series for which a query can fetch samples from each ingesters and blocks storage. This limit is enforced in the querier only when running with blocks storage. 0 to disable")
f.IntVar(&l.MaxFetchedChunkBytesPerQuery, "querier.max-fetched-chunk-bytes-per-query", 0, "The maximum size of all chunks in bytes that a query can fetch from each ingester and storage. This limit is enforced in the querier and ruler only when running with blocks storage. 0 to disable.")
f.Var(&l.MaxQueryLength, "store.max-query-length", "Limit the query time range (end - start time). This limit is enforced in the query-frontend (on the received query), in the querier (on the query possibly split by the query-frontend) and in the chunks storage. 0 to disable.")
@@ -422,18 +420,6 @@ func (o *Overrides) MaxGlobalSeriesPerMetric(userID string) int {
return o.getOverridesForUser(userID).MaxGlobalSeriesPerMetric
}

// MaxChunksPerQueryFromStore returns the maximum number of chunks allowed per query when fetching
// chunks from the long-term storage.
func (o *Overrides) MaxChunksPerQueryFromStore(userID string) int {
// If the new config option is set, then it should take precedence.
if value := o.getOverridesForUser(userID).MaxChunksPerQuery; value > 0 {
return value
}

// Fallback to the deprecated config option.
return o.getOverridesForUser(userID).MaxChunksPerQueryFromStore
}

func (o *Overrides) MaxChunksPerQuery(userID string) int {
return o.getOverridesForUser(userID).MaxChunksPerQuery
}
38 changes: 0 additions & 38 deletions pkg/util/validation/limits_test.go
@@ -12,7 +12,6 @@ import (
"testing"
"time"

"github.com/grafana/dskit/flagext"
"github.com/prometheus/common/model"
"github.com/prometheus/prometheus/model/relabel"
"github.com/stretchr/testify/assert"
@@ -76,43 +75,6 @@ func TestLimits_Validate(t *testing.T) {
}
}

func TestOverrides_MaxChunksPerQueryFromStore(t *testing.T) {
tests := map[string]struct {
setup func(limits *Limits)
expected int
}{
"should return the default legacy setting with the default config": {
setup: func(limits *Limits) {},
expected: 2000000,
},
"the new config option should take precedence over the deprecated one": {
setup: func(limits *Limits) {
limits.MaxChunksPerQueryFromStore = 10
limits.MaxChunksPerQuery = 20
},
expected: 20,
},
"the deprecated config option should be used if the new config option is unset": {
setup: func(limits *Limits) {
limits.MaxChunksPerQueryFromStore = 10
},
expected: 10,
},
}

for testName, testData := range tests {
t.Run(testName, func(t *testing.T) {
limits := Limits{}
flagext.DefaultValues(&limits)
testData.setup(&limits)

overrides, err := NewOverrides(limits, nil)
require.NoError(t, err)
assert.Equal(t, testData.expected, overrides.MaxChunksPerQueryFromStore("test"))
})
}
}

func TestOverridesManager_GetOverrides(t *testing.T) {
tenantLimits := map[string]*Limits{}

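
With the fallback test above deleted, a natural follow-up (not part of this PR; just a sketch reusing the helpers from the removed test) would pin the new default so a regression cannot silently disable the limit:

```go
func TestOverrides_MaxChunksPerQueryDefault(t *testing.T) {
	limits := Limits{}
	flagext.DefaultValues(&limits)

	overrides, err := NewOverrides(limits, nil)
	require.NoError(t, err)

	// After this PR the option carries the old 2e6 default directly,
	// with no deprecated fallback involved.
	assert.Equal(t, 2000000, overrides.MaxChunksPerQuery("test"))
}
```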