2 changes: 2 additions & 0 deletions CHANGELOG.md
@@ -23,6 +23,8 @@
* [FEATURE] Added readiness probe endpoint `/ready` to queriers. #1934
* [FEATURE] EXPERIMENTAL: Added `/series` API endpoint support with TSDB blocks storage. #1830
* [FEATURE] Added "multi" KV store that can interact with two other KV stores, primary one for all reads and writes, and secondary one, which only receives writes. Primary/secondary store can be modified in runtime via runtime-config mechanism (previously "overrides"). #1749
* [FEATURE] EXPERIMENTAL: Close stale TSDBs after a period of not receiving writes. #1958
* `--experimental.tsdb.max-stale-age`
* [ENHANCEMENT] Added `password` and `enable_tls` options to redis cache configuration. Enables usage of Microsoft Azure Cache for Redis service.
* [ENHANCEMENT] Experimental TSDB: Open existing TSDB on startup to prevent ingester from becoming ready before it can accept writes. #1917
* `--experimental.tsdb.max-tsdb-opening-concurrency-on-startup`
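To illustrate the `--experimental.tsdb.max-stale-age` flag from the changelog entry above, a hypothetical ingester invocation might look like the following. The `1h` value and the exact command line are invented for illustration; the PR does not document a default.

```shell
# Hypothetical example: ask the ingester to close TSDBs that have not
# received writes for one hour. The 1h value is illustrative only.
cortex --target=ingester --experimental.tsdb.max-stale-age=1h
```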
7 changes: 7 additions & 0 deletions pkg/ingester/ingester.go
@@ -43,6 +43,7 @@ type ingesterMetrics struct {
queriedSamples prometheus.Histogram
queriedSeries prometheus.Histogram
queriedChunks prometheus.Histogram
cleanupDuration prometheus.Histogram
}

func newIngesterMetrics(r prometheus.Registerer) *ingesterMetrics {
@@ -81,6 +82,11 @@ func newIngesterMetrics(r prometheus.Registerer) *ingesterMetrics {
// A small number of chunks per series - 10*(8^(7-1)) = 2.6m.
Buckets: prometheus.ExponentialBuckets(10, 8, 7),
}),
cleanupDuration: prometheus.NewHistogram(prometheus.HistogramOpts{
Contributor commented:

Maybe we could convert it into a metric that tracks the duration of a single TSDB offload operation? Two reasons:

  1. Right now the duration depends on how many TSDBs are closed in a single cleanup operation.
  2. From a UX perspective, it's not very clear what "cleanup" really does.

Name: "cortex_ingester_cleanup_duration",
Help: "The time it takes to perform a cleanup operation.",
Buckets: prometheus.DefBuckets,
}),
}

if r != nil {
@@ -92,6 +98,7 @@ func newIngesterMetrics(r prometheus.Registerer) *ingesterMetrics {
m.queriedSamples,
m.queriedSeries,
m.queriedChunks,
m.cleanupDuration,
)
}
