
Can't use Redis as cache server #2133

Description

@chris-ng-1987

Hi, I am new to Cortex and working on a PoC to see how it works and whether it suits my infrastructure.
While testing a configuration that uses Redis as the cache server, I get the following errors whenever I run a query:

level=error ts=2020-02-14T04:58:54.186907969Z caller=redis_cache.go:87 msg="failed to get from redis" name=store.index-cache-read.redis err="ERR wrong number of arguments for 'mget' command"
level=error ts=2020-02-14T04:58:54.187315046Z caller=redis_cache.go:87 msg="failed to get from redis" name=store.index-cache-read.redis err="ERR wrong number of arguments for 'mget' command"
level=error ts=2020-02-14T04:58:54.187487524Z caller=redis_cache.go:87 msg="failed to get from redis" name=chunksredis err="ERR wrong number of arguments for 'mget' command"
level=error ts=2020-02-14T04:58:54.189418228Z caller=redis_cache.go:87 msg="failed to get from redis" name=chunksredis err="ERR wrong number of arguments for 'mget' command"

I am using Cortex v0.6.1 and Redis server version 5.
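
If it helps narrow things down: Redis returns exactly this message when MGET is sent with zero keys, so it looks like the cache client is issuing an MGET with an empty key batch. Below is a minimal sketch that reproduces the server-side error, assuming the gomodule/redigo client and the same placeholder endpoint/password as in the config below:

```go
package main

import (
	"fmt"

	"github.com/gomodule/redigo/redis"
)

func main() {
	// Placeholder endpoint and password, matching the config below.
	conn, err := redis.Dial("tcp", "my-redis-host:6379", redis.DialPassword("xxxxx"))
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	// MGET with zero keys is rejected by the server with
	// "ERR wrong number of arguments for 'mget' command".
	if _, err := conn.Do("MGET"); err != nil {
		fmt.Println(err)
	}
}
```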

This is the config file I am using:

apiVersion: v1
kind: ConfigMap
metadata:
  name: cortex
  namespace: kube-tools
data:
  single-process-config.yaml: |
    # Disable the requirement that every request to Cortex has a
    # X-Scope-OrgID header. `fake` will be substituted in instead.
    auth_enabled: false
    server:
      http_listen_port: 9009
      # Configure the server to allow messages up to 100MB.
      grpc_server_max_recv_msg_size: 104857600
      grpc_server_max_send_msg_size: 104857600
      grpc_server_max_concurrent_streams: 1000
    distributor:
      shard_by_all_labels: true
      pool:
        health_check_ingesters: true
    ingester_client:
      grpc_client_config:
        # Configure the client to allow messages up to 100MB.
        max_recv_msg_size: 104857600
        max_send_msg_size: 104857600
        use_gzip_compression: true
    ingester:
      #chunk_idle_period: 15m
      lifecycler:
        # The address to advertise for this ingester.  Will be autodiscovered by
        # looking up address on eth0 or en0; can be specified if this fails.
        # address: 127.0.0.1
        # We want to start immediately and flush on shutdown.
        join_after: 0
        claim_on_rollout: false
        final_sleep: 0s
        num_tokens: 512
        # Use an in memory ring store, so we don't need to launch a Consul.
        ring:
          kvstore:
            store: inmemory
          replication_factor: 1
    # Use local storage - BoltDB for the index, and the filesystem
    # for the chunks.
    schema:
      configs:
      - from: 2019-07-29
        store: boltdb
        object_store: filesystem
        schema: v10
        index:
          prefix: index_
          period: 168h
    storage:
      boltdb:
        directory: /tmp/cortex/index
      filesystem:
        directory: /tmp/cortex/chunks
      index_queries_cache_config:
        redis:
          endpoint: my-redis-host:6379
          password: xxxxx
    limits:
      ingestion_rate: 100000
      max_series_per_metric: 0
    chunk_store:
      chunk_cache_config:
        redis:
          endpoint: my-redis-host:6379
          password: xxxxx
      write_dedupe_cache_config:
        redis:
          endpoint: my-redis-host:6379
          password: xxxxx
    query_range:
      results_cache:
        cache:
          redis:
            endpoint: my-redis-host:6379
            password: xxxxx
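
To separate the mget error from connection or auth problems, a PING round-trip against the same endpoint is a quick sanity check. A small sketch, again assuming gomodule/redigo and the placeholder endpoint/password from the ConfigMap above:

```go
package main

import (
	"fmt"
	"time"

	"github.com/gomodule/redigo/redis"
)

func main() {
	// Same placeholder endpoint and password as in the ConfigMap above.
	conn, err := redis.Dial("tcp", "my-redis-host:6379",
		redis.DialPassword("xxxxx"),
		redis.DialConnectTimeout(3*time.Second),
	)
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	// A successful PING means the endpoint and password are usable, which
	// points the mget failure at the request itself rather than connectivity.
	pong, err := redis.String(conn.Do("PING"))
	if err != nil {
		panic(err)
	}
	fmt.Println(pong) // PONG
}
```

In my case PING succeeds with these credentials, so the failure above does not look like a reachability or auth issue.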
