
james-s-tayler/redis-playground


Redis Playground

An observable dev environment for understanding Redis

This is a .NET Core 3.1 app that uses the StackExchange.Redis client library to communicate with Redis. It is instrumented with OpenTelemetry + Jaeger for traces and Prometheus + Grafana for metrics, all running in docker-compose.
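The moving parts can be pictured with a docker-compose sketch. This is a hypothetical outline, not the repo's actual file — the service names and the app's build context are assumptions, though the images and default ports for Redis, Jaeger, Prometheus, and Grafana are the standard ones:

```yaml
version: "3.7"
services:
  app:                                  # the .NET Core 3.1 app (build context assumed)
    build: .
    depends_on: [redis, jaeger]
  redis:
    image: redis:6
    ports: ["6379:6379"]
  jaeger:
    image: jaegertracing/all-in-one     # collects traces; UI on 16686
    ports: ["16686:16686", "6831:6831/udp"]
  prometheus:
    image: prom/prometheus              # scrapes the app's metrics endpoint
    ports: ["9090:9090"]
  grafana:
    image: grafana/grafana              # dashboards over Prometheus
    ports: ["3000:3000"]
```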

Getting Started

Reading Materials on Redis

TL;DR Redis High Availability

  • Redis Sentinel
    • The goal of Redis Sentinel is to provide reliable automatic failover in a master/slave topology without sharding data
    • Sentinel is a separate process from the Redis server and it listens on its own port
    • sentinel.conf stores the location of the master node
      • this is updated/re-written on failover, or when new sentinel or slave joins
    • Communication between sentinels happens via a pub/sub channel on the master
    • Sentinels PING the master, and if a quorum of sentinels does not receive a PONG from it within "down-after-milliseconds", they trigger failover
    • CAP theorem analysis suggests Redis Sentinel is not strongly consistent in the case of network partition
    • Requires a client that supports Redis Sentinel
    • Developed and released before Redis Cluster when Antirez had less experience in distributed systems
  • Redis Cluster
    • The goal of Redis Cluster is to distribute data across different Redis instances and perform automatic failover if any problem happens to any master instance
    • Single process, but requires 2 ports (one for Redis server process, one for communicating between Redis instances)
    • Requires AT LEAST 3 masters to be considered healthy.
      • You should have at least one replica per master, otherwise if a master fails its data will be lost
      • Furthermore the entire cluster may become unavailable under the default configuration if no replicas are present
        • cluster-require-full-coverage is the configuration that controls this behavior.
          • it defaults to yes meaning the entire cluster becomes unavailable if some hash slots are not reachable
          • setting it to no means that queries routed to hash slots that have become unreachable simply return an error while queries routed to reachable hash slots remain available
      • In addition, multiple replicas per master are recommended: when a master fails and one of its replicas is promoted, the newly promoted master then still has a replica of its own. With only one replica per master, a promoted master is left with none, and if it were to fail in turn the cluster could become unavailable
        • to save on cost it is possible to give most masters a single replica and one master two replicas. Provided cluster-migration-barrier is set to 1, when a master with a single replica fails and failover completes, the "spare" replica migrates from the master with two replicas to become a replica of the newly promoted master
      • Promoting a slave to a master takes some time and data on the failed master will be unavailable for a short time until the failover is complete
    • All data is sharded across masters and replicated to slaves.
      • data is partitioned across 16,384 "hash slots" with each master owning a portion of the slots
      • var hashSlot = crc16(redisKey) % 16384
      • there is no automatic redistribution of slots across masters
        • by default they are distributed evenly across the masters when you create the cluster unless you specify otherwise
      • any multi-key operations require all the keys to be stored on the same node
        • "hash tags" can be used for this
          • The hash tag is the part of the key enclosed in curly braces; Redis hashes only the tag, so all keys sharing the same tag map to the same hash slot
        • SADD {user123}:friends:usa "john"
        • SADD {user123}:friends:brazil "bruno"
        • SUNION {user123}:all_friends {user123}:friends:usa {user123}:friends:brazil
    • Requires client to support Redis Cluster
      • redis-cli requires passing -c to enable cluster mode, else it will treat Redis as a single instance
    • CAP theorem analysis
      • like Sentinel, still not strongly consistent under network partition scenarios
  • RedisRaft
    • An experimental Redis module that uses the Raft consensus algorithm to provide strongly consistent replication and failover
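The availability knobs discussed above are ordinary redis.conf directives. A minimal excerpt matching the cost-saving setup described above (directive names are from the stock redis.conf; the timeout value is an arbitrary example):

```
# redis.conf excerpt for a cluster node
cluster-enabled yes
cluster-node-timeout 5000          # ms before a node is considered failing
cluster-require-full-coverage no   # keep serving reachable slots during a partial outage
cluster-migration-barrier 1        # a master may donate a replica as long as it keeps 1
```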
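The hash-slot and hash-tag rules above can be sketched from scratch. Redis Cluster uses the CRC-16/XMODEM variant (polynomial 0x1021) modulo 16384, and hashes only the content of the first non-empty {...} tag when one is present; this is an illustrative sketch, real clients ship their own implementation:

```python
def crc16(data: bytes) -> int:
    """CRC-16/XMODEM, the variant specified by Redis Cluster."""
    crc = 0
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            crc = ((crc << 1) ^ 0x1021) if crc & 0x8000 else crc << 1
            crc &= 0xFFFF
    return crc

def hash_slot(key: str) -> int:
    """Return the cluster hash slot for a key, honoring {hash tags}."""
    start = key.find("{")
    if start != -1:
        end = key.find("}", start + 1)
        # Only a non-empty tag between the first { and the following } counts.
        if end != -1 and end != start + 1:
            key = key[start + 1:end]
    return crc16(key.encode()) % 16384

# All keys from the hash-tag example share the tag "user123", so they land
# in the same slot and the SUNION across them is legal in cluster mode.
print(hash_slot("{user123}:friends:usa") == hash_slot("{user123}:all_friends"))  # True
```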

150+ Talks on Redis
