Understanding Automatic Failover Management With Redis Sentinel


Redis Sentinel (image credit: programmerall)

Redis, also known as Remote Dictionary Server, is an advanced key-value datastore. It stores data in memory by default, which makes read and write operations extremely fast. Redis is an open-source project with many contributors.

Since it was initially built to be lightweight, Redis uses the master-slave architecture to achieve replication, data redundancy, and scalability. Here, a master is configured with some number of slaves, usually at least one (the default is zero). The master Redis server handles all write operations while the replicas handle read operations. When the master fails, one of the replicas can be promoted to master because it holds the most recent data.

THE PROBLEM

When running the master-slave architecture in single-instance mode, there is no automatic failover option whereby a Redis slave server can automatically be promoted to master status. The replicas, and the clients connected to the old master, have to be reconfigured manually to point to the new master.

Consider a scenario where a Redis server is configured to be a master:

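On the first terminal, a command like the following starts the master, assuming port 3000 (the master port used for the rest of this article):

redis-server --port 3000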

This master is assigned two slave server instances like so:

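In two more terminals, each slave runs on its own port and is pointed at the master. A minimal version of that setup looks like:

redis-server --port 3001 --slaveof 127.0.0.1 3000
redis-server --port 3002 --slaveof 127.0.0.1 3000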

When we launch a fourth terminal, we can test that our master-slave setup is working by setting a key-value pair:

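With redis-cli pointed at the master:

redis-cli -p 3000 SET favoriteclub "liverpoolfc"
OK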

Then we can verify that the value “liverpoolfc” exists for the key “favoriteclub” on the slave servers by running the following commands:

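Reading the key back from each slave:

redis-cli -p 3001 GET favoriteclub
"liverpoolfc"

redis-cli -p 3002 GET favoriteclub
"liverpoolfc"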

NB: It is best practice to carry out writes on the master and reads on the slave instances. Now, what happens when a master fails? Let us simulate a master failure:

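One way to simulate the failure is simply to stop the master process, for example:

redis-cli -p 3000 SHUTDOWN NOSAVE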

To test that the master is down, run:

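Pinging the dead master now fails:

redis-cli -p 3000 PING
Could not connect to Redis at 127.0.0.1:3000: Connection refused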

Now that the server is down, we need to promote one of the slaves, e.g. 127.0.0.1:3001, to master status and make the other instance, 127.0.0.1:3002, a slave of it:

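Both steps are done with the SLAVEOF command:

redis-cli -p 3001 SLAVEOF NO ONE
redis-cli -p 3002 SLAVEOF 127.0.0.1 3001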

Now we can test it out:

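We write a fresh value to the new master and read it back from the remaining slave (the key name here is purely illustrative):

redis-cli -p 3001 SET newmasterkey "works"
OK

redis-cli -p 3002 GET newmasterkey
"works"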

From this scenario, we can see the manual interventions needed to get the system back up and running. What if we had more than two slaves? We would have to repeat the process for each server instance. And what if the master failed without our noticing for a prolonged period of time?

ENTER REDIS SENTINEL

Redis Sentinel is a system designed to automatically promote a slave to master status in the event that the master fails. In other words, it takes the manual reconfiguration of slaves out of the picture and manages the whole failover process on its own.

To make this possible, Redis ships with a config file, sentinel.conf, where the Sentinel configuration can be specified.

Only the master nodes (redis-server instances) need to be specified; when Sentinel starts, it discovers the slaves automatically by asking the master about them. The config file is updated automatically once Sentinel finds all the slaves, and again in the event of a failover.

Let us demonstrate a very basic Sentinel setup to see how this works. Consider a minimal configuration file sentinel.conf like so:

port 5000
sentinel monitor masterone 127.0.0.1 3000 1
sentinel down-after-milliseconds masterone 5000
sentinel failover-timeout masterone 60000
sentinel parallel-syncs masterone 1

The configuration for the monitor statement is in the following format:

sentinel monitor <master-group-name> <ip> <port> <quorum>

Here, Sentinel is set to watch the master Redis server we created in the previous example at port 3000, under the name masterone. That name identifies the master and its replicas together; Sentinel can monitor several masters and their slaves at the same time.

According to the Redis Sentinel docs, the quorum is “the number of Sentinels that need to agree about the fact the master is not reachable, in order to really mark the master as failing, and eventually start a failover procedure if possible”.

Here, the quorum is set to 1 for demonstration purposes, but it is best practice to set it to a minimum of 2 Sentinels.

The down-after-milliseconds value is 5000 milliseconds; that is, if the master is unreachable for five seconds, it is marked as failing and the failover procedure is started.

You can start Redis with this config file in Sentinel mode using the --sentinel flag like so:

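Assuming sentinel.conf is in the current directory:

redis-server sentinel.conf --sentinel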

Sentinel automatically discovers the master Redis server and its slaves. To view the address of the current master, using the name we specified in the Sentinel config, we run:

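Connecting to Sentinel on port 5000, the port from our config:

redis-cli -p 5000 SENTINEL get-master-addr-by-name masterone
1) "127.0.0.1"
2) "3000"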

We can see the current address of the Redis master server. Once again, let us simulate a master failure with the following command:

redis-cli -p 3000 DEBUG sleep 30

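In the Sentinel log, the failover shows up as a sequence of events roughly like the following (trimmed here; the exact lines will vary from run to run):

+sdown master masterone 127.0.0.1 3000
+odown master masterone 127.0.0.1 3000 #quorum 1/1
+try-failover master masterone 127.0.0.1 3000
+selected-slave slave 127.0.0.1:3002 127.0.0.1 3002 @ masterone 127.0.0.1 3000
+promoted-slave slave 127.0.0.1:3002 127.0.0.1 3002 @ masterone 127.0.0.1 3000
+switch-master masterone 127.0.0.1 3000 127.0.0.1 3002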

We can see the slave at 127.0.0.1:3002 get promoted to master status. So when we ask for the current address of the Redis master server again, we see it has been updated:

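The same query now returns the new master's address:

redis-cli -p 5000 SENTINEL get-master-addr-by-name masterone
1) "127.0.0.1"
2) "3002"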

This solves our initial problem of manually reconfiguring slaves to point to a new master Redis server instance.

NEXT STEPS

It is important to note that basic Redis Sentinel deployments should consist of at least 3 sentinel instances on 3 different machines that can fail independently, with a minimum quorum value of 2.

Even though Redis Sentinel solves our problem of failover automation and high availability, it does not distribute data across multiple Redis instances. Redis Cluster solves this problem.

More information about commands used to manage Redis Sentinel deployments can be found in the docs.



REFERENCES

Redis Sentinel Official Documentation - https://redis.io/topics/sentinel

Redis Essentials, by Maxwell Dayvson Da Silva and Hugo Lopes Tavares
