Before you can deploy this RabbitMQ cluster, you will need a multi-zone Kubernetes cluster with at least 3 worker nodes, each with 4 CPUs and 10Gi of RAM available for RabbitMQ.
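
As a rough illustration of how those per-node requirements map onto the cluster definition, assuming this example targets the RabbitMQ Cluster Kubernetes Operator's `RabbitmqCluster` custom resource (the manifest shipped in this directory is the authoritative definition):

```yaml
# Illustrative sketch only; the resource name is hypothetical and the real
# manifest in this directory is the source of truth.
apiVersion: rabbitmq.com/v1beta1
kind: RabbitmqCluster
metadata:
  name: production-ready
spec:
  replicas: 3
  resources:
    requests:
      cpu: 4
      memory: 10Gi
    limits:
      cpu: 4
      memory: 10Gi
```

Setting requests equal to limits gives each RabbitMQ node a guaranteed, predictable slice of its worker node's capacity.
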
A `storageClass` named `ssd` will need to be defined too.
Feel free to use the [GKE-specific example](ssd-gke.yaml) included in this example for reference.
Each RabbitMQ node will provision a 500Gi persistent volume of type `ssd`.
Read more about the expected disk performance [in the Google Cloud documentation](https://cloud.google.com/compute/docs/disks/performance#ssd_persistent_disk); disk write throughput is the limiting factor for persistent messages with an 8kB payload.

This configuration is a requirement for sustaining 1 billion persistent messages per day with an 8kB payload each and a replication factor of three using [quorum queues](https://www.rabbitmq.com/quorum-queues.html), which provide excellent data safety for workloads that require message replication.

To deploy this RabbitMQ cluster, run the following:
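
The exact command depends on the manifest filename in this directory; as a sketch, assuming the cluster definition is called `rabbitmq.yaml`:

```shell
# Create the ssd StorageClass first (GKE example), then the RabbitMQ cluster itself.
kubectl apply -f ssd-gke.yaml
kubectl apply -f rabbitmq.yaml   # hypothetical filename - use the manifest in this directory
```
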

This example is a good starting point for a production RabbitMQ deployment, but it may not be suitable for **your use-case**.
We needed a RabbitMQ cluster that can sustain 1 billion persistent messages per day at 8kB payload and a replication factor of three using [quorum queues](https://www.rabbitmq.com/quorum-queues.html).
The rest of the workload details are outlined in this [monthly cost savings calculator](https://rabbitmq.com/tanzu#calculator).

While a RabbitMQ cluster with sufficient resources is important for production, it is equally important for your applications to use RabbitMQ correctly.
Applications that open and close connections frequently, use polling consumers, or consume one message at a time are common causes of a "slow" RabbitMQ.

The official [Production Checklist](https://www.rabbitmq.com/production-checklist.html) will help you optimise RabbitMQ for your use-case.

## Q & A

### Are 4 CPUs per RabbitMQ node a minimum requirement for production?

No. The absolute minimum is 2 CPUs.

Our workload - 1 billion persistent messages per day of 8kB payload and a replication factor of three - requires 4 CPUs per node.

### Will RabbitMQ work with 1 CPU?
Yes. It will work, but poorly, which is why we cannot recommend it for production workloads.

A RabbitMQ with less than 2 full CPUs cannot be considered "production".

### Can I assign less than 1 CPU to RabbitMQ?
Yes, this is entirely possible within Kubernetes.
Be prepared for unresponsiveness that cannot be explained.
The kernel will work against RabbitMQ's runtime optimisations, and anything can happen.

A RabbitMQ with less than 2 full CPUs cannot be considered "production".
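
For illustration only (the field names assume the same `RabbitmqCluster` resource as above), a fractional CPU request would look roughly like this; it is not a recommended production setting:

```yaml
# Hypothetical fragment; Kubernetes accepts fractional CPUs ("millicores"),
# but running RabbitMQ like this is not recommended for production.
spec:
  resources:
    requests:
      cpu: 500m   # half a CPU
    limits:
      cpu: 500m
```
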
### Does CPU clock speed matter for message throughput?

Yes. Queues are single threaded, and CPUs with higher clock speeds can run more cycles, which means that a queue process can perform more operations per second.
This will not be the case when disks or network are the limiting factor, but in benchmarks with sufficient network and disk capacity, faster CPUs usually translate to higher message throughput.

### Are vCPUs (virtual CPUs) OK?

Yes. The workload that was used for this production configuration ran on Google Cloud and used 2 real CPU cores with 2 hyper-threads each, meaning 4 vCPUs.
While we recommend real CPUs and no hyper-threading, we also operate in the cloud and default to using vCPUs, including for our benchmarks.