
Commit 46b2a73

micheleRP and paulohtb6 authored
DOC-1521 Update memory requirements in sizing (#1223)
Co-authored-by: Paulo Borges <[email protected]>
1 parent 96bb269 commit 46b2a73

File tree

1 file changed: +18 -6 lines changed

  • modules/deploy/pages/deployment-option/self-hosted/manual/sizing.adoc

modules/deploy/pages/deployment-option/self-hosted/manual/sizing.adoc

Lines changed: 18 additions & 6 deletions
@@ -14,12 +14,24 @@ Throughput and retention requirements can cause bottlenecks in the system. On an
 
 In general, choose the number of nodes based on the following criteria, and add nodes for fault tolerance. This ensures that the system can operate with full throughput in a degraded state.
 
-* Total network bandwidth must account for maximum simultaneous writes and reads, multiplied by the replication factor.
-* Have a minimum of 2 GB memory for each core, but more memory is better.
-* Have a minimum of 4 MB of memory for each topic partition replica. You can enforce this requirement in the tunable xref:reference:tunable-properties.adoc#topic_memory_per_partition[`topic_memory_per_partition` property].
-* Use Tiered Storage to unify historical and real-time data.
-* For AWS, test with either i3en, i4i, or is4gen; for GCP, test with n2-standard (GCP); and for Azure, test with lsv2. In general, choose instance types that prioritize storage and network performance.
-* Run hardware and Redpanda benchmark tests for a performance baseline.
+* **Network bandwidth**: Total bandwidth must account for maximum simultaneous writes and reads, multiplied by the replication factor.
+* **Memory per core**: Allocate a minimum of 2 GB memory for each CPU core. Additional memory could improve performance.
+* **Memory per partition**: Allocate a minimum of 2 MB of memory for each topic partition replica.
++
+For example: If you have 10,000 total partitions with a replication factor of 3, you need at least 60 GB of memory across all brokers:
++
+10,000 partitions × 3 replicas × 2 MB = 60,000 MB (60 GB)
++
+If these partitions are evenly distributed across three brokers, each broker needs at least 20 GB of memory.
+* **Storage strategy**: Use Tiered Storage to unify historical and real-time data cost-effectively.
+* **Performance testing**: Run hardware and Redpanda benchmark tests to establish a performance baseline. Choose instance types that prioritize storage and network performance:
++
+--
+* **AWS**: Test with i3en (NVMe SSD), i4i (NVMe), or is4gen (Intel-based NVMe) instances, or other NVMe-backed types
+* **Azure**: Test with Lsv2-series instances (high I/O performance) or other high-performance storage types
+* **GCP**: Test with n2-standard instances with local SSD or other high-performance local storage types
+--
+
 
 == Sizing considerations
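
The **Memory per partition** and **Memory per core** bullets added in this commit combine into a simple per-broker memory floor. Below is a minimal Python sketch of that arithmetic, assuming the 2 GB-per-core and 2 MB-per-replica figures from the diff and an even spread of partition replicas across brokers; the function name, parameters, and the 8-core broker in the usage line are illustrative, not Redpanda tooling.

[source,python]
----
# Rough broker-memory floor based on the figures in the updated doc:
# at least 2 GB of memory per CPU core and at least 2 MB per topic
# partition replica. Illustrative only; not part of Redpanda or rpk.

def min_broker_memory_gb(partitions: int,
                         replication_factor: int,
                         brokers: int,
                         cores_per_broker: int,
                         gb_per_core: float = 2.0,
                         mb_per_partition_replica: float = 2.0) -> float:
    """Return the minimum memory in GB that each broker needs."""
    total_replicas = partitions * replication_factor
    # Assumes replicas are spread evenly across brokers, as in the doc's example.
    replicas_per_broker = total_replicas / brokers
    partition_floor_gb = replicas_per_broker * mb_per_partition_replica / 1000
    core_floor_gb = cores_per_broker * gb_per_core
    # Both minimums must hold, so the larger one wins.
    return max(partition_floor_gb, core_floor_gb)


# Worked example from the diff: 10,000 partitions, replication factor 3,
# spread across 3 brokers (assumed here to have 8 cores each).
print(min_broker_memory_gb(10_000, 3, 3, 8))  # -> 20.0 (GB per broker)
----

With the diff's worked example (10,000 partitions, replication factor 3, three brokers), the per-partition floor dominates and reproduces the 20 GB-per-broker figure; at small partition counts the 2 GB-per-core floor would dominate instead.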
