Commit 8d10fe5 (parent 2432d8a)

Avoid distributed block storage

Fixes DOC-15345

1 file changed: 2 additions, 0 deletions

src/current/v25.4/recommended-production-settings.md
@@ -152,6 +152,8 @@ We recommend provisioning volumes with {% include {{ page.version.version }}/pro
 
 This is especially recommended if you are using local disks rather than a cloud provider's network-attached disks that are often replicated under the hood, because local disks have a greater risk of failure. You can do this for the [entire cluster]({% link {{ page.version.version }}/configure-replication-zones.md %}#edit-the-default-replication-zone) or for specific [databases]({% link {{ page.version.version }}/configure-replication-zones.md %}#create-a-replication-zone-for-a-database), [tables]({% link {{ page.version.version }}/configure-replication-zones.md %}#create-a-replication-zone-for-a-table), or [rows]({% link {{ page.version.version }}/configure-replication-zones.md %}#create-a-replication-zone-for-a-partition).
 
+- Avoid distributed storage systems. This includes distributed block storage and file systems such as Ceph, GlusterFS, DRBD, and SAN-style solutions. CockroachDB is already a distributed, replicated storage system. It manages [data distribution]({% link {{ page.version.version }}/architecture/distribution-layer.md %}), [replication and rebalancing]({% link {{ page.version.version }}/architecture/replication-layer.md %}), and fault tolerance using [Raft]({% link {{ page.version.version }}/architecture/replication-layer.md %}#raft). Putting CockroachDB's data directory on a distributed storage system adds a second, separate layer that also does replication and failure handling. These layers do not coordinate with each other. This can cause problems such as: duplicate replication and extra writes; higher and more variable latency due to network hops in the I/O path; conflicting recovery behavior when something fails; and more complex, harder-to-debug failures in general.
+
 {{site.data.alerts.callout_info}}
 Under-provisioning storage leads to node crashes when the disks fill up. Once this has happened, it is difficult to recover from. To prevent your disks from filling up, provision enough storage for your workload, monitor your disk usage, and use a [ballast file]({% link {{ page.version.version }}/cluster-setup-troubleshooting.md %}#automatic-ballast-files). For more information, see [capacity planning issues]({% link {{ page.version.version }}/cluster-setup-troubleshooting.md %}#capacity-planning-issues) and [storage issues]({% link {{ page.version.version }}/cluster-setup-troubleshooting.md %}#storage-issues).
 {{site.data.alerts.end}}
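The changed section pairs two recommendations: raise the replication factor when running on local disks, and provision against disk-full crashes with a ballast file. A minimal sketch of both, using the `CONFIGURE ZONE` statement and the `--store` flag's `ballast-size` field from the pages the diff links to; the store path, sizes, and replica count are illustrative, and other flags a real `cockroach start` needs (such as `--join` and certificates) are omitted:

```shell
# Raise the default replication factor for the entire cluster, as
# recommended when using failure-prone local disks (5 replicas
# tolerate the loss of 2 nodes):
cockroach sql --execute="ALTER RANGE default CONFIGURE ZONE USING num_replicas = 5;"

# Set the size of the automatic ballast file per store, so that a
# full disk still leaves room to recover (path and size are
# placeholders; adjust for your deployment):
cockroach start --store=path=/mnt/cockroach-data,ballast-size=1GiB
```

Deleting the ballast file frees just enough space to bring a node back up and remove data after its disk fills; without it, recovery from a full disk is much harder, as the callout notes.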
