
Commit 1e5ccf4

[DOCS] Relocate shard allocation module content (#56535) (#57450)
1 parent 110922b

15 files changed, +89 -73 lines

docs/plugins/discovery-ec2.asciidoc

Lines changed: 5 additions & 3 deletions
@@ -236,7 +236,8 @@ The `discovery-ec2` plugin can automatically set the `aws_availability_zone`
 node attribute to the availability zone of each node. This node attribute
 allows you to ensure that each shard has copies allocated redundantly across
 multiple availability zones by using the
-{ref}/allocation-awareness.html[Allocation Awareness] feature.
+{ref}/modules-cluster.html#shard-allocation-awareness[Allocation Awareness]
+feature.
 
 In order to enable the automatic definition of the `aws_availability_zone`
 attribute, set `cloud.node.auto_attributes` to `true`. For example:
@@ -327,8 +328,9 @@ labelled as `Moderate` or `Low`.
 
 * It is a good idea to distribute your nodes across multiple
 http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-regions-availability-zones.html[availability
-zones] and use {ref}/allocation-awareness.html[shard allocation awareness] to
-ensure that each shard has copies in more than one availability zone.
+zones] and use {ref}/modules-cluster.html#shard-allocation-awareness[shard
+allocation awareness] to ensure that each shard has copies in more than one
+availability zone.
 
 * Do not span a cluster across regions. {es} expects that node-to-node
 connections within a cluster are reasonably reliable and offer high bandwidth
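For context (not part of this diff): once the plugin has populated the `aws_availability_zone` node attribute via `cloud.node.auto_attributes: true`, allocation awareness for that attribute can be switched on with a dynamic cluster setting. A minimal console sketch:

[source,console]
----
PUT _cluster/settings
{
  "persistent": {
    "cluster.routing.allocation.awareness.attributes": "aws_availability_zone"
  }
}
----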

docs/reference/index-modules.asciidoc

Lines changed: 2 additions & 2 deletions
@@ -105,7 +105,7 @@ specific index module:
 for the upper bound (e.g. `0-all`). Defaults to `false` (i.e. disabled).
 Note that the auto-expanded number of replicas only takes
 <<shard-allocation-filtering,allocation filtering>> rules into account, but ignores
-any other allocation rules such as <<allocation-awareness,shard allocation awareness>>
+any other allocation rules such as <<shard-allocation-awareness,shard allocation awareness>>
 and <<allocation-total-shards,total shards per node>>, and this can lead to the
 cluster health becoming `YELLOW` if the applicable rules prevent all the replicas
 from being allocated.
@@ -180,7 +180,7 @@ specific index module:
 `index.blocks.read_only_allow_delete`::
 
 Similar to `index.blocks.read_only`, but also allows deleting the index to
-make more resources available. The <<disk-allocator,disk-based shard
+make more resources available. The <<shard-allocation-awareness,disk-based shard
 allocator>> may add and remove this block automatically.
 
 Deleting documents from an index to release resources - rather than deleting the index itself - can increase the index size over time. When `index.blocks.read_only_allow_delete` is set to `true`, deleting documents is not permitted. However, deleting the index itself releases the read-only index block and makes resources available almost immediately.
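For reference (not part of this diff): the `index.blocks.read_only_allow_delete` block can also be cleared manually by resetting the setting to `null` through the index settings API; `my-index` below is a placeholder name.

[source,console]
----
PUT /my-index/_settings
{
  "index.blocks.read_only_allow_delete": null
}
----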

docs/reference/index-modules/allocation/filtering.asciidoc

Lines changed: 2 additions & 2 deletions
@@ -3,8 +3,8 @@
 
 You can use shard allocation filters to control where {es} allocates shards of
 a particular index. These per-index filters are applied in conjunction with
-<<allocation-filtering, cluster-wide allocation filtering>> and
-<<allocation-awareness, allocation awareness>>.
+<<cluster-shard-allocation-filtering, cluster-wide allocation filtering>> and
+<<shard-allocation-awareness, allocation awareness>>.
 
 Shard allocation filters can be based on custom node attributes or the built-in
 `_name`, `_host_ip`, `_publish_ip`, `_ip`, `_host` and `_id` attributes.
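For reference (not part of this diff): a per-index filter on one of the built-in attributes looks like the following sketch; `my-index` and `node-1` are placeholder names.

[source,console]
----
PUT /my-index/_settings
{
  "index.routing.allocation.include._name": "node-1"
}
----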

docs/reference/modules.asciidoc

Lines changed: 1 addition & 7 deletions
@@ -21,13 +21,7 @@ The modules in this section are:
 <<modules-discovery,Discovery and cluster formation>>::
 
 How nodes discover each other, elect a master and form a cluster.
-
-<<modules-cluster,Shard allocation and cluster-level routing>>::
-
-Settings to control where, when, and how shards are allocated to nodes.
 --
 
 
-include::modules/discovery.asciidoc[]
-
-include::modules/cluster.asciidoc[]
+include::modules/discovery.asciidoc[]

docs/reference/modules/cluster.asciidoc

Lines changed: 14 additions & 10 deletions
@@ -1,27 +1,31 @@
 [[modules-cluster]]
-== Shard allocation and cluster-level routing
+=== Cluster-level shard allocation and routing settings
+
+_Shard allocation_ is the process of allocating shards to nodes. This can
+happen during initial recovery, replica allocation, rebalancing, or
+when nodes are added or removed.
 
 One of the main roles of the master is to decide which shards to allocate to
 which nodes, and when to move shards between nodes in order to rebalance the
 cluster.
 
 There are a number of settings available to control the shard allocation process:
 
-* <<shards-allocation>> lists the settings to control the allocation and
+* <<cluster-shard-allocation-settings>> control allocation and
 rebalancing operations.
 
-* <<disk-allocator>> explains how Elasticsearch takes available disk space
-into account, and the related settings.
+* <<disk-based-shard-allocation>> explains how Elasticsearch takes available
+disk space into account, and the related settings.
 
-* <<allocation-awareness>> and <<forced-awareness>> control how shards can
-be distributed across different racks or availability zones.
+* <<shard-allocation-awareness>> and <<forced-awareness>> control how shards
+can be distributed across different racks or availability zones.
 
-* <<allocation-filtering>> allows certain nodes or groups of nodes excluded
-from allocation so that they can be decommissioned.
+* <<cluster-shard-allocation-filtering>> allows certain nodes or groups of
+nodes excluded from allocation so that they can be decommissioned.
 
-Besides these, there are a few other <<misc-cluster,miscellaneous cluster-level settings>>.
+Besides these, there are a few other <<misc-cluster-settings,miscellaneous cluster-level settings>>.
 
-All of the settings in this section are _dynamic_ settings which can be
+All of these settings are _dynamic_ and can be
 updated on a live cluster with the
 <<cluster-update-settings,cluster-update-settings>> API.
 
docs/reference/modules/cluster/allocation_awareness.asciidoc

Lines changed: 4 additions & 6 deletions
@@ -1,5 +1,5 @@
-[[allocation-awareness]]
-=== Shard allocation awareness
+[[shard-allocation-awareness]]
+==== Shard allocation awareness
 
 You can use custom node attributes as _awareness attributes_ to enable {es}
 to take your physical hardware configuration into account when allocating shards.
@@ -29,9 +29,8 @@ allocated in each location. If the number of nodes in each location is
 unbalanced and there are a lot of replicas, replica shards might be left
 unassigned.
 
-[float]
 [[enabling-awareness]]
-==== Enabling shard allocation awareness
+===== Enabling shard allocation awareness
 
 To enable shard allocation awareness:
 
@@ -83,9 +82,8 @@ allocates the lost shard copies to nodes in `rack_one`. To prevent multiple
 copies of a particular shard from being allocated in the same location, you can
 enable forced awareness.
 
-[float]
 [[forced-awareness]]
-==== Forced awareness
+===== Forced awareness
 
 By default, if one location fails, Elasticsearch assigns all of the missing
 replica shards to the remaining locations. While you might have sufficient
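For reference (not part of this diff): awareness and forced awareness are driven by the settings sketched below; `zone`, `zone1`, and `zone2` are placeholder attribute names and values, and the same keys can also be set in `elasticsearch.yml`.

[source,console]
----
PUT _cluster/settings
{
  "persistent": {
    "cluster.routing.allocation.awareness.attributes": "zone",
    "cluster.routing.allocation.awareness.force.zone.values": "zone1,zone2"
  }
}
----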

docs/reference/modules/cluster/allocation_filtering.asciidoc

Lines changed: 4 additions & 5 deletions
@@ -1,10 +1,10 @@
-[[allocation-filtering]]
-=== Cluster-level shard allocation filtering
+[[cluster-shard-allocation-filtering]]
+==== Cluster-level shard allocation filtering
 
 You can use cluster-level shard allocation filters to control where {es}
 allocates shards from any index. These cluster wide filters are applied in
 conjunction with <<shard-allocation-filtering, per-index allocation filtering>>
-and <<allocation-awareness, allocation awareness>>.
+and <<shard-allocation-awareness, allocation awareness>>.
 
 Shard allocation filters can be based on custom node attributes or the built-in
 `_name`, `_host_ip`, `_publish_ip`, `_ip`, `_host` and `_id` attributes.
@@ -28,9 +28,8 @@ PUT _cluster/settings
 }
 --------------------------------------------------
 
-[float]
 [[cluster-routing-settings]]
-==== Cluster routing settings
+===== Cluster routing settings
 
 `cluster.routing.allocation.include.{attribute}`::
 
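For reference (not part of this diff): a typical use of these cluster-level filters is to exclude a node before decommissioning it; `10.0.0.1` below is a placeholder address.

[source,console]
----
PUT _cluster/settings
{
  "persistent": {
    "cluster.routing.allocation.exclude._ip": "10.0.0.1"
  }
}
----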

docs/reference/modules/cluster/disk_allocator.asciidoc

Lines changed: 2 additions & 2 deletions
@@ -1,5 +1,5 @@
-[[disk-allocator]]
-=== Disk-based shard allocation
+[[disk-based-shard-allocation]]
+==== Disk-based shard allocation settings
 
 {es} considers the available disk space on a node before deciding
 whether to allocate new shards to that node or to actively relocate shards away
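For reference (not part of this diff): disk-based allocation is tuned through the watermark settings, along the lines of the sketch below; the percentages shown are illustrative values only.

[source,console]
----
PUT _cluster/settings
{
  "persistent": {
    "cluster.routing.allocation.disk.watermark.low": "85%",
    "cluster.routing.allocation.disk.watermark.high": "90%",
    "cluster.routing.allocation.disk.watermark.flood_stage": "95%"
  }
}
----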

docs/reference/modules/cluster/misc.asciidoc

Lines changed: 9 additions & 10 deletions
@@ -1,8 +1,8 @@
-[[misc-cluster]]
-=== Miscellaneous cluster settings
+[[misc-cluster-settings]]
+==== Miscellaneous cluster settings
 
 [[cluster-read-only]]
-==== Metadata
+===== Metadata
 
 An entire cluster may be set to read-only with the following _dynamic_ setting:
 
@@ -23,8 +23,7 @@ API can make the cluster read-write again.
 
 
 [[cluster-shard-limit]]
-
-==== Cluster Shard Limit
+===== Cluster shard limit
 
 There is a soft limit on the number of shards in a cluster, based on the number
 of nodes in the cluster. This is intended to prevent operations which may
@@ -66,7 +65,7 @@ This allows the creation of indices during cluster creation if dedicated master
 nodes are set up before data nodes.
 
 [[user-defined-data]]
-==== User Defined Cluster Metadata
+===== User-defined cluster metadata
 
 User-defined metadata can be stored and retrieved using the Cluster Settings API.
 This can be used to store arbitrary, infrequently-changing data about the cluster
@@ -92,7 +91,7 @@ metadata will be viewable by anyone with access to the
 {es} logs.
 
 [[cluster-max-tombstones]]
-==== Index Tombstones
+===== Index tombstones
 
 The cluster state maintains index tombstones to explicitly denote indices that
 have been deleted. The number of tombstones maintained in the cluster state is
@@ -109,7 +108,7 @@ than 500 deletes. We think that is rare, thus the default. Tombstones don't take
 up much space, but we also think that a number like 50,000 is probably too big.
 
 [[cluster-logger]]
-==== Logger
+===== Logger
 
 The settings which control logging can be updated dynamically with the
 `logger.` prefix. For instance, to increase the logging level of the
@@ -127,10 +126,10 @@ PUT /_cluster/settings
 
 
 [[persistent-tasks-allocation]]
-==== Persistent Tasks Allocations
+===== Persistent tasks allocation
 
 Plugins can create a kind of tasks called persistent tasks. Those tasks are
-usually long-live tasks and are stored in the cluster state, allowing the
+usually long-lived tasks and are stored in the cluster state, allowing the
 tasks to be revived after a full cluster restart.
 
 Every time a persistent task is created, the master node takes care of
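For reference (not part of this diff): user-defined cluster metadata is stored under the `cluster.metadata.` prefix via the cluster settings API; the key and value below are placeholders.

[source,console]
----
PUT /_cluster/settings
{
  "persistent": {
    "cluster.metadata.administrator": "sysadmin@example.com"
  }
}
----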

docs/reference/modules/cluster/shards_allocation.asciidoc

Lines changed: 9 additions & 15 deletions
@@ -1,15 +1,9 @@
-[[shards-allocation]]
-=== Cluster level shard allocation
-
-Shard allocation is the process of allocating shards to nodes. This can
-happen during initial recovery, replica allocation, rebalancing, or
-when nodes are added or removed.
-
-[float]
-=== Shard allocation settings
+[[cluster-shard-allocation-settings]]
+==== Cluster-level shard allocation settings
 
 The following _dynamic_ settings may be used to control shard allocation and recovery:
 
+[[cluster-routing-allocation-enable]]
 `cluster.routing.allocation.enable`::
 +
 --
@@ -58,8 +52,8 @@ one of the active allocation ids in the cluster state.
 Defaults to `false`, meaning that no check is performed by default. This
 setting only applies if multiple nodes are started on the same machine.
 
-[float]
-=== Shard rebalancing settings
+[[shards-rebalancing-settings]]
+==== Shard rebalancing settings
 
 The following _dynamic_ settings may be used to control the rebalancing of
 shards across the cluster:
@@ -94,11 +88,11 @@ Specify when shard rebalancing is allowed:
 allowed cluster wide. Defaults to `2`. Note that this setting
 only controls the number of concurrent shard relocations due
 to imbalances in the cluster. This setting does not limit shard
-relocations due to <<allocation-filtering,allocation filtering>>
-or <<forced-awareness,forced awareness>>.
+relocations due to <<cluster-shard-allocation-filtering,allocation
+filtering>> or <<forced-awareness,forced awareness>>.
 
-[float]
-=== Shard balancing heuristics
+[[shards-rebalancing-heuristics]]
+==== Shard balancing heuristics settings
 
 The following settings are used together to determine where to place each
 shard. The cluster is balanced when no allowed rebalancing operation can bring the weight
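For reference (not part of this diff): the rebalancing settings documented in this file are applied dynamically, for example:

[source,console]
----
PUT _cluster/settings
{
  "persistent": {
    "cluster.routing.rebalance.enable": "replicas",
    "cluster.routing.allocation.cluster_concurrent_rebalance": 2
  }
}
----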

0 commit comments
