diff --git a/docs/plugins/discovery-ec2.asciidoc b/docs/plugins/discovery-ec2.asciidoc
index a3b0c6812ac7f..a3190cff9224b 100644
--- a/docs/plugins/discovery-ec2.asciidoc
+++ b/docs/plugins/discovery-ec2.asciidoc
@@ -236,7 +236,8 @@ The `discovery-ec2` plugin can automatically set the `aws_availability_zone`
 node attribute to the availability zone of each node. This node attribute
 allows you to ensure that each shard has copies allocated redundantly across
 multiple availability zones by using the
-{ref}/allocation-awareness.html[Allocation Awareness] feature.
+{ref}/modules-cluster.html#shard-allocation-awareness[Allocation Awareness]
+feature.
 
 In order to enable the automatic definition of the `aws_availability_zone`
 attribute, set `cloud.node.auto_attributes` to `true`. For example:
@@ -327,8 +328,9 @@ labelled as `Moderate` or `Low`.
 
 * It is a good idea to distribute your nodes across multiple
 http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-regions-availability-zones.html[availability
-zones] and use {ref}/allocation-awareness.html[shard allocation awareness] to
-ensure that each shard has copies in more than one availability zone.
+zones] and use {ref}/modules-cluster.html#shard-allocation-awareness[shard
+allocation awareness] to ensure that each shard has copies in more than one
+availability zone.
 
 * Do not span a cluster across regions. {es} expects that node-to-node
 connections within a cluster are reasonably reliable and offer high bandwidth
diff --git a/docs/reference/index-modules.asciidoc b/docs/reference/index-modules.asciidoc
index 56f8f9cc610a4..f9c0f7e8615ab 100644
--- a/docs/reference/index-modules.asciidoc
+++ b/docs/reference/index-modules.asciidoc
@@ -105,7 +105,7 @@ specific index module:
     for the upper bound (e.g. `0-all`). Defaults to `false` (i.e. disabled).
     Note that the auto-expanded number of replicas only takes <> rules into
     account, but ignores
-    any other allocation rules such as <>
+    any other allocation rules such as <>
     and <>, and this can lead to the cluster health becoming `YELLOW` if the
     applicable rules prevent all the replicas from being allocated.
@@ -180,7 +180,7 @@ specific index module:
 `index.blocks.read_only_allow_delete`::
 
     Similar to `index.blocks.read_only`, but also allows deleting the index to
-    make more resources available. The <> may add and remove this block
+    make more resources available. The <> may add and remove this block
     automatically. Deleting documents from an index to release resources -
     rather than deleting the index itself - can increase the index size over
     time. When `index.blocks.read_only_allow_delete` is set to `true`, deleting
     documents is not permitted. However, deleting the index itself releases the
     read-only index block and makes resources available almost immediately.
diff --git a/docs/reference/index-modules/allocation/filtering.asciidoc b/docs/reference/index-modules/allocation/filtering.asciidoc
index f5a4ce31d38fd..12ae0e64ebaa9 100644
--- a/docs/reference/index-modules/allocation/filtering.asciidoc
+++ b/docs/reference/index-modules/allocation/filtering.asciidoc
@@ -3,8 +3,8 @@
 
 You can use shard allocation filters to control where {es} allocates shards of
 a particular index. These per-index filters are applied in conjunction with
-<> and
-<>.
+<> and
+<>.
 
 Shard allocation filters can be based on custom node attributes or the built-in
 `_name`, `_host_ip`, `_publish_ip`, `_ip`, `_host` and `_id` attributes.
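Such a per-index filter is applied with a dynamic index settings update. A minimal sketch, assuming an index named `test` and a placeholder address range for the built-in `_ip` attribute:

[source,console]
--------------------------------------------------
PUT test/_settings
{
  "index.routing.allocation.include._ip": "192.168.2.*"
}
--------------------------------------------------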
diff --git a/docs/reference/modules.asciidoc b/docs/reference/modules.asciidoc
index 2ab54762e6df2..1feafcbe3d30b 100644
--- a/docs/reference/modules.asciidoc
+++ b/docs/reference/modules.asciidoc
@@ -21,13 +21,7 @@ The modules in this section are:
 <>::
 
     How nodes discover each other, elect a master and form a cluster.
-
-<>::
-
-    Settings to control where, when, and how shards are allocated to nodes.
 --
 
-include::modules/discovery.asciidoc[]
-
-include::modules/cluster.asciidoc[]
+include::modules/discovery.asciidoc[]
\ No newline at end of file
diff --git a/docs/reference/modules/cluster.asciidoc b/docs/reference/modules/cluster.asciidoc
index 810ed7c4a34b4..ba0ea765c608f 100644
--- a/docs/reference/modules/cluster.asciidoc
+++ b/docs/reference/modules/cluster.asciidoc
@@ -1,5 +1,9 @@
 [[modules-cluster]]
-== Shard allocation and cluster-level routing
+=== Cluster-level shard allocation and routing settings
+
+_Shard allocation_ is the process of allocating shards to nodes. This can
+happen during initial recovery, replica allocation, rebalancing, or
+when nodes are added or removed.
 
 One of the main roles of the master is to decide which shards to allocate to
 which nodes, and when to move shards between nodes in order to rebalance the
@@ -7,21 +11,21 @@ cluster.
 
 There are a number of settings available to control the shard allocation
 process:
 
-* <> lists the settings to control the allocation and
+* <> control allocation and
   rebalancing operations.
 
-* <> explains how Elasticsearch takes available disk space
-  into account, and the related settings.
+* <> explains how Elasticsearch takes available
+  disk space into account, and the related settings.
 
-* <> and <> control how shards can
-  be distributed across different racks or availability zones.
+* <> and <> control how shards
+  can be distributed across different racks or availability zones.
 
-* <> allows certain nodes or groups of nodes excluded
-  from allocation so that they can be decommissioned.
+* <> allows certain nodes or groups of
+  nodes to be excluded from allocation so that they can be decommissioned.
 
-Besides these, there are a few other <>.
+Besides these, there are a few other <>.
 
-All of the settings in this section are _dynamic_ settings which can be
+All of these settings are _dynamic_ and can be
 updated on a live cluster with the <> API.
diff --git a/docs/reference/modules/cluster/allocation_awareness.asciidoc b/docs/reference/modules/cluster/allocation_awareness.asciidoc
index 1643a5f9917f8..c03e7a9c6e500 100644
--- a/docs/reference/modules/cluster/allocation_awareness.asciidoc
+++ b/docs/reference/modules/cluster/allocation_awareness.asciidoc
@@ -1,5 +1,5 @@
-[[allocation-awareness]]
-=== Shard allocation awareness
+[[shard-allocation-awareness]]
+==== Shard allocation awareness
 
 You can use custom node attributes as _awareness attributes_ to enable {es}
 to take your physical hardware configuration into account when allocating shards.
@@ -29,9 +29,8 @@ allocated in each location.
 If the number of nodes in each location is unbalanced and there are a lot of
 replicas, replica shards might be left unassigned.
 
-[float]
 [[enabling-awareness]]
-==== Enabling shard allocation awareness
+===== Enabling shard allocation awareness
 
 To enable shard allocation awareness:
@@ -83,9 +82,8 @@ allocates the lost shard copies to nodes in `rack_one`.
 To prevent multiple copies of a particular shard from being allocated in the
 same location, you can enable forced awareness.
 
-[float]
 [[forced-awareness]]
-==== Forced awareness
+===== Forced awareness
 
 By default, if one location fails, Elasticsearch assigns all of the missing
 replica shards to the remaining locations. While you might have sufficient
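As an illustration of the awareness settings, the awareness attribute and its forced values can also be applied through the cluster settings API. A minimal sketch, assuming each node already defines a custom `zone` attribute in its own configuration and that only `zone1` and `zone2` exist:

[source,console]
--------------------------------------------------
PUT _cluster/settings
{
  "persistent": {
    "cluster.routing.allocation.awareness.attributes": "zone",
    "cluster.routing.allocation.awareness.force.zone.values": "zone1,zone2"
  }
}
--------------------------------------------------

With forced values set, replicas that would otherwise pile up in the surviving zone are left unassigned instead of being over-allocated there.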
diff --git a/docs/reference/modules/cluster/allocation_filtering.asciidoc b/docs/reference/modules/cluster/allocation_filtering.asciidoc
index 51a66a0e4cf0d..a7ca63d70c695 100644
--- a/docs/reference/modules/cluster/allocation_filtering.asciidoc
+++ b/docs/reference/modules/cluster/allocation_filtering.asciidoc
@@ -1,10 +1,10 @@
-[[allocation-filtering]]
-=== Cluster-level shard allocation filtering
+[[cluster-shard-allocation-filtering]]
+==== Cluster-level shard allocation filtering
 
 You can use cluster-level shard allocation filters to control where {es}
 allocates shards from any index. These cluster wide filters are applied in
 conjunction with <>
-and <>.
+and <>.
 
 Shard allocation filters can be based on custom node attributes or the built-in
 `_name`, `_host_ip`, `_publish_ip`, `_ip`, `_host` and `_id` attributes.
@@ -28,9 +28,8 @@ PUT _cluster/settings
 }
 --------------------------------------------------
 
-[float]
 [[cluster-routing-settings]]
-==== Cluster routing settings
+===== Cluster routing settings
 
 `cluster.routing.allocation.include.{attribute}`::
diff --git a/docs/reference/modules/cluster/disk_allocator.asciidoc b/docs/reference/modules/cluster/disk_allocator.asciidoc
index 3c327258b9143..b11a0d1637d11 100644
--- a/docs/reference/modules/cluster/disk_allocator.asciidoc
+++ b/docs/reference/modules/cluster/disk_allocator.asciidoc
@@ -1,5 +1,5 @@
-[[disk-allocator]]
-=== Disk-based shard allocation
+[[disk-based-shard-allocation]]
+==== Disk-based shard allocation settings
 
 {es} considers the available disk space on a node before deciding
 whether to allocate new shards to that node or to actively relocate shards away
diff --git a/docs/reference/modules/cluster/misc.asciidoc b/docs/reference/modules/cluster/misc.asciidoc
index 32803bf12bc31..6986254fa1c6b 100644
--- a/docs/reference/modules/cluster/misc.asciidoc
+++ b/docs/reference/modules/cluster/misc.asciidoc
@@ -1,8 +1,8 @@
-[[misc-cluster]]
-=== Miscellaneous cluster settings
+[[misc-cluster-settings]]
+==== Miscellaneous cluster settings
 
 [[cluster-read-only]]
-==== Metadata
+===== Metadata
 
 An entire cluster may be set to read-only with the following _dynamic_ setting:
@@ -23,8 +23,7 @@ API can make the cluster read-write again.
 
 [[cluster-shard-limit]]
-
-==== Cluster Shard Limit
+===== Cluster shard limit
 
 There is a soft limit on the number of shards in a cluster, based on the number
 of nodes in the cluster. This is intended to prevent operations which may
@@ -66,7 +65,7 @@ This allows the creation of indices during cluster creation if dedicated
 master nodes are set up before data nodes.
 
 [[user-defined-data]]
-==== User Defined Cluster Metadata
+===== User-defined cluster metadata
 
 User-defined metadata can be stored and retrieved using the Cluster Settings
 API. This can be used to store arbitrary, infrequently-changing data about the cluster
@@ -92,7 +91,7 @@ metadata will be viewable by anyone with access to the {es} logs.
 
 [[cluster-max-tombstones]]
-==== Index Tombstones
+===== Index tombstones
 
 The cluster state maintains index tombstones to explicitly denote indices that
 have been deleted. The number of tombstones maintained in the cluster state is
@@ -109,7 +108,7 @@ than 500 deletes.
 We think that is rare, thus the default.
 Tombstones don't take up much space, but we also think that a number like
 50,000 is probably too big.
 
 [[cluster-logger]]
-==== Logger
+===== Logger
 
 The settings which control logging can be updated dynamically with the
 `logger.` prefix. For instance, to increase the logging level of the
@@ -127,10 +126,10 @@ PUT /_cluster/settings
 
 
 [[persistent-tasks-allocation]]
-==== Persistent Tasks Allocations
+===== Persistent tasks allocation
 
 Plugins can create a kind of tasks called persistent tasks. Those tasks are
-usually long-live tasks and are stored in the cluster state, allowing the
+usually long-lived tasks and are stored in the cluster state, allowing the
 tasks to be revived after a full cluster restart.
 
 Every time a persistent task is created, the master node takes care of
diff --git a/docs/reference/modules/cluster/shards_allocation.asciidoc b/docs/reference/modules/cluster/shards_allocation.asciidoc
index 7513142cb86ae..1ea20dbd2e114 100644
--- a/docs/reference/modules/cluster/shards_allocation.asciidoc
+++ b/docs/reference/modules/cluster/shards_allocation.asciidoc
@@ -1,15 +1,9 @@
-[[shards-allocation]]
-=== Cluster level shard allocation
-
-Shard allocation is the process of allocating shards to nodes. This can
-happen during initial recovery, replica allocation, rebalancing, or
-when nodes are added or removed.
-
-[float]
-=== Shard allocation settings
+[[cluster-shard-allocation-settings]]
+==== Cluster-level shard allocation settings
 
 The following _dynamic_ settings may be used to control shard allocation and recovery:
 
+[[cluster-routing-allocation-enable]]
 `cluster.routing.allocation.enable`::
 +
 --
@@ -58,8 +52,8 @@ one of the active allocation ids in the cluster state. Defaults to `false`,
 meaning that no check is performed by default. This setting only applies if
 multiple nodes are started on the same machine.
 
-[float]
-=== Shard rebalancing settings
+[[shards-rebalancing-settings]]
+==== Shard rebalancing settings
 
 The following _dynamic_ settings may be used to control the rebalancing of
 shards across the cluster:
@@ -94,11 +88,11 @@ Specify when shard rebalancing is allowed:
     allowed cluster wide. Defaults to `2`. Note that this setting only controls
     the number of concurrent shard relocations due to imbalances in the cluster.
     This setting does not limit shard
-    relocations due to <>
-    or <>.
+    relocations due to <> or <>.
 
-[float]
-=== Shard balancing heuristics
+[[shards-rebalancing-heuristics]]
+==== Shard balancing heuristics settings
 
 The following settings are used together to determine where to place each
 shard. The cluster is balanced when no allowed rebalancing operation can bring the weight
diff --git a/docs/reference/monitoring/exporters.asciidoc b/docs/reference/monitoring/exporters.asciidoc
index e1a27641b6e75..e64997b1a8a75 100644
--- a/docs/reference/monitoring/exporters.asciidoc
+++ b/docs/reference/monitoring/exporters.asciidoc
@@ -74,8 +74,7 @@ feature is triggered, it makes all indices (including monitoring indices)
 read-only until the issue is fixed and a user manually makes the index
 writeable again. While an active monitoring index is read-only, it will
 naturally fail to write (index) new data and will continuously log errors that indicate the write
-failure. For more information, see
-{ref}/disk-allocator.html[Disk-based Shard Allocation].
+failure. For more information, see <>.
 
 [float]
 [[es-monitoring-default-exporter]]
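Related to the read-only block described above: once disk space has been freed, the block can be removed by resetting the index setting. A minimal sketch that targets all indices (narrow the target as needed):

[source,console]
--------------------------------------------------
PUT _all/_settings
{
  "index.blocks.read_only_allow_delete": null
}
--------------------------------------------------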
diff --git a/docs/reference/redirects.asciidoc b/docs/reference/redirects.asciidoc
index 94a36a7f8a551..5070903b4d8ca 100644
--- a/docs/reference/redirects.asciidoc
+++ b/docs/reference/redirects.asciidoc
@@ -727,6 +727,31 @@ See <>.
 
 See <>.
 
+[role="exclude",id="shards-allocation"]
+=== Cluster-level shard allocation
+
+See <>.
+
+[role="exclude",id="disk-allocator"]
+=== Disk-based shard allocation
+
+See <>.
+
+[role="exclude",id="allocation-awareness"]
+=== Shard allocation awareness
+
+See <>.
+
+[role="exclude",id="allocation-filtering"]
+=== Cluster-level shard allocation filtering
+
+See <>.
+
+[role="exclude",id="misc-cluster"]
+=== Miscellaneous cluster settings
+
+See <>.
+
 [role="exclude",id="_timing"]
 === Timing
diff --git a/docs/reference/search/request/preference.asciidoc b/docs/reference/search/request/preference.asciidoc
index 8462748de4c5a..7c64bf8d2ce19 100644
--- a/docs/reference/search/request/preference.asciidoc
+++ b/docs/reference/search/request/preference.asciidoc
@@ -3,7 +3,7 @@
 
 Controls a `preference` of the shard copies on which to execute the search. By
 default, Elasticsearch selects from the available shard copies in an
-unspecified order, taking the <> and
+unspecified order, taking the <> and
 <> configuration into account. However, it may sometimes be
 desirable to try and route certain searches to certain sets of shard copies.
diff --git a/docs/reference/setup.asciidoc b/docs/reference/setup.asciidoc
index ac712ec449ca3..87ed8a01c5ca4 100644
--- a/docs/reference/setup.asciidoc
+++ b/docs/reference/setup.asciidoc
@@ -45,13 +45,13 @@ include::setup/jvm-options.asciidoc[]
 
 include::setup/secure-settings.asciidoc[]
 
-include::settings/ccr-settings.asciidoc[]
+include::settings/audit-settings.asciidoc[]
 
 include::modules/indices/circuit_breaker.asciidoc[]
 
-include::modules/indices/recovery.asciidoc[]
+include::modules/cluster.asciidoc[]
 
-include::modules/indices/indexing_buffer.asciidoc[]
+include::settings/ccr-settings.asciidoc[]
 
 include::modules/indices/fielddata.asciidoc[]
 
@@ -59,6 +59,10 @@ include::modules/http.asciidoc[]
 
 include::settings/ilm-settings.asciidoc[]
 
+include::modules/indices/recovery.asciidoc[]
+
+include::modules/indices/indexing_buffer.asciidoc[]
+
 include::settings/license-settings.asciidoc[]
 
 include::modules/gateway.asciidoc[]
@@ -75,13 +79,11 @@ include::modules/network.asciidoc[]
 
 include::modules/indices/query_cache.asciidoc[]
 
-include::modules/indices/request_cache.asciidoc[]
-
 include::modules/indices/search-settings.asciidoc[]
 
 include::settings/security-settings.asciidoc[]
 
-include::settings/audit-settings.asciidoc[]
+include::modules/indices/request_cache.asciidoc[]
 
 include::settings/slm-settings.asciidoc[]
diff --git a/docs/reference/upgrade/disable-shard-alloc.asciidoc b/docs/reference/upgrade/disable-shard-alloc.asciidoc
index 8f238a2c2c6a5..56461fa999720 100644
--- a/docs/reference/upgrade/disable-shard-alloc.asciidoc
+++ b/docs/reference/upgrade/disable-shard-alloc.asciidoc
@@ -4,8 +4,8 @@ When you shut down a node, the allocation process waits for
 starting to replicate the shards on that node to other nodes in the cluster,
 which can involve a lot of I/O. Since the node is shortly going to be
 restarted, this I/O is unnecessary. You can avoid racing the clock by
-<> of replicas before shutting down
-the node:
+<> of replicas before
+shutting down the node:
 
 [source,console]
--------------------------------------------------
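# Illustrative sketch (assumed content; the hunk above ends before the original
# request body). "primaries" allows allocation of primary shards only, so the
# replicas on the stopped node are not re-allocated elsewhere.
PUT _cluster/settings
{
  "persistent": {
    "cluster.routing.allocation.enable": "primaries"
  }
}
--------------------------------------------------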