Merged
12 changes: 0 additions & 12 deletions docs/reference/administering.asciidoc

This file was deleted.

4 changes: 2 additions & 2 deletions docs/reference/ccr/auto-follow.asciidoc
@@ -1,7 +1,7 @@
[role="xpack"]
[testenv="platinum"]
[[ccr-auto-follow]]
=== Automatically following indices
== Automatically following indices

In time series use cases where you want to follow new indices that are
periodically created (such as daily Beats indices), manually configuring follower
@@ -10,7 +10,7 @@ functionality in {ccr} is aimed at easing this burden. With the auto-follow
functionality, you can specify that new indices in a remote cluster that have a
name that matches a pattern are automatically followed.

==== Managing auto-follow patterns
=== Managing auto-follow patterns

You can add a new auto-follow pattern configuration with the
{ref}/ccr-put-auto-follow-pattern.html[create auto-follow pattern API]. When you create
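For context, the create auto-follow pattern API referenced in this hunk takes a single request. A minimal sketch, assuming a remote-cluster alias named `leader` and a hypothetical pattern name `beats`:

```console
PUT /_ccr/auto_follow/beats
{
  "remote_cluster" : "leader",
  "leader_index_patterns" : ["metricbeat-*", "packetbeat-*"],
  "follow_index_pattern" : "{{leader_index}}-copy"
}
```

New indices created on `leader` whose names match either pattern are then followed automatically by the local cluster.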
4 changes: 2 additions & 2 deletions docs/reference/ccr/getting-started.asciidoc
@@ -20,7 +20,7 @@ This getting-started guide for {ccr} shows you how to:

. Obtain a license that includes the {ccr} features. See
https://www.elastic.co/subscriptions[subscriptions] and
<<license-management>>.
{stack-ov}/license-management.html[License management].

. If the Elastic {security-features} are enabled in your local and remote
clusters, you need a user that has appropriate authority to perform the steps
@@ -34,7 +34,7 @@ to control which users have authority to manage {ccr}.
By default, you can perform all of the steps in this tutorial by
using the built-in `elastic` user. However, a password must be set for this user
before the user can do anything. For information about how to set that password,
see <<security-getting-started>>.
see {stack-ov}/security-getting-started.html[Tutorial: Getting started with security].

If you are performing these steps in a production environment, take extra care
because the `elastic` user has the `superuser` role and you could inadvertently
4 changes: 0 additions & 4 deletions docs/reference/ccr/index.asciidoc
@@ -3,9 +3,6 @@
[[xpack-ccr]]
= {ccr-cap}

[partintro]
--

The {ccr} (CCR) feature enables replication of indices in remote clusters to a
local cluster. This functionality can be used in some common production use
cases:
@@ -22,7 +19,6 @@ This guide provides an overview of {ccr}:
* <<ccr-getting-started>>
* <<ccr-upgrading>>

--

include::overview.asciidoc[]
include::requirements.asciidoc[]
8 changes: 4 additions & 4 deletions docs/reference/ccr/requirements.asciidoc
@@ -1,7 +1,7 @@
[role="xpack"]
[testenv="platinum"]
[[ccr-requirements]]
=== Requirements for leader indices
== Requirements for leader indices

{ccr-cap} works by replaying the history of individual write
operations that were performed on the shards of the leader index. This means that the
@@ -24,7 +24,7 @@ enabled.

[float]
[[ccr-overview-soft-deletes]]
==== Soft delete settings
=== Soft delete settings

`index.soft_deletes.enabled`::

@@ -44,7 +44,7 @@ For more information about index settings, see {ref}/index-modules.html[Index modules].

[float]
[[ccr-overview-beats]]
==== Setting soft deletes on indices created by APM Server or Beats
=== Setting soft deletes on indices created by APM Server or Beats

If you want to replicate indices created by APM Server or Beats, and are
allowing APM Server or Beats to manage index templates, you need to configure
@@ -65,7 +65,7 @@ index template.

[float]
[[ccr-overview-logstash]]
==== Setting soft deletes on indices created by Logstash
=== Setting soft deletes on indices created by Logstash

If you want to replicate indices created by Logstash, and are using Logstash to
manage index templates, you need to configure soft deletes on a custom Logstash
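As the hunks above note, soft deletes must be configured before an index is created, which for Beats- or Logstash-managed indices means a custom index template. A minimal sketch using the legacy template API of this era (the template name, pattern, and `order` value are hypothetical):

```console
PUT /_template/soft-deletes-override
{
  "index_patterns" : ["metricbeat-*"],
  "order" : 100,
  "settings" : {
    "index.soft_deletes.enabled" : true
  }
}
```

The high `order` lets this template override the setting from the Beats-managed template for matching indices.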
35 changes: 35 additions & 0 deletions docs/reference/high-availability.asciidoc
@@ -0,0 +1,35 @@
[[high-availability]]
= Set up a cluster for high availability

[partintro]
--
As with any software that stores data,
it is important to routinely back up your data.
{es}'s <<glossary-replica-shard,replica shards>> provide high availability
during runtime;
they enable you to tolerate sporadic node loss
without an interruption of service.

However, replica shards do not protect an {es} cluster
from catastrophic failure.
You need a backup of your cluster—
a copy in case something goes wrong.


{es} offers two features to support high availability for a cluster:

* <<backup-cluster,Snapshot and restore>>,
which you can use to back up individual indices or entire clusters.
You can automatically store these backups in a repository on a shared filesystem.

* <<xpack-ccr,Cross-cluster replication (CCR)>>,
which you can use to copy indices in remote clusters to a local cluster.
You can use {ccr} to recover from the failure of a primary cluster
or serve data locally based on geo-proximity.
--

include::high-availability/backup-cluster.asciidoc[]

:leveloffset: +1
include::ccr/index.asciidoc[]
:leveloffset: -1
@@ -4,14 +4,6 @@
<titleabbrev>Back up the data</titleabbrev>
++++

As with any software that stores data, it is important to routinely back up your
data. {es} replicas provide high availability during runtime; they enable you to
tolerate sporadic node loss without an interruption of service.

Replicas do not provide protection from catastrophic failure, however. For that,
you need a real backup of your cluster—a complete copy in case something goes
wrong.

To back up your cluster's data, you can use the <<modules-snapshots,snapshot API>>.

include::{es-repo-dir}/modules/snapshots.asciidoc[tag=snapshot-intro]
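As a concrete illustration of the snapshot API this file now points to, a shared-filesystem repository is registered and a snapshot taken in two requests. A sketch with hypothetical names; the `location` must fall under a path listed in the `path.repo` setting on every node:

```console
PUT /_snapshot/my_backup
{
  "type" : "fs",
  "settings" : {
    "location" : "/mnt/backups/my_backup"
  }
}

PUT /_snapshot/my_backup/snapshot_1?wait_for_completion=true
```

The first request only has to be issued once; subsequent snapshots into the same repository are incremental.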
2 changes: 1 addition & 1 deletion docs/reference/index.asciidoc
Expand Up @@ -60,7 +60,7 @@ include::monitoring/index.asciidoc[]

include::frozen-indices.asciidoc[]

include::administering.asciidoc[]
include::high-availability.asciidoc[]

include::data-rollup-transform.asciidoc[]

3 changes: 3 additions & 0 deletions docs/reference/redirects.asciidoc
@@ -721,3 +721,6 @@ See <<monitoring-overview>>.

See <<monitor-elasticsearch-cluster>>.

[role="exclude",id="administer-elasticsearch"]
=== Administering {es}
See <<high-availability>>.