diff --git a/.openpublishing.redirection.json b/.openpublishing.redirection.json
index ea52697e48e..bb65e9511a1 100644
--- a/.openpublishing.redirection.json
+++ b/.openpublishing.redirection.json
@@ -11,9 +11,45 @@
"redirect_document_id": false
},
{
- "source_path": "docs/reference-archtiectures/managed-web-app/index.md",
- "redirect_url": "/azure/architecture/reference-archtiectures/app-service-web-app",
+ "source_path": "docs/reference-architectures/managed-web-app/index.md",
+ "redirect_url": "/azure/architecture/reference-architectures/app-service-web-app",
"redirect_document_id": false
- }
+ },
+ {
+ "source_path": "docs/elasticsearch/automated-performance-tests.md",
+ "redirect_url": "/azure/architecture"
+ },
+ {
+ "source_path": "docs/elasticsearch/automated-resilience-tests.md",
+ "redirect_url": "/azure/architecture"
+ },
+ {
+ "source_path": "docs/elasticsearch/data-aggregation-and-query-performance.md",
+ "redirect_url": "/azure/architecture"
+ },
+ {
+ "source_path": "docs/elasticsearch/data-ingestion-performance.md",
+ "redirect_url": "/azure/architecture"
+ },
+ {
+ "source_path": "docs/elasticsearch/index.md",
+ "redirect_url": "/azure/architecture"
+ },
+ {
+ "source_path": "docs/elasticsearch/jmeter-junit-sampler.md",
+ "redirect_url": "/azure/architecture"
+ },
+ {
+ "source_path": "docs/elasticsearch/jmeter-test-plan.md",
+ "redirect_url": "/azure/architecture"
+ },
+ {
+ "source_path": "docs/elasticsearch/performance-testing-environment.md",
+ "redirect_url": "/azure/architecture"
+ },
+ {
+ "source_path": "docs/elasticsearch/resilience-and-recover.md",
+ "redirect_url": "/azure/architecture"
+ }
]
}
\ No newline at end of file
diff --git a/docs/_bread/toc.yml b/docs/_bread/toc.yml
index 438b45091ef..7e03d4a8bec 100644
--- a/docs/_bread/toc.yml
+++ b/docs/_bread/toc.yml
@@ -18,9 +18,6 @@
- name: Design Review Checklists
tocHref: /azure/architecture/checklist
topicHref: /azure/architecture/checklist
- - name: Elasticsearch
- tocHref: /azure/architecture/elasticsearch
- topicHref: /azure/architecture/elasticsearch
- name: Manage Identity in Multitenant Applications
tocHref: /azure/architecture/multitenant-identity
    topicHref: /azure/architecture/multitenant-identity
diff --git a/docs/_images/elasticsearch.svg b/docs/_images/elasticsearch.svg
deleted file mode 100644
index f8bf2d76375..00000000000
--- a/docs/_images/elasticsearch.svg
+++ /dev/null
@@ -1,12 +0,0 @@
-
-
\ No newline at end of file
diff --git a/docs/elasticsearch/automated-performance-tests.md b/docs/elasticsearch/automated-performance-tests.md
deleted file mode 100644
index 78cc6f2d2da..00000000000
--- a/docs/elasticsearch/automated-performance-tests.md
+++ /dev/null
@@ -1,104 +0,0 @@
----
-title: Run the automated Elasticsearch performance tests
-description: Description of how you can run the performance tests in your own environment.
-author: dragon119
-ms.date: 09/22/2016
-ms.custom: elasticsearch
-
-pnp.series.title: Elasticsearch on Azure
-pnp.series.next: automated-resilience-tests
-pnp.series.prev: automated-resilience-tests
----
-# Run automated performance tests
-[!INCLUDE [header](../_includes/header.md)]
-
-The documents [Tuning data ingestion performance for Elasticsearch on Azure] and [Tuning data aggregation and query performance for Elasticsearch on Azure] describe a number of performance tests that were run against a sample Elasticsearch cluster.
-
-These tests were scripted to enable them to be run in an automated manner. This document describes how you can repeat the tests in your own environment.
-
-## Prerequisites
-The automated tests require the following items:
-
-* An Elasticsearch cluster.
-* A JMeter environment set up as described by the document [Creating a performance testing environment for Elasticsearch on Azure].
-* [Python 3.5.1](https://www.python.org/downloads/release/python-351/) installed on the JMeter master VM.
-
-## How the tests work
-The tests are run using JMeter. A JMeter master server loads a test plan and passes it to a set of JMeter subordinate servers that actually run the tests. The JMeter master server coordinates the JMeter subordinate servers and accumulates the results.
-
-The following test plans are provided:
-
-* [elasticsearchautotestplan3nodes.jmx](https://github.com/mspnp/elasticsearch/blob/master/ingestion-and-query-tests/templates/elasticsearchautotestplan3nodes.jmx). Runs the ingestion test over a 3-node cluster.
-* [elasticsearchautotestplan6nodes.jmx](https://github.com/mspnp/elasticsearch/blob/master/ingestion-and-query-tests/templates/elasticsearchautotestplan6nodes.jmx). Runs the ingestion test over a 6-node cluster.
-* [elasticsearchautotestplan6qnodes.jmx](https://github.com/mspnp/elasticsearch/blob/master/ingestion-and-query-tests/templates/elasticsearchautotestplan6qnodes.jmx). Runs the ingestion and query test over a 6-node cluster.
-* [elasticsearchautotestplan6nodesqueryonly.jmx](https://github.com/mspnp/elasticsearch/blob/master/ingestion-and-query-tests/templates/elasticsearchautotestplan6nodesqueryonly.jmx). Runs the query-only test over a 6-node cluster.
-
-You can use these test plans as a basis for your own scenarios if you need fewer or more nodes.
-
-The test plans use a JUnit request sampler to generate and upload the test data. The JMeter test plan creates and runs this sampler, and monitors each of the Elasticsearch nodes for performance data.
-
-## Building and deploying the JUnit JAR and dependencies
-Before running the performance tests you should download, compile, and deploy the JUnit tests located under the performance/junitcode folder. These tests are referenced by the JMeter test plan. For more information, see the procedure "Importing an existing JUnit test project into Eclipse" in the document [Deploying a JMeter JUnit sampler for testing Elasticsearch performance].
-
-There are two versions of the JUnit tests:
-
-* [Elasticsearch1.73](https://github.com/mspnp/elasticsearch/tree/master/ingestion-and-query-tests/junitcode/elasticsearch1.73). Use this code for performing the ingestion tests. These tests use Elasticsearch 1.7.3.
-* [Elasticsearch2](https://github.com/mspnp/elasticsearch/tree/master/ingestion-and-query-tests/junitcode/elasticsearch2). Use this code for performing the query tests. These tests use Elasticsearch 2.1 and later.
-
-Copy the appropriate Java archive (JAR) file along with the rest of the dependencies to your JMeter machines. The process is described in [Deploying a JMeter JUnit sampler for testing Elasticsearch performance][Deploying a JMeter JUnit sampler for testing Elasticsearch performance].
-
-> **Important** After deploying a JUnit test, use JMeter to load and configure the test plans that reference this JUnit test and ensure that the BulkInsertLarge thread group references the correct JAR file, JUnit class name, and test method:
->
-> ![JUnit Request UI](./images/performance-tests-image1.png)
->
-> Save the updated test plans before running the tests.
->
->
-
-## Creating the test indexes
-Each test performs ingestion and/or queries against a single index specified when the test is run. You should create the index using the schemas described in the appendices to the documents [Tuning data ingestion performance for Elasticsearch on Azure] and [Tuning data aggregation and query performance for Elasticsearch on Azure] and configure them according to your test scenario (doc values enabled or disabled, multiple replicas, and so on). Note that the test plans assume that the index contains a single type named *ctip*.
-
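As a sketch, an index for a test run might be created with a request like the following. The index name, shard counts, and the empty *ctip* mapping are placeholders; the real field mappings should come from the schemas in the appendices referenced above:

```http
PUT /idx1
{
  "settings": {
    "number_of_shards": 6,
    "number_of_replicas": 1
  },
  "mappings": {
    "ctip": {
      "properties": {}
    }
  }
}
```
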
-## Configuring the test script parameters
-Copy the following test script parameter files to the JMeter server machine:
-
-* [run.properties](https://github.com/mspnp/elasticsearch/blob/master/ingestion-and-query-tests/run.properties). This file specifies the number of JMeter test threads to use, the duration of the test (in seconds), the IP address of a node (or a load balancer) in the Elasticsearch cluster, and the name of the cluster:
-
- ```ini
- nthreads=3
- duration=300
- elasticip=
- clustername=
- ```
-
- Edit this file and specify the appropriate values for your test and cluster.
-* [query-config-win.ini](https://github.com/mspnp/elasticsearch/blob/master/ingestion-and-query-tests/query-config-win.ini) and [query-config-nix.ini](https://github.com/mspnp/elasticsearch/blob/master/ingestion-and-query-tests/query-config-nix.ini). These two files contain the same information; the *win* file is formatted for Windows filenames and paths, and the *nix* file is formatted for Linux filenames and paths:
-
- ```ini
- [DEFAULT]
- debug=true #if true shows console logs.
-
- [RUN]
- pathreports=C:\Users\administrator1\jmeter\test-results\ #path where tests results are saved.
- jmx=C:\Users\administrator1\testplan.jmx #path to the JMeter test plan.
- machines=10.0.0.1,10.0.0.2,10.0.0.3 #IPs of the Elasticsearch data nodes separated by commas.
- reports=aggr,err,tps,waitio,cpu,network,disk,response,view #Name of the reports separated by commas.
- tests=idx1,idx2 #Elasticsearch index(es) name(s) to test, comma delimited if more than one.
- properties=run.properties #Name of the properties file.
- ```
-
-  Edit this file to specify the location of the test results, the name of the JMeter test plan to run, the IP addresses of the Elasticsearch data nodes you are collecting performance metrics from, the reports containing the raw performance data that will be generated, and the name (or comma-delimited names) of the index(es) under test. If more than one index is specified, the tests run against each index in turn. If the run.properties file is located in a different folder or directory, specify the full path to this file.
-
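The query-test.py script itself is not reproduced here, but its handling of these settings can be sketched with Python's standard configparser module. This is a hypothetical illustration, not the actual script; the keys mirror the sample files above and the values are placeholders:

```python
import configparser

# Hypothetical sketch of how query-test.py might read query-config-win.ini.
# The keys mirror the sample file above; the values are placeholders.
sample = """
[DEFAULT]
debug=true

[RUN]
machines=10.0.0.1,10.0.0.2,10.0.0.3
tests=idx1,idx2
properties=run.properties
"""

config = configparser.ConfigParser(inline_comment_prefixes="#")
config.read_string(sample)

run = config["RUN"]
machines = [m.strip() for m in run["machines"].split(",")]  # data node IPs
indexes = [i.strip() for i in run["tests"].split(",")]      # indexes under test
debug = run.getboolean("debug")  # values in [DEFAULT] are inherited by [RUN]
```

Each index listed in `tests` would then be exercised in turn, with per-node metrics gathered from each address in `machines`.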
-## Running the tests
-* Copy the file [query-test.py](https://github.com/mspnp/elasticsearch/blob/master/ingestion-and-query-tests/query-test.py) to the JMeter server machine, in the same folder as the run.properties and query-config-win.ini (query-config-nix.ini) files.
-* Ensure that jmeter.bat (Windows) or jmeter.sh (Linux) are on the executable path for your environment.
-* Run the query-test.py script from the command line to perform the tests:
-
- ```cmd
- py query-test.py
- ```
-* When the test has completed, the results are stored as the set of comma-separated value (CSV) files specified in the query-config-win.ini (query-config-nix.ini) file. You can use Excel to analyze and graph this data.
-
-[Tuning Data Ingestion Performance for Elasticsearch on Azure]: data-ingestion-performance.md
-[Tuning Data Aggregation and Query Performance for Elasticsearch on Azure]: data-aggregation-and-query-performance.md
-[Creating a Performance Testing Environment for Elasticsearch on Azure]: performance-testing-environment.md
-[Deploying a JMeter JUnit Sampler for Testing Elasticsearch Performance]: jmeter-junit-sampler.md
diff --git a/docs/elasticsearch/automated-resilience-tests.md b/docs/elasticsearch/automated-resilience-tests.md
deleted file mode 100644
index cd3312fe560..00000000000
--- a/docs/elasticsearch/automated-resilience-tests.md
+++ /dev/null
@@ -1,156 +0,0 @@
----
-title: Run the automated Elasticsearch resiliency tests
-description: Description of how you can run the resiliency tests in your own environment.
-author: dragon119
-ms.date: 09/22/2016
-
-pnp.series.title: Elasticsearch on Azure
-pnp.series.prev: deploy-jmeter-junit-sampler
-pnp.series.next: automated-performance-tests
----
-# Run automated resiliency tests
-[!INCLUDE [header](../_includes/header.md)]
-
-In [Configuring resilience and recovery on Elasticsearch on Azure][elasticsearch-resilience-recovery], we described a series of tests that were performed against a sample Elasticsearch cluster to determine how well the system responded to some common forms of failure and how well it recovered. The tests were scripted to enable them to be run in an automated manner. This document describes how you can repeat the tests in your own environment.
-
-The following scenarios were tested:
-
-* **Node failure and restart with no data loss**. A data node is stopped and restarted after 5 minutes.
- Elasticsearch was configured not to reallocate missing shards in this interval, so no additional I/O
- is incurred in moving shards around. When the node restarts, the recovery process brings the shards
- on that node back up to date.
-* **Node failure with catastrophic data loss**. A data node is stopped and the data that it holds
- is erased to simulate catastrophic disk failure. The node is then restarted (after 5 minutes),
- effectively acting as a replacement for the original node. The recovery process requires
- rebuilding the missing data for this node, and may involve relocating shards held on other nodes.
-* **Node failure and restart with no data loss, but with shard reallocation**. A data node is
- stopped and the shards that it holds are reallocated to other nodes. The node is then restarted
- and more reallocation occurs to rebalance the cluster.
-* **Rolling updates**. Each node in the cluster is stopped and restarted after a short interval
- to simulate machines being rebooted after a software update. Only one node is stopped at any one time.
- Shards are not reallocated while a node is down.
-
-## Prerequisites
-The automated tests require the following items:
-
-* An Elasticsearch cluster.
-* A JMeter environment set up as described by the [performance testing guidance].
-* The following additions installed on the JMeter master VM only.
-
- * Java Runtime 7.
-  * Node.js 4.x.x or later.
- * The Git command line tools.
-
-## How the scripts work
-The test scripts are intended to run on the JMeter Master VM. When you select a test to run, the scripts
-perform the following sequence of operations:
-
-1. Start a JMeter test plan passing the parameters that you have specified.
-2. Copy a script that performs the operations required by the test to a specified VM in the cluster. This can be any VM that has a public IP address, or the *Jumpbox* VM if you have built the cluster using the [Azure Elasticsearch quickstart template](https://github.com/Azure/azure-quickstart-templates/tree/master/elasticsearch).
-3. Run the script on the VM (or Jumpbox).
-
-The following image shows the structure of the test environment and Elasticsearch cluster. Note that the test scripts use secure shell (SSH) to connect to each node in the cluster to perform various Elasticsearch operations such as stopping or restarting a node.
-
-![Elasticsearch cluster and test environment](./images/resilience-testing1.png)
-
-## Setting up the JMeter tests
-Before running the resilience tests you should compile and deploy the JUnit tests located in the resiliency/jmeter/tests folder. These tests are referenced by the JMeter test plan. For more information, see the procedure "Importing an existing JUnit test project into Eclipse" in [Deploying a JMeter JUnit sampler for testing Elasticsearch performance][Deploying a JMeter JUnit sampler for testing Elasticsearch performance].
-
-There are two versions of the JUnit tests held in the following folders:
-
-* **Elasticsearch17.** The project in this folder generates the file Elasticsearch17.jar. Use this
- JAR for testing Elasticsearch versions 1.7.x
-* **Elasticsearch20**. The project in this folder generates the file Elasticsearch20.jar. Use this
- JAR for testing Elasticsearch version 2.0.0 and later
-
-Copy the appropriate JAR file along with the rest of the dependencies to your JMeter machines. The
-process is described by the procedure "Deploying a JUnit test to JMeter" in [Deploying a JMeter JUnit sampler for testing Elasticsearch performance].
-
-## Configuring VM security for each node
-The test scripts require that an SSH authentication key be installed on each Elasticsearch node
-in the cluster. This enables the scripts to run automatically without prompting for a username or
-password as they connect to the various VMs.
-
-Start by logging in to one of the nodes in the Elasticsearch cluster (or the Jumpbox VM) and then
-run the following command to generate an authentication key:
-
-```Shell
-ssh-keygen -t rsa
-```
-
-While connected to the Elasticsearch node (or Jumpbox), run the following commands for every node in
-the Elasticsearch cluster. Replace `<username>` with the name of a valid user on each VM, and
-replace `<node>` with the DNS name or IP address of the VM hosting the Elasticsearch node.
-Note that you will be prompted for the password of the user when running these commands.
-For more information see [SSH login without password](http://www.linuxproblem.org/art_9.html):
-
-```Shell
-ssh <username>@<node> mkdir -p .ssh
-cat .ssh/id_rsa.pub | ssh <username>@<node> 'cat >> .ssh/authorized_keys'
-```
-
-## Downloading and configuring the test scripts
-The test scripts are provided in a Git repository. Use the following procedure to download and
-configure the scripts.
-
-On the JMeter master machine where you will run the tests, open a Git desktop window (Git Bash)
-and clone the repository that contains the scripts, as follows:
-
-```Shell
-git clone https://github.com/mspnp/elasticsearch.git
-```
-
-Move to the resiliency-tests folder and run the following command to install the dependencies required
-to run the tests:
-
-```Shell
-npm install
-```
-
-If the JMeter master is running on Windows, download [Plink](http://www.chiark.greenend.org.uk/~sgtatham/putty/download.html), a command-line connection tool from the PuTTY suite. Copy the Plink executable to the resiliency-tests/lib folder.
-
-If the JMeter master is running on Linux, you don’t need to download Plink, but you will need to
-configure password-less SSH between the JMeter master and the Elasticsearch node or Jumpbox,
-following the steps outlined in the procedure "Configuring VM security for each node."
-
-Edit the following configuration parameters in the `config.js` file to match your test
-environment and Elasticsearch cluster. These parameters are common to all of the tests:
-
-| Name | Description | Default Value |
-| --- | --- | --- |
-| `jmeterPath` |Local path where JMeter is located. |`C:/apache-jmeter-2.13` |
-| `resultsPath` |Relative directory where the script dumps the result. |`results` |
-| `verbose` |Indicates whether the script outputs in verbose mode or not. |`true` |
-| `remote` |Indicates whether the JMeter tests run locally or on the remote servers. |`true` |
-| `cluster.clusterName` |The name of the Elasticsearch cluster. |`elasticsearch` |
-| `cluster.jumpboxIp` |The IP address of the Jumpbox machine. |- |
-| `cluster.username` |The admin user you created while deploying the cluster. |- |
-| `cluster.password` |The password for the admin user. |- |
-| `cluster.loadBalancer.ip` |The IP address of the Elasticsearch load balancer. |- |
-| `cluster.loadBalancer.url` |Base URL of the load balancer. |- |
-
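A config.js matching the table above might look like the following sketch. The IP addresses and credentials are placeholders, not values from the repository:

```javascript
// Hypothetical sketch of config.js; the property names mirror the table
// above, while the IPs and credentials are illustrative placeholders.
const config = {
  jmeterPath: "C:/apache-jmeter-2.13",
  resultsPath: "results",
  verbose: true,
  remote: true,
  cluster: {
    clusterName: "elasticsearch",
    jumpboxIp: "10.0.0.4",
    username: "admin",
    password: "your-password",
    loadBalancer: {
      ip: "10.0.0.100",
      url: "http://10.0.0.100:9200"
    }
  }
};

module.exports = config;
```
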
-## Running the tests
-Move to the resiliency-tests folder and run the following command:
-
-```Shell
-node app.js
-```
-
-The following menu should appear:
-
-![Test menu](./images/resilience-testing2.png)
-
-Enter the number of the scenario you want to run: `11`, `12`, `13` or `21`.
-
-Once you select a scenario, the test will run automatically. The results are stored as a set of comma-separated value (CSV) files in a folder created under the results directory. Each run has its own results folder.
-You can use Excel to analyze and graph this data.
-
-[Running Elasticsearch on Azure]: index.md
-[Tuning Data Ingestion Performance for Elasticsearch on Azure]: data-ingestion-performance.md
-[performance testing guidance]: performance-testing-environment.md
-[JMeter guidance]: implementing-jmeter.md
-[Considerations for JMeter]: jmeter-junit-sampler.md
-[Query aggregation and performance]: query-aggregation-performance.md
-[elasticsearch-resilience-recovery]: resilience-and-recovery.md
-[Resilience and Recovery Testing]: automated-resilience-tests.md
-[Deploying a JMeter JUnit Sampler for Testing Elasticsearch Performance]: jmeter-junit-sampler.md
diff --git a/docs/elasticsearch/data-aggregation-and-query-performance.md b/docs/elasticsearch/data-aggregation-and-query-performance.md
deleted file mode 100644
index 448f0868e20..00000000000
--- a/docs/elasticsearch/data-aggregation-and-query-performance.md
+++ /dev/null
@@ -1,1262 +0,0 @@
----
-title: Tune data aggregation and query performance with Elasticsearch
-description: >-
- A summary of considerations when optimizing query and search performance for
- Elasticsearch.
-author: dragon119
-ms.date: 09/22/2016
-
-pnp.series.title: Elasticsearch on Azure
-pnp.series.prev: data-ingestion-performance
-pnp.series.next: resilience-and-recovery
----
-# Tune data aggregation and query performance
-[!INCLUDE [header](../_includes/header.md)]
-
-A primary reason for using Elasticsearch is to support searches through data. Users should be able to quickly locate the information they are looking for. Additionally, the system must enable users to ask questions of the data, seek correlations, and come to conclusions that can drive business decisions. This processing is what turns raw data into actionable information.
-
-This document summarizes options that you can consider when determining the best way to optimize your system for query and search performance.
-
-All performance recommendations depend largely on the scenarios that apply to your situation, the volume of data that you are indexing, and the rate at which applications and users query your data. You should carefully test the results of any change in configuration or indexing structure using your own data and workloads to assess the benefits to your specific scenarios. To this end, this document also describes a number of benchmarks that were performed for one specific scenario implemented using different configurations. You can adapt the approach taken to assess the performance of your own systems. The details of these tests are described in the [appendix](#appendix-the-query-and-aggregation-performance-test).
-
-## Index and query performance considerations
-This section describes some common factors that you should think about when designing indexes that need to support fast querying and searching.
-
-### Storing multiple types in an index
-An Elasticsearch index can contain multiple types. It may be better to avoid this approach and create a separate index for each type. Consider the following points:
-
-* Different types might specify different analyzers, and it is not always clear which analyzer Elasticsearch should use if a query is performed at the index level rather than at the type level. See [Avoiding Type Gotchas](https://www.elastic.co/guide/en/elasticsearch/guide/current/mapping.html#_avoiding_type_gotchas) for details.
-* Shards for indexes that hold multiple types will likely be bigger than those for indexes that contain a single type. The bigger a shard, the more effort is required by Elasticsearch to filter data when performing queries.
-* If there is a significant mismatch between data volumes for the types, information for one type can become sparsely distributed across many shards reducing the efficiency of searches that retrieve this data.
-
- ![The effects of sharing an index between types](./images/query-performance1.png)
-
- ***The effects of sharing an index between types***
-
- In the upper part of the diagram, the same index is shared by documents of type A and type B. There are many more documents of type A than type B. Searches for type A will involve querying all four shards. The lower part of the diagram shows the effect if separate indexes are created for each type. In this case, searches for type A will only require accessing two shards.
-* Small shards can be more evenly distributed than large shards, making it easier for Elasticsearch to spread the load across nodes.
-* Different types might have different retention periods. It can be difficult to archive old data that shares shards with active data.
-
-However, under some circumstances sharing an index across types can be efficient if:
-
-* Searches regularly span types held in the same index.
-* The types only have a small number of documents each. Maintaining a separate set of shards for each type can become a significant overhead in this case.
-
-### Optimizing index types
-An Elasticsearch index contains a copy of the original JSON documents that were used to populate it. This information is held in the [*\_source*](https://www.elastic.co/guide/en/elasticsearch/reference/current/mapping-source-field.html#mapping-source-field) field of each indexed item. This data is not searchable, but by default is returned by *get* and *search* requests. However, this field incurs overhead and occupies storage, making shards larger and increasing the volume of I/O performed. You can disable the *\_source* field on a per type basis:
-
-```http
-PUT my_index
-{
- "mappings": {
- "my_type": {
- "_source": {
- "enabled": false
- }
- }
- }
-}
-```
-Disabling this field also removes the ability to perform the following operations:
-
-* Updating data in the index by using the *update* API.
-* Performing searches that return highlighted data.
-* Reindexing from one Elasticsearch index directly to another.
-* Changing mappings or analysis settings.
-* Debugging queries by viewing the original document.
-
-### Reindexing data
-The number of shards available to an index ultimately determines the capacity of the index. You can take an initial (and informed) guess at how many shards will be required, but you should always consider your document re-indexing strategy up front. In many cases, reindexing may be an intended task as data grows. You may not want to allocate a large number of shards to an index initially, for the sake of search optimization, but allocate new shards as the volume of data expands. In other cases reindexing might need to be performed on a more ad-hoc basis if your estimates about data volume growth simply prove to be inaccurate.
-
-> [!NOTE]
-> Reindexing might not be necessary for data that ages quickly. In this case, an application might create a new index for each period of time. Examples include performance logs or audit data which could be stored in a fresh index each day.
->
->
-
-
-
-Reindexing effectively involves creating a new index from the data in an old one, and then removing the old index. If an index is large, this process can take time, and you may need to ensure that the data remains searchable during this period. For this reason, you should create an [alias for each index](https://www.elastic.co/guide/en/elasticsearch/reference/current/indices-aliases.html), and queries should retrieve data through these aliases. While reindexing, keep the alias pointing at the old index, and then switch it to reference the new index once reindexing is complete. This approach is also useful for accessing time-based data that creates a new index each day. To access the current data use an alias that rolls over to the new index as it is created.
-
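The alias switch described above can be performed atomically with the *_aliases* endpoint, so queries never see a moment when the alias points at neither index. The index and alias names here are illustrative:

```http
POST /_aliases
{
  "actions": [
    { "remove": { "index": "myindex_v1", "alias": "myindex" } },
    { "add": { "index": "myindex_v2", "alias": "myindex" } }
  ]
}
```
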
-### Managing mappings
-Elasticsearch uses mappings to determine how to interpret the data that occurs in each field in a document. Each type has its own mapping, which effectively defines a schema for that type. Elasticsearch uses this information to generate inverted indexes for each field in the documents in a type. In any document, each field has a datatype (such as *string*, *date*, or *long*) and a value. You can specify the mappings for an index when the index is first created, or they can be inferred by Elasticsearch when new documents are added to a type. However, consider the following points:
-
-* Mappings generated dynamically can cause errors depending on how fields are interpreted when documents are added to an index. For example, document 1 could contain a field A that holds a number and causes Elasticsearch to add a mapping that specifies that this field is a *long*. If a subsequent document is added in which field A contains nonnumeric data, then indexing will fail. In this case, field A should probably have been interpreted as a string when the first document was added. Specifying this mapping when the index is created can help to prevent such problems.
-* Design your documents to avoid generating excessively large mappings as this can add significant overhead when performing searches, consume lots of memory, and also cause queries to fail to find data. Adopt a consistent naming convention for fields in documents that share the same type. For example, don't use field names such as "first_name", "FirstName", and "forename" in different documents. Use the same field name in each document. Additionally, do not attempt to use values as keys (this is a common approach in Column-Family databases, but can cause inefficiencies and failures with Elasticsearch.) For more information, see [Mapping Explosion](https://www.elastic.co/blog/found-crash-elasticsearch#mapping-explosion).
-* Use *not_analyzed* to avoid tokenization where appropriate. For example, if a document contains a string field named *data* that holds the value "ABC-DEF" then you might attempt to perform a search for all documents that match this value as follows:
-
- ```http
- GET /myindex/mydata/_search
- {
- "query" : {
- "filtered" : {
- "filter" : {
- "term" : {
- "data" : "ABC-DEF"
- }
- }
- }
- }
- }
- ```
-
- However, this search will fail to return the expected results due to the way in which the string ABC-DEF is tokenized when it is indexed. It will be effectively split into two tokens, ABC and DEF, by the hyphen. This feature is designed to support full text searching, but if you want the string to be interpreted as a single atomic item you should disable tokenization when the document is added to the index. You can use a mapping such as this:
-
- ```http
- PUT /myindex
- {
- "mappings" : {
- "mydata" : {
- "properties" : {
- "data" : {
- "type" : "string",
- "index" : "not_analyzed"
- }
- }
- }
- }
- }
- ```
-
- For more information, see [Finding Exact Values](https://www.elastic.co/guide/en/elasticsearch/guide/current/_finding_exact_values.html#_term_filter_with_text).
-
-### Using doc values
-Many queries and aggregations require that data is sorted as part of the search operation. Sorting requires being able to map one or more terms to a list of documents. To assist in this process, Elasticsearch can load all of the values for a field used as a sort key into memory. This information is known as *fielddata*. The intent is that caching fielddata in memory incurs less I/O and might be faster than repeatedly reading the same data from disk. However, if a field has high cardinality then storing the fielddata in memory can consume a lot of heap space, possibly impacting the ability to perform other concurrent operations, or even exhausting memory and causing Elasticsearch to fail.
-
-As an alternative approach, Elasticsearch also supports *doc values*. A doc value is similar to an item of in-memory fielddata except that it is stored on disk and created when data is stored in an index (fielddata is constructed dynamically when a query is performed). Doc values do not consume heap space, and so are useful for queries that sort or aggregate data across fields that can contain a very large number of unique values. Additionally, the reduced pressure on the heap can help to offset the performance differences between retrieving data from disk and reading from memory. Garbage collection is likely to occur less often, and other concurrent operations that utilize memory are less likely to be affected.
-
-You enable or disable doc values on a per-property basis in an index using the *doc_values* attribute, as shown by the following example:
-
-```http
-PUT /myindex
-{
- "mappings" : {
- "mydata" : {
- "properties" : {
- "data" : {
- ...
- "doc_values": true
- }
- }
- }
- }
-}
-```
-> [!NOTE]
-> Doc values are enabled by default with Elasticsearch version 2.0.0 onwards.
->
->
-
-The exact impact of using doc values is likely to be highly specific to your own data and query scenarios, so be prepared to conduct performance testing to establish their usefulness. You should also note that doc values do not work with analyzed string fields.
-
-### Using replicas to reduce query contention
-A common strategy to boost the performance of queries is to create many replicas of each index. Data retrieval operations can be satisfied by fetching data from a replica. However, this strategy can severely impact the performance of data ingestion operations, so it needs to be used with care in scenarios that involve mixed workloads. Additionally, this strategy is only of benefit if replicas are distributed across nodes and do not compete for resources with primary shards that are part of the same index. Remember that it is possible to increase or decrease the number of replicas for an index dynamically.
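-
-For example, the number of replicas for an existing index can be changed dynamically by using the index settings API, as sketched below (this assumes an index named *myindex*):
-
-```http
-PUT /myindex/_settings
-{
-  "index.number_of_replicas": 2
-}
-```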
-
-### Using the shard request cache
-Elasticsearch can cache in memory the local data requested by queries on each shard. This enables searches that retrieve the same data to run more quickly because the data can be read from memory rather than disk storage. Caching data in this way can therefore improve the performance of some search operations, at the cost of reducing the memory available to other tasks being performed concurrently. There is also the risk that data served from the cache is outdated. The data in the cache is only invalidated when the shard is refreshed and the data has changed. The frequency of refreshes is governed by the value of the *refresh_interval* setting of the index.
-
-The request caching for an index is disabled by default, but you can enable it as follows:
-
-```http
-PUT /myindex/_settings
-{
- "index.requests.cache.enable": true
-}
-```
-
-The shard request cache is most suitable for information that remains relatively static, such as historical or logging data.
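-
-Caching can also be requested on a per-query basis by using the *request_cache* query-string parameter. Note that the shard request cache only caches searches of size 0, such as the aggregation sketched below (this example assumes an index named *myindex* and uses the *Rating* field from the test schema described later in this document):
-
-```http
-GET /myindex/_search?request_cache=true
-{
-  "size": 0,
-  "aggs": {
-    "ratings": {
-      "terms": { "field": "Rating" }
-    }
-  }
-}
-```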
-
-### Using client nodes
-All queries are processed by the node that first receives the request. This node sends further requests to all other nodes containing shards for the indices being queried, and then accumulates the results for returning the response. If a query involves aggregating data or performing complex computations, the initial node is responsible for performing the appropriate processing. If your system has to support a relatively small number of complex queries, consider creating a pool of client nodes to alleviate the load on the data nodes. Conversely, if your system has to handle a large number of simple queries, then submit these requests directly to the data nodes, and use a load balancer to distribute the requests evenly.
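-
-A client node holds no data and is not eligible to become the master. A minimal sketch of the relevant *elasticsearch.yml* settings (using the Elasticsearch 2.x setting names) is:
-
-```yaml
-# A client node neither stores shards nor acts as master; it only
-# routes requests, distributes searches, and gathers results.
-node.master: false
-node.data: false
-```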
-
-### Tuning queries
-The following points summarize tips for maximizing the performance of Elasticsearch queries:
-
-* Avoid queries that involve wild cards wherever possible.
-* If the same field is subject to full-text searching and exact matching, then consider storing the data for the field in analyzed and nonanalyzed forms. Perform full-text searches against the analyzed field, and exact matches against the nonanalyzed field.
-* Only return the data necessary. If you have large documents, but an application only requires information held in a subset of the fields, then return this subset from queries rather than entire documents. This strategy can reduce the network bandwidth requirements of the cluster.
-* Wherever possible, use filters instead of queries when searching for data. A filter simply determines whether a document matches a given criterion whereas a query also calculates how close a match a document is (scoring). Internally, the values generated by a filter are stored as a bitmap indicating match/no match for each document, and they can be cached by Elasticsearch. If the same filter criterion occurs subsequently, the bitmap can be retrieved from cache and used to quickly fetch the matching documents. For more information, see [Internal Filter Operation](https://www.elastic.co/guide/en/elasticsearch/guide/current/_finding_exact_values.html#_internal_filter_operation).
-* Use *bool* filters for performing static comparisons, and only use *and*, *or*, and *not* filters for dynamically calculated filters, such as those that involve scripting or the *geo-\** filters.
-* If a query combines *bool* filters with *and*, *or*, or *not* with *geo-\** filters, place the *and*, *or*, *not*, and *geo-\** filters last so that they operate on the smallest data set possible.
-
- Similarly, use a *post_filter* to run expensive filter operations. These filters will be performed last.
-* Use aggregations rather than facets. Avoid calculating aggregates that are analyzed or that have many possible values.
-
-  > [!NOTE]
-  > Facets have been removed in Elasticsearch version 2.0.0.
-  >
-  >
-* Use the *cardinality* aggregation in preference to the *value_count* aggregation unless your application requires an exact count of matching items. An exact count can quickly become outdated, and many applications only require a reasonable approximation.
-* Avoid scripting. Scripts in queries and filters can be expensive and the results are not cached. Long-running scripts can consume search threads indefinitely, causing subsequent requests to be queued. If the queue fills up, further requests will be rejected.
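-
-As an illustration of the second tip above (storing a field in analyzed and nonanalyzed forms), a string field can be mapped as a multi-field. The following sketch assumes an index named *myindex*, a type named *mydata*, and a field named *description*:
-
-```http
-PUT /myindex
-{
-  "mappings": {
-    "mydata": {
-      "properties": {
-        "description": {
-          "type": "string",
-          "fields": {
-            "raw": { "type": "string", "index": "not_analyzed" }
-          }
-        }
-      }
-    }
-  }
-}
-```
-
-Full-text searches can then be run against the analyzed *description* field, while exact matches and aggregations use the nonanalyzed *description.raw* field.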
-
-## Testing and analyzing aggregation and search performance
-This section describes the results of a series of tests that were performed against varying cluster and index configurations. Two types of tests were performed, as follows:
-
-* **The *ingestion and query* test**. This test started with an empty index that was populated as the test proceeded by performing bulk insert operations (each operation added 1000 documents). At the same time, a number of queries designed to search for documents added during the preceding 15-minute period and generate aggregations were repeated at 5-second intervals. This test was typically allowed to run for 24 hours, to reproduce the effects of a challenging workload comprising large-scale data ingestion with near real-time queries.
-* **The *query-only* test**. This test is similar to the *ingestion and query* test, except that the ingestion part is omitted and the index on each node is prepopulated with 100 million documents. An amended set of queries is performed; the time element limiting documents to those added in the last 15 minutes was removed because the data was now static. The tests ran for 90 minutes; less time is needed to establish a performance pattern due to the fixed amount of data.
-
-- - -
-Each document in the index had the same schema. This table summarizes the fields in the schema:
-
-| Name | Type | Notes |
-| --- | --- | --- |
-| Organization |String |The test generates 200 unique organizations. |
-| CustomField1 - CustomField5 |String |These are five string fields which are set to the empty string. |
-| DateTimeRecievedUtc |Timestamp |The date and time at which the document was added. |
-| Host |String |This field is set to the empty string. |
-| HttpMethod |String |This field is set to one of the following values: "POST","GET","PUT". |
-| HttpReferrer |String |This field is set to the empty string. |
-| HttpRequest |String |This field is populated with random text between 10 and 200 characters in length. |
-| HttpUserAgent |String |This field is set to the empty string. |
-| HttpVersion |String |This field is set to the empty string. |
-| OrganizationName |String |This field is set to the same value as the Organization field. |
-| SourceIp |IP |This field contains an IP address indicating the "origin" of the data. |
-| SourceIpAreaCode |Long |This field is set to 0. |
-| SourceIpAsnNr |String |This field is set to "AS\#\#\#\#\#". |
-| SourceIpBase10 |Long |This field is set to 500. |
-| SourceIpCountryCode |String |This field contains a 2-character country code. |
-| SourceIpCity |String |This field contains a string identifying a city in a country. |
-| SourceIpLatitude |Double |This field contains a random value. |
-| SourceIpLongitude |Double |This field contains a random value. |
-| SourceIpMetroCode |Long |This field is set to 0. |
-| SourceIpPostalCode |String |This field is set to the empty string. |
-| SourceLatLong |Geo point |This field is set to a random geo point. |
-| SourcePort |String |This field is populated with the string representation of a random number. |
-| TargetIp |IP |This is populated with a random IP address in the range 0.0.100.100 to 255.9.100.100. |
-| SourcedFrom |String |This field is set to the string "MonitoringCollector". |
-| TargetPort |String |This field is populated with the string representation of a random number. |
-| Rating |String |This field is populated with one of 20 different string values selected at random. |
-| UseHumanReadableDateTimes |Boolean |This field is set to false. |
-
-The following queries were performed as a batch by each iteration of the tests. The names in italics are used to refer to these queries in the remainder of this document. Note that the time criterion (documents added in the last 15 minutes) was omitted from the *query-only* tests:
-
-* How many documents with each *Rating* value have been entered in the last 15 minutes (*Count By rating*)?
-* How many documents have been added in each 5 minute interval during the last 15 minutes (*Count over time*)?
-* How many documents of each *Rating* value have been added for each country in the last 15 minutes (*Hits by country*)?
-* Which 15 organizations occur most frequently in documents added in the last 15 minutes (*Top 15 organizations*)?
-* How many different organizations occur in documents added in the last 15 minutes (*Unique count organizations*)?
-* How many documents have been added in the last 15 minutes (*Total hits count*)?
-* How many different *SourceIp* values occur in documents added in the last 15 minutes (*Unique IP count*)?
-
-The definition of the index and the details of the queries are outlined in the [appendix](#appendix-the-query-and-aggregation-performance-test).
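-
-As an example of the general shape of these queries, the *Count By rating* query can be sketched as a *terms* aggregation over documents added in the last 15 minutes. This is only an approximation of the definition in the appendix, and assumes an index named *myindex*:
-
-```http
-GET /myindex/_search
-{
-  "size": 0,
-  "query": {
-    "bool": {
-      "filter": {
-        "range": { "DateTimeRecievedUtc": { "gte": "now-15m" } }
-      }
-    }
-  },
-  "aggs": {
-    "count_by_rating": {
-      "terms": { "field": "Rating" }
-    }
-  }
-}
-```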
-
-The tests were designed to understand the effects of the following variables:
-
-* **Disk type**. Tests were performed on a 6-node cluster of D4 VMs using standard storage (HDDs) and repeated on a 6-node cluster of DS4 VMs using premium storage (SSDs).
-* **Machine size - scaling up**. Tests were performed on a 6-node cluster comprising DS3 VMs (designated as the *small* cluster), repeated on a cluster of DS4 VMs (the *medium* cluster), and repeated again on a cluster of DS14 machines (the *large* cluster). The following table summarizes the key characteristics of each VM SKU:
-
- | Cluster | VM SKU | Number of cores | Number of data disks | RAM (GB) |
- | --- | --- | --- | --- | --- |
- | Small |Standard DS3 |4 |8 |14 |
- | Medium |Standard DS4 |8 |16 |28 |
- | Large |Standard DS14 |16 |32 |112 |
-* **Cluster size - scaling out**. Tests were performed on clusters of DS14 VMs containing 1, 3, and 6 nodes.
-* **Number of index replicas**. Tests were performed using indexes configured with 1 and 2 replicas.
-* **Doc values**. Initially the tests were performed with the index setting *doc_values* set to *true* (the default value). Selected tests were repeated with *doc_values* set to *false*.
-* **Caching**. Tests were conducted with the shard request cache enabled on the index.
-* **Number of shards**. Tests were repeated using varying numbers of shards to establish whether queries ran more efficiently across indexes containing fewer, larger shards or more, smaller shards.
-
-## Performance results - disk type
-Disk performance was evaluated by running the *ingestion and query* test on the 6-node cluster of D4 VMs (using HDDs), and on the 6-node cluster of DS4 VMs (using SSDs). The configuration of Elasticsearch in both clusters was the same. The data was spread across 16 disks on each node, and each node had 14GB of RAM allocated to the Java virtual machine (JVM) running Elasticsearch, while the remaining memory (also 14GB) was left for operating system use. Each test ran for 24 hours. This period was selected to enable the effects of the increasing volume of data to become apparent and to allow the system to stabilize. The table below summarizes the results, highlighting the response times of the various operations that comprised the test.
-
-| Cluster | Operation/query | Average response time (ms) |
-| --- | --- | --- |
-| D4 | Ingestion | 978 |
-| | Count By Rating | 103 |
-| | Count Over Time | 134 |
-| | Hits By Country | 199 |
-| | Top 15 Organizations | 137 |
-| | Unique Count Organizations | 139 |
-| | Unique IP Count | 510 |
-| | Total Hits Count | 89 |
-| DS4 | Ingestion | 511 |
-| | Count By Rating | 187 |
-| | Count Over Time | 411 |
-| | Hits By Country | 402 |
-| | Top 15 Organizations | 307 |
-| | Unique Count Organizations | 320 |
-| | Unique IP Count | 841 |
-| | Total Hits Count | 236 |
-
-At first glance, it would appear that the DS4 cluster performed queries less well than the D4 cluster, at times doubling (or worse) the response time. This does not tell the whole story, though. The next table shows the number of ingestion operations performed by each cluster (remember that each operation loads 1000 documents):
-
-| Cluster | Ingestion operation count |
-| --- | --- |
-| D4 |264769 |
-| DS4 |503157 |
-
-The DS4 cluster was able to load nearly twice as much data as the D4 cluster during the test. Therefore, when analyzing the response times for each operation, you also need to consider how many documents each query has to scan, and how many documents are returned. These are dynamic figures because the volume of documents in the index is continually growing. You cannot simply divide 503157 by 264769 (the number of ingestion operations performed by each cluster) and then multiply the result by the average response time for each query performed by the D4 cluster to give comparative information, as this ignores the amount of I/O being performed concurrently by the ingestion operation. Instead, you should measure the physical amount of data being written to and read from disk as the test proceeds. The JMeter test plan captures this information for each node. The summarized results are:
-
-| Cluster | Average bytes written/read by each operation |
-| --- | --- |
-| D4 |13471557 |
-| DS4 |24643470 |
-
-This data shows that the DS4 cluster was able to sustain an I/O rate approximately 1.8 times that of the D4 cluster. Given that, apart from the nature of the disks, all other resources were the same, the difference must be due to using SSDs rather than HDDs.
-
-To help justify this conclusion, the following graphs illustrate how the I/O was performed over time by each cluster:
-
-![Disk activity graph](./images/query-performance2.png)
-
-
-
-***Disk activity for the D4 and DS4 clusters***
-
-The graph for the D4 cluster shows significant variation, especially during the first half of the test. This was likely due to throttling to reduce the I/O rate. In the initial stages of the test, the queries are able to run quickly as there is little data to analyze. The disks in the D4 cluster are therefore likely to be operating close to their input/output operations per second (IOPS) capacity, although each I/O operation might not be returning much data. The DS4 cluster is able to support a higher IOPS rate and does not suffer the same degree of throttling, so the I/O rates are more regular. To support this theory, the next pair of graphs show how the CPU was blocked by disk I/O over time (the disk wait times shown in the graphs are the proportion of the time that the CPU spent waiting for I/O):
-
-![Disk wait times graph](./images/query-performance3.png)
-
-***CPU disk I/O wait times for the D4 and DS4 clusters***
-
-It is important to understand that there are two predominant reasons for I/O operations to block the CPU:
-
-* The I/O subsystem could be reading or writing data to or from disk.
-* The I/O subsystem could be throttled by the host environment. Azure disks implemented by using HDDs have a maximum throughput of 500 IOPS, and SSDs have a maximum throughput of 5000 IOPS.
-
-For the D4 cluster, the amount of time spent waiting for I/O during the first half of the test correlates closely, in an inverted manner, with the graph showing the I/O rates. Periods of low I/O correspond to periods of significant time the CPU spends blocked, which indicates that I/O is being throttled. As more data is added to the cluster the situation changes, and in the second half of the test peaks in I/O wait times correspond with peaks in I/O throughput. At this point, the CPU is blocked while performing real I/O. Again, with the DS4 cluster, the time spent waiting for I/O is much more even. Each peak matches an equivalent peak in I/O performance rather than a trough, which implies that there is little or no throttling occurring.
-
-There is one other factor to consider. During the test, the D4 cluster generated 10584 ingestion errors, and 21 query errors. The test on the DS4 cluster produced no errors.
-
-## Performance results - scaling up
-Scale-up testing was performed by running tests against 6-node clusters of DS3, DS4, and DS14 VMs. These SKUs were selected because a DS4 VM provides twice as many CPU cores and as much memory as a DS3, and a DS14 machine doubles the CPU resources again while providing four times the amount of memory. The table below compares the key aspects of each SKU:
-
-| SKU | \#CPU cores | Memory (GB) | Max disk IOPS | Max bandwidth (MB/s) |
-| --- | --- | --- | --- | --- |
-| DS3 |4 |14 |12,800 |128 |
-| DS4 |8 |28 |25,600 |256 |
-| DS14 |16 |112 |50,000 |512 |
-
-The following table summarizes the results of running the tests on the small (DS3), medium (DS4), and large (DS14) clusters. Each VM used SSDs to hold the data. Each test ran for 24 hours.
-
-The table reports the number of successful requests for each type of query (failures are not included). The number of requests attempted for each type of query is roughly the same during a run of the test. This is because the JMeter test plan executes a single occurrence of each query (Count By Rating, Count Over Time, Hits By Country, Top 15 Organizations, Unique Count Organizations, Unique IP Count, and Total Hits Count) together in a single unit known as a *test transaction* (this transaction is independent of the task that performs the ingestion operation, which is run by a separate thread). Each iteration of the test plan performs a single test transaction. The number of test transactions completed is therefore governed by the response time of the slowest query in each transaction.
-
-| Cluster | Operation/query | Number of requests | Average response time (ms) |
-| --- | --- | --- | --- |
-| Small (DS3) | Ingestion | 207284 | 3328 |
-| | Count By Rating | 18444 | 268 |
-| | Count Over Time | 18444 | 340 |
-| | Hits By Country | 18445 | 404 |
-| | Top 15 Organizations | 18439 | 323 |
-| | Unique Count Organizations | 18437 | 338 |
-| | Unique IP Count | 18442 | 468 |
-| | Total Hits Count | 18428 | 294 |
-| | | | |
-| Medium (DS4) | Ingestion | 503157 | 511 |
-| | Count By Rating | 6958 | 187 |
-| | Count Over Time | 6958 | 411 |
-| | Hits By Country | 6958 | 402 |
-| | Top 15 Organizations | 6958 | 307 |
-| | Unique Count Organizations | 6956 | 320 |
-| | Unique IP Count | 6955 | 841 |
-| | Total Hits Count | 6958 | 236 |
-| | | | |
-| Large (DS14) | Ingestion | 502714 | 511 |
-| | Count By Rating | 7041 | 201 |
-| | Count Over Time | 7040 | 298 |
-| | Hits By Country | 7039 | 363 |
-| | Top 15 Organizations | 7038 | 244 |
-| | Unique Count Organizations | 7037 | 283 |
-| | Unique IP Count | 7037 | 681 |
-| | Total Hits Count | 7038 | 200 |
-
-These figures show that, for this test, the performance of the DS4 and DS14 clusters was reasonably similar. The response times for the query operations for the DS3 cluster also appear to compare favorably initially, and the number of query operations performed is far in excess of the values for the DS4 and DS14 clusters. However, you should also take note of the ingestion rate and the consequent number of documents being searched. In the DS3 cluster, ingestion is far more constrained, and by the end of the test the database only contained about 40% of the documents read in by each of the other two clusters. This could be due to the processing resources, network, and disk bandwidth available to a DS3 VM compared to a DS4 or DS14 VM. Given that a DS4 VM has twice as many resources available as a DS3 VM, and a DS14 has twice (four times for memory) the resources of a DS4 VM, one question remains: Why is the difference in ingestion rates between the DS4 and DS14 clusters significantly less than that between the DS3 and DS4 clusters? This could be due to the network utilization and bandwidth limits of Azure VMs. The graphs below show this data for all three clusters:
-
-![Disk activity graph](./images/query-performance4.png)
-
-**Network utilization for the DS3, DS4, and DS14 clusters performing the ingestion and query test**
-
-
-
-The limits of available network bandwidth with Azure VMs are not published and can vary, but the fact that network activity appears to have leveled off at an average of around 2.75GBps for both the DS4 and DS14 tests suggests that such a limit has been reached and has become the primary factor in restricting throughput. In the case of the DS3 cluster, the network activity was considerably lower, so the lower performance is more likely due to constraints in the availability of other resources.
-
-To isolate the effects of the ingestion operations and illustrate how query performance varies as nodes scale up, a set of query-only tests was performed using the same nodes. The following table summarizes the results obtained on each cluster:
-
-> [!NOTE]
-> You should not compare the performance and number of requests executed by queries in the *query-only* test with those run by the *ingestion and query* test. This is because the queries have been modified and the volume of documents involved is different.
->
->
-
-| Cluster | Operation/query | Number of requests | Average response time (ms) |
-| --- | --- | --- | --- |
-| Small (DS3) | Count By Rating | 464 | 11758 |
-| | Count Over Time | 464 | 14699 |
-| | Hits By Country | 463 | 14075 |
-| | Top 15 Organizations | 464 | 11856 |
-| | Unique Count Organizations | 462 | 12314 |
-| | Unique IP Count | 461 | 19898 |
-| | Total Hits Count | 462 | 8882 |
-| | | | |
-| Medium (DS4) | Count By Rating | 1045 | 4489 |
-| | Count Over Time | 1045 | 7292 |
-| | Hits By Country | 1053 | 7564 |
-| | Top 15 Organizations | 1055 | 5066 |
-| | Unique Count Organizations | 1051 | 5231 |
-| | Unique IP Count | 1051 | 9228 |
-| | Total Hits Count | 1051 | 2180 |
-| | | | |
-| Large (DS14) | Count By Rating | 1842 | 1927 |
-| | Count Over Time | 1839 | 4483 |
-| | Hits By Country | 1838 | 4761 |
-| | Top 15 Organizations | 1842 | 2117 |
-| | Unique Count Organizations | 1837 | 2393 |
-| | Unique IP Count | 1837 | 7159 |
-| | Total Hits Count | 1837 | 642 |
-
-This time, the trends in the average response times across the different clusters are clearer. Network utilization is well below the 2.75GBps observed earlier for the DS4 and DS14 clusters (which probably saturated the network in the *ingestion and query* tests), and the 1.5GBps for the DS3 cluster. In fact, it is closer to 200MBps in all cases, as shown by the graphs below:
-
-![CPU utilization graph](./images/query-performance5.png)
-
-***Network utilization for the DS3, DS4 and DS14 clusters performing the query-only test***
-
-The limiting factor in the DS3 and DS4 clusters now appears to be CPU utilization, which is close to 100% for much of the time. In the DS14 cluster the CPU usage averages just over 80%. This is still high, but clearly highlights the advantages of having more CPU cores available. The following image depicts the CPU usage patterns for the DS3, DS4, and DS14 clusters.
-
-![CPU utilization graph](./images/query-performance6.png)
-
-***CPU utilization for the DS3 and DS14 clusters performing the query-only test***
-
-## Performance results - scaling out
-To illustrate how the system scales out with the number of nodes, tests were run using DS14 clusters containing 1, 3, and 6 nodes. This time, only the *query-only* test was performed, using 100 million documents and running for 90 minutes:
-
-| Cluster | Operation/query | Number of requests | Average response time (ms) |
-| --- | --- | --- | --- |
-| 1 Node | Count By Rating | 288 | 6216 |
-| | Count Over Time | 288 | 28933 |
-| | Hits By Country | 288 | 29455 |
-| | Top 15 Organizations | 288 | 9058 |
-| | Unique Count Organizations | 287 | 19916 |
-| | Unique IP Count | 284 | 54203 |
-| | Total Hits Count | 287 | 3333 |
-| | | | |
-| 3 Nodes | Count By Rating | 1194 | 3427 |
-| | Count Over Time | 1194 | 5381 |
-| | Hits By Country | 1191 | 6840 |
-| | Top 15 Organizations | 1196 | 3819 |
-| | Unique Count Organizations | 1190 | 2938 |
-| | Unique IP Count | 1189 | 12516 |
-| | Total Hits Count | 1191 | 1272 |
-| | | | |
-| 6 Nodes | Count By Rating | 1842 | 1927 |
-| | Count Over Time | 1839 | 4483 |
-| | Hits By Country | 1838 | 4761 |
-| | Top 15 Organizations | 1842 | 2117 |
-| | Unique Count Organizations | 1837 | 2393 |
-| | Unique IP Count | 1837 | 7159 |
-| | Total Hits Count | 1837 | 642 |
-
-The number of nodes makes a significant difference to the query performance of the cluster, although in a nonlinear manner. The 3-node cluster completes approximately 4 times as many queries as the single-node cluster, while the 6-node cluster handles 6 times as many. To help explain this nonlinearity, the following graphs show how the CPU was being consumed by the three clusters:
-
-![CPU utilization graph](./images/query-performance7.png)
-
-***CPU utilization for the 1, 3, and 6-node clusters performing the query-only test***
-
-The single-node and 3-node clusters are CPU-bound. Although CPU utilization is high in the 6-node cluster, there is spare processing capacity available, so in this case other factors are likely to be limiting the throughput. This could be confirmed by testing with 9 and 12 nodes, which would likely show further spare processing capacity.
-
-The data in the table above also shows how the average response times for the queries vary. This is the item that is most informative when testing how a system scales for specific types of query. Some searches are clearly far more efficient when spanning more nodes than others. This could be due to the ratio between the number of nodes and the number of documents in the cluster increasing; each cluster contained 100 million documents. When performing searches that involve aggregating data, Elasticsearch processes and buffers the data retrieved as part of the aggregation process in memory on each node. With more nodes, there is less data to retrieve, buffer, and process on each node.
-
-## Performance results - number of replicas
-The *ingestion and query* tests were run against an index with a single replica. The tests were repeated on the 6-node DS4 and DS14 clusters using an index configured with two replicas. All tests ran for 24 hours. The table below shows the comparative results for one and two replicas:
-
-| Cluster | Operation/query | Average response time (ms) - 1 replica | Average response time (ms) - 2 replicas | % difference in response time |
-| --- | --- | --- | --- | --- |
-| DS4 | Ingestion | 511 | 655 | +28% |
-| | Count By Rating | 187 | 168 | -10% |
-| | Count Over Time | 411 | 309 | -25% |
-| | Hits By Country | 402 | 562 | +40% |
-| | Top 15 Organizations | 307 | 366 | +19% |
-| | Unique Count Organizations | 320 | 378 | +18% |
-| | Unique IP Count | 841 | 987 | +17% |
-| | Total Hits Count | 236 | 236 | +0% |
-| | | | | |
-| DS14 | Ingestion | 511 | 618 | +21% |
-| | Count By Rating | 201 | 275 | +37% |
-| | Count Over Time | 298 | 466 | +56% |
-| | Hits By Country | 363 | 529 | +46% |
-| | Top 15 Organizations | 244 | 407 | +67% |
-| | Unique Count Organizations | 283 | 403 | +42% |
-| | Unique IP Count | 681 | 823 | +21% |
-| | Total Hits Count | 200 | 221 | +11% |
-
-The ingestion rate decreased as the number of replicas increased. This should be expected, because Elasticsearch is writing more copies of each document, generating additional disk I/O. This is reflected by the graphs for the DS14 cluster for indexes with 1 and 2 replicas shown in the image below. In the case of the index with 1 replica, the average I/O rate was 16896573 bytes/second. For the index with 2 replicas, the average I/O rate was 33986843 bytes/second, just over twice the rate.
-
-![Disk activity graph](./images/query-performance8.png)
-
-***Disk I/O rates for nodes with 1 and 2 replicas performing the ingestion and query test***
-
-The following table compares the average response times for the queries in the *query-only* test when run against indexes configured with 1 and 2 replicas:
-
-| Cluster | Query | Average response time (ms) - 1 replica | Average response time (ms) - 2 replicas |
-| --- | --- | --- | --- |
-| DS4 | Count By Rating | 4489 | 4079 |
-| | Count Over Time | 7292 | 6697 |
-| | Hits By Country | 7564 | 7173 |
-| | Top 15 Organizations | 5066 | 4650 |
-| | Unique Count Organizations | 5231 | 4691 |
-| | Unique IP Count | 9228 | 8752 |
-| | Total Hits Count | 2180 | 1909 |
-| | | | |
-| DS14 | Count By Rating | 1927 | 2330 |
-| | Count Over Time | 4483 | 4381 |
-| | Hits By Country | 4761 | 5341 |
-| | Top 15 Organizations | 2117 | 2560 |
-| | Unique Count Organizations | 2393 | 2546 |
-| | Unique IP Count | 7159 | 7048 |
-| | Total Hits Count | 642 | 708 |
-
-These results show an improvement in average response time for the DS4 cluster, but an increase for the DS14 cluster. To help interpret these results, you should also consider the number of queries performed by each test:
-
-| Cluster | Query | Number performed - 1 replica | Number performed - 2 replicas |
-| --- | --- | --- | --- |
-| DS4 | Count By Rating | 1054 | 1141 |
-| | Count Over Time | 1054 | 1139 |
-| | Hits By Country | 1053 | 1138 |
-| | Top 15 Organizations | 1055 | 1141 |
-| | Unique Count Organizations | 1051 | 1136 |
-| | Unique IP Count | 1051 | 1135 |
-| | Total Hits Count | 1051 | 1136 |
-| | | | |
-| DS14 | Count By Rating | 1842 | 1718 |
-| | Count Over Time | 1839 | 1716 |
-| | Hits By Country | 1838 | 1714 |
-| | Top 15 Organizations | 1842 | 1718 |
-| | Unique Count Organizations | 1837 | 1712 |
-| | Unique IP Count | 1837 | 1712 |
-| | Total Hits Count | 1837 | 1712 |
-
-This data shows that the number of queries performed by the DS4 cluster increased in line with the decrease in average response time, but again the converse is true of the DS14 cluster. One significant factor is that the CPU utilization of the DS4 cluster in the 1-replica and 2-replica tests was unevenly spread. Some nodes exhibited close to 100% utilization while others had spare processing capacity. The improvement in performance is most likely due to the increased ability to distribute processing across the nodes of the cluster. The following image shows the variation in CPU processing between the most lightly and heavily used VMs (nodes 4 and 3):
-
-![CPU utilization graph](./images/query-performance9.png)
-
-***CPU utilization for the least used and most used nodes in the DS4 cluster performing the query-only test***
-
-For the DS14 cluster this was not the case. CPU utilization for both tests was lower across all nodes, and the availability of a second replica became less of an advantage and more of an overhead:
-
-![CPU utilization graph](./images/query-performance10.png)
-
-***CPU utilization for the least used and most used nodes in the DS14 cluster performing the query-only test***
-
-These results show the need to benchmark your system carefully when deciding whether to use multiple replicas. You should always have at least one replica of each index (unless you are willing to risk losing data if a node fails), but additional replicas can impose a burden on the system for little benefit, depending on your workloads and the hardware resources available to the cluster.
-
-## Performance results - doc values
-The *ingestion and query* tests were conducted with doc values enabled, causing Elasticsearch to store data used for sorting fields on disk. The tests were repeated with doc values disabled, so Elasticsearch constructed fielddata dynamically and cached it in memory. All tests ran for 24 hours. The table below compares the response times for tests run against clusters of 6 nodes built using D4, DS4, and DS14 VMs (the D4 cluster uses regular hard disks, while the DS4 and DS14 clusters use SSDs).
-
-| Cluster | Operation/query | Average response time (ms) - doc values enabled | Average response time (ms) - doc values disabled | % difference in response time |
-| --- | --- | --- | --- | --- |
-| D4 |Ingestion |978 |835 |-15% |
-| |Count By Rating |103 |132 |+28% |
-| |Count Over Time |134 |189 |+41% |
-| |Hits By Country |199 |259 |+30% |
-| |Top 15 Organizations |137 |184 |+34% |
-| |Unique Count Organizations |139 |197 |+42% |
-| |Unique IP Count |510 |604 |+18% |
-| |Total Hits Count |89 |134 |+51% |
-| | | | | |
-| DS4 |Ingestion |511 |581 |+14% |
-| |Count By Rating |187 |190 |+2% |
-| |Count Over Time |411 |409 |-0.5% |
-| |Hits By Country |402 |414 |+3% |
-| |Top 15 Organizations |307 |284 |-7% |
-| |Unique Count Organizations |320 |313 |-2% |
-| |Unique IP Count |841 |955 |+14% |
-| |Total Hits Count |236 |281 |+19% |
-| | | | | |
-| DS14 |Ingestion |511 |571 |+12% |
-| |Count By Rating |201 |232 |+15% |
-| |Count Over Time |298 |341 |+14% |
-| |Hits By Country |363 |457 |+26% |
-| |Top 15 Organizations |244 |338 |+39% |
-| |Unique Count Organizations |283 |350 |+24% |
-| |Unique IP Count |681 |909 |+33% |
-| |Total Hits Count |200 |245 |+23% |
-
-The next table compares the number of ingestion operations performed by the tests:
-
-| Cluster | Ingestion operation count - doc values enabled | Ingestion operation count - doc values disabled | % difference in number of ingestion operations |
-| --- | --- | --- | --- |
-| D4 |264769 |408690 |+54% |
-| DS4 |503137 |578237 |+15% |
-| DS14 |502714 |586472 |+17% |
-
-Ingestion rates improve with doc values disabled because less data is written to disk as each document is inserted. The improvement is especially noticeable with the D4 VM using HDDs to store data. In this case, the response time for ingestion operations also decreased by 15% (see the first table in this section). This could be due to the reduced pressure on the HDDs, which were likely running close to their IOPS limits in the test with doc values enabled; see the Disk Type test for more information. The following graph compares the I/O performance of the D4 VMs with doc values enabled (values held on disk) and doc values disabled (values held in memory):
-
-![Disk activity graph](./images/query-performance11.png)
-
-***Disk activity for the D4 cluster with doc values enabled and disabled***
-
-In contrast, the ingestion values for the VMs using SSDs show a small increase in the number of documents but also an increase in the response time of the ingestion operations. With one or two small exceptions, the query response times were also worse. The SSDs are less likely to be running close to their IOPS limits with doc values enabled, so changes in performance are more likely due to increased processing activity and the overhead of managing the JVM heap. This is evident by comparing the CPU utilization with doc values enabled and disabled. The next graph highlights this data for the DS4 cluster, where most of the CPU utilization moves from the 30%-40% band with doc values enabled, to the 40%-50% band with doc values disabled (the DS14 cluster showed a similar trend):
-
-![CPU utilization graph](./images/query-performance12.png)
-
-***CPU utilization for the DS4 cluster with doc values enabled and disabled***
-
-To distinguish the effects of doc values on query performance from data ingestion, pairs of query-only tests were performed for the DS4 and DS14 clusters with doc values enabled and disabled. The table below summarizes the results of these tests:
-
-| Cluster | Operation/query | Average response time (ms) - doc values enabled | Average response time (ms) - doc values disabled | % difference in response time |
-| --- | --- | --- | --- | --- |
-| DS4 |Count By Rating |4489 |3736 |-16% |
-| |Count Over Time |7293 |5459 |-25% |
-| |Hits By Country |7564 |5930 |-22% |
-| |Top 15 Organizations |5066 |3874 |-24% |
-| |Unique Count Organizations |5231 |4483 |-14% |
-| |Unique IP Count |9228 |9474 |+3% |
-| |Total Hits Count |2180 |1218 |-44% |
-| | | | | |
-| DS14 |Count By Rating |1927 |2144 |+11% |
-| |Count Over Time |4483 |4337 |-3% |
-| |Hits By Country |4761 |4840 |+2% |
-| |Top 15 Organizations |2117 |2302 |+9% |
-| |Unique Count Organizations |2393 |2497 |+4% |
-| |Unique IP Count |7159 |7639 |+7% |
-| |Total Hits Count |642 |633 |-1% |
-
-Remember that from Elasticsearch 2.0 onwards, doc values are enabled by default. In the tests covering the DS4 cluster, disabling doc values appears to have a positive effect overall, whereas the converse is generally true for the DS14 cluster (the two cases where performance is better with doc values disabled are very marginal).
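-
-If you choose to rely on in-memory fielddata instead of the default, doc values must be switched off explicitly in the mapping when an index is created. A minimal sketch for a single field taken from the test mapping in the appendix:
-
-```http
-PUT /idx
-{
-    "mappings": {
-        "doc": {
-            "properties": {
-                "Rating": {
-                    "type": "string",
-                    "index": "not_analyzed",
-                    "doc_values": false
-                }
-            }
-        }
-    }
-}
-```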
-
-For the DS4 cluster, CPU utilization in both cases was close to 100% for the duration of both tests indicating that the cluster was CPU bound. However, the number of queries processed decreased from 7369 to 5894 (20%). Remember that if doc values are disabled Elasticsearch will dynamically generate fielddata in memory, and this consumes CPU power. This configuration has reduced the rate of disk I/O but increased stress on CPUs already running close to their maximum capabilities, so in this case queries are faster with doc values disabled but there are fewer of them.
-
-In the DS14 tests, CPU activity was high both with and without doc values, but not 100%. The number of queries performed was slightly higher (approximately 4%) in the tests with doc values disabled:
-
-| Cluster | Query | Number performed - doc values enabled | Number performed - doc values disabled |
-| --- | --- | --- | --- |
-| DS4 |Count By Rating |1054 |845 |
-| |Count Over Time |1054 |844 |
-| |Hits By Country |1053 |842 |
-| |Top 15 Organizations |1055 |846 |
-| |Unique Count Organizations |1051 |839 |
-| |Unique IP Count |1051 |839 |
-| |Total Hits Count |1051 |839 |
-| | | | |
-| DS14 |Count By Rating |1772 |1842 |
-| |Count Over Time |1772 |1839 |
-| |Hits By Country |1770 |1838 |
-| |Top 15 Organizations |1773 |1842 |
-| |Unique Count Organizations |1769 |1837 |
-| |Unique IP Count |1768 |1837 |
-| |Total Hits Count |1769 |1837 |
-
-## Performance results - shard request cache
-To demonstrate how caching index data in the memory of each node can impact performance, the *query and ingestion* test was conducted on a DS4 and a DS14 6-node cluster with index caching enabled - see the section [Using the shard request cache](#using-the-shard-request-cache) for more information. The results were compared to those generated by the earlier tests using the same index but with index caching disabled. The table below summarizes the results. Note that the data has been curtailed to cover only the first 90 minutes of the test; at this point the comparative trend was apparent, and continuing the test would probably not have yielded any additional insights:
-
-| Cluster | Operation/query | Average response time (ms) - index cache disabled | Average response time (ms) - index cache enabled | % difference in response time |
-| --- | --- | --- | --- | --- |
-| DS4 |Ingestion |504 |3260 |+547% |
-| |Count By Rating |218 |273 |+25% |
-| |Count Over Time |450 |314 |-30% |
-| |Hits By Country |447 |397 |-11% |
-| |Top 15 Organizations |342 |317 |-7% |
-| |Unique Count Organizations |370 |324 |-12% |
-| |Unique IP Count |760 |355 |-53% |
-| |Total Hits Count |258 |291 |+12% |
-| | | | | |
-| DS14 |Ingestion |503 |3365 |+569% |
-| |Count By Rating |234 |262 |+12% |
-| |Count Over Time |357 |298 |-17% |
-| |Hits By Country |416 |383 |-8% |
-| |Top 15 Organizations |272 |324 |+19% |
-| |Unique Count Organizations |330 |321 |-3% |
-| |Unique IP Count |674 |352 |-48% |
-| |Total Hits Count |227 |292 |+29% |
-
-This data shows two points of interest:
-
-* Data ingestion rates appear to be greatly diminished by enabling index caching.
-* Index caching does not necessarily improve the response time of all types of query, and can have an adverse effect on certain aggregate operations such as those performed by the Count By Rating and Total Hits Count queries.
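-
-For reference, the shard request cache used in these tests is controlled per index. A minimal sketch, assuming the Elasticsearch 2.x setting name:
-
-```http
-PUT /idx/_settings
-{
-    "index.requests.cache.enable": true
-}
-```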
-
-To understand why the system exhibits this behavior, you should consider the number of queries successfully performed in each case during the test runs. The following table summarizes this data:
-
-| Cluster | Operation/query | Operations/queries count - index cache disabled | Operations/queries count - index cache enabled |
-| --- | --- | --- | --- |
-| DS4 |Ingestion |38611 |13232 |
-| |Count By Rating |524 |18704 |
-| |Count Over Time |523 |18703 |
-| |Hits By Country |522 |18702 |
-| |Top 15 Organizations |521 |18706 |
-| |Unique Count Organizations |521 |18700 |
-| |Unique IP Count |521 |18699 |
-| |Total Hits Count |521 |18701 |
-| | | | |
-| DS14 |Ingestion |38769 |12835 |
-| |Count By Rating |528 |19239 |
-| |Count Over Time |528 |19239 |
-| |Hits By Country |528 |19238 |
-| |Top 15 Organizations |527 |19240 |
-| |Unique Count Organizations |524 |19234 |
-| |Unique IP Count |524 |19234 |
-| |Total Hits Count |527 |19236 |
-
-You can see that although the ingestion rate when caching was enabled was approximately one third of that when caching was disabled, the number of queries performed increased by a factor of roughly 36. Queries no longer incurred as much disk I/O and did not have to compete for disk resources. This is reflected by the graphs in the figure below that compare the I/O activity for all four cases:
-
-![Disk activity graph](./images/query-performance13.png)
-
-***Disk I/O activity for the ingestion and query test with index caching disabled and enabled***
-
-The decrease in disk I/O also meant that the CPU spent less time waiting for I/O to complete. This is highlighted by the next figure:
-
-![Disk wait times graph](./images/query-performance14.png)
-
-***CPU time spent waiting for disk I/O to complete for the ingestion and query test with index caching disabled and enabled***
-
-The reduction in disk I/O meant that Elasticsearch could spend a much greater proportion of its time servicing queries from data held in memory, which in turn increased CPU utilization. The graphs below show how CPU use was more sustained with caching enabled:
-
-![CPU utilization graph](./images/query-performance15.png)
-
-***CPU utilization for the ingestion and query test with index caching disabled and enabled***
-
-The volume of network I/O in both scenarios for the duration of the tests was broadly similar. The tests without caching showed a gradual degradation during the test period, but the longer, 24 hour runs of these tests showed that this statistic levelled off at approximately 2.75GBps. The image below shows this data for the DS4 clusters (the data for the DS14 clusters was very similar):
-
-![Network traffic graph](./images/query-performance16.png)
-
-***Network traffic volumes for the ingestion and query test with index caching disabled and enabled***
-
-As described in the [scaling up](#performance-results---scaling-up) test, the restrictions to network bandwidth with Azure VMs are not published and can vary, but the moderate levels of CPU and disk activity suggest that network utilization may be the limiting factor in this scenario.
-
-Caching is more naturally suited to scenarios where data changes infrequently. To highlight the impact of caching in this scenario, the *query-only* tests were performed with caching enabled. The results are shown below (these tests ran for 90 minutes, and the indexes under test contained 100 million documents):
-
-| Cluster | Query | Average response time (ms) - cache disabled | Average response time (ms) - cache enabled |
-| --- | --- | --- | --- |
-| DS4 |Count By Rating |4489 |210 |
-| |Count Over Time |7292 |211 |
-| |Hits By Country |7564 |231 |
-| |Top 15 Organizations |5066 |211 |
-| |Unique Count Organizations |5231 |211 |
-| |Unique IP Count |9228 |218 |
-| |Total Hits Count |2180 |210 |
-| | | | |
-| DS14 |Count By Rating |1927 |211 |
-| |Count Over Time |4483 |219 |
-| |Hits By Country |4761 |236 |
-| |Top 15 Organizations |2117 |212 |
-| |Unique Count Organizations |2393 |212 |
-| |Unique IP Count |7159 |220 |
-| |Total Hits Count |642 |211 |
-
-The variance in the performance of the noncached tests is due to the difference in resources available between the DS4 and DS14 VMs. In both cases of the cached test the average response time dropped significantly as data was being retrieved directly from memory. It is also worth noting that the response times for the cached DS4 and DS14 cluster tests were very similar despite the disparity with the noncached results. There is also very little difference between the response times for each query within each test; they all take approximately 220ms. The disk I/O rates and CPU utilization for both clusters were very low because once all the data is in memory, little I/O or processing is required. The network I/O rate was similar to that of the uncached tests, confirming that network bandwidth may be a limiting factor in this test. The following graphs present this information for the DS4 cluster. The profile of the DS14 cluster was very similar:
-
-![DS4 cluster performance graphs](./images/query-performance17.png)
-
-***Disk I/O, CPU utilization, and network utilization for the query-only test with index caching enabled***
-
-The figures in the table above suggest that using the DS14 architecture shows little benefit over using the DS4. In fact, the number of samples generated by the DS14 cluster was about 5% below that of the DS4 cluster, but this could also be due to network restrictions which can vary slightly over time.
-
-## Performance results - number of shards
-The purpose of this test was to determine whether the number of shards created for an index has any bearing on the query performance of that index.
-
-Separate tests conducted previously showed that the shard configuration of an index can impact the rate of data ingestion. The tests conducted to determine query performance followed a similar methodology, but were restricted to a 6-node cluster running on DS14 hardware. This approach helps to minimize the number of variables, so any differences in performance should be due to the number of shards.
-
-The *query-only* test was conducted on copies of the same index configured with 7, 13, 23, 37, and 61 primary shards. The index contained 100 million documents and had a single replica, doubling the number of shards across the cluster. Each test ran for 90 minutes. The following table summarizes the results. The average response time shown is the response time for the JMeter test transaction that encompasses the complete set of queries performed by each iteration of the test. See the note in the section [Performance results - scaling up](#performance-results---scaling-up) for more information:
-
-| Number of shards | Shard layout (shards per node, including replicas) | Number of queries performed | Avg response time (ms) |
-| --- | --- | --- | --- |
-| 7 (14 including replicas) |3-2-2-2-2-3 |7461 |40524 |
-| 13 (26) |5-4-5-4-4-4 |7369 |41055 |
-| 23 (46) |7-8-8-7-8-8 |14193 |21283 |
-| 37 (74) |13-12-12-13-12-12 |13399 |22506 |
-| 61 (122) |20-21-20-20-21-20 |14743 |20445 |
-
-These results indicate that there is a significant difference in performance between the 13(26) shard cluster and the 23(46) shard cluster: throughput nearly doubles and response times halve. This is most likely due to the configuration of the VMs and the structures that Elasticsearch uses to process search requests. Search requests are queued, and each search request is handled by a single search thread. The number of search threads created by an Elasticsearch node is a function of the number of processors available on the machine hosting the node. The results suggest that with only 4 or 5 shards on a node, processing resources are not being fully utilized. This is supported by the CPU utilization observed while running this test. The following image is a snapshot taken from Marvel while performing the 13(26) shard test:
-
-![CPU utilization](./images/query-performance18.png)
-
-***CPU utilization for the query-only test on the 13(26) shard cluster***
-
-Compare these figures with those of the 23(46) shard test:
-
-![CPU utilization](./images/query-performance19.png)
-
-***CPU utilization for the query-only test on the 23(46) shard cluster***
-
-In the 23(46) shard test, CPU utilization was far higher. Each node contains 7 or 8 shards. The DS14 architecture provides 16 processors, and Elasticsearch is better able to exploit this number of cores with the additional shards. The figures in the table above suggest that increasing the number of shards beyond this point may improve performance slightly, but you should offset these figures against the additional overhead of maintaining a high volume of shards. These tests imply that the optimal number of shards per node is half the number of processor cores available on each node. However, remember that these results were achieved when only running queries. If your system imports data, you should also consider how sharding can impact the performance of data ingestion operations.
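-
-Because the number of primary shards is fixed when an index is created, the shard counts compared above must be chosen up front. A minimal sketch for the best-performing configuration in these tests:
-
-```http
-PUT /idx
-{
-    "settings": {
-        "number_of_shards": 23,
-        "number_of_replicas": 1
-    }
-}
-```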
-
-## Summary
-Elasticsearch provides many options that you can use to structure indexes and tune them to support large-scale query operations. This document has summarized some common configurations and techniques that you can use to tune your database for query purposes. However, you should recognize that there is a tradeoff between optimizing a database to support fast retrieval as opposed to supporting high-volume data ingestion. Sometimes what is good for querying can have a detrimental impact on insert operations and vice versa. In a system that is exposed to mixed workloads, you need to assess where the balance lies and adjust the system parameters accordingly.
-
-Additionally, the applicability of different configurations and techniques can vary depending on the structure of the data and the limitations (or otherwise) of the hardware the system is constructed on. Many of the tests shown in this document illustrate how the selection of the hardware platform can impact throughput, and also how some strategies can be beneficial in certain cases but detrimental in others. The important point is to understand the options available and then perform rigorous benchmarking using your own data to determine the optimal combination.
-
-Finally, remember that an Elasticsearch database is not necessarily a static item. It will likely grow over time, and the strategies used to structure the data may need to be revised regularly. For example, it may be necessary to scale up, scale out, or reindex data with additional shards. As the system increases in size and complexity, be prepared to continually test performance to ensure that you are still meeting any SLAs guaranteed to your customers.
-
-## Appendix: the query and aggregation performance test
-This appendix describes the performance test performed against the Elasticsearch cluster. The tests were run using JMeter on a separate set of VMs. Details of the configuration of the test environment are described in [Creating a performance testing environment for Elasticsearch on Azure](./performance-testing-environment.md). To perform your own testing, you can create your own JMeter test plan manually following the guidance in this appendix, or you can use the automated test scripts available separately. For more information, see [Running the automated Elasticsearch performance tests](./automated-performance-tests.md).
-
-The data query workload performed the set of queries described below while performing a large-scale upload of documents at the same time. The purpose of this workload was to simulate a production environment where new data is constantly being added while searches are performed. The queries were structured to retrieve only the most recent data from documents added in the last 15 minutes.
-
-Each document was stored in a single index named *idx*, and had the type *doc*. You can use the following HTTP request to create the index. The *number_of_replicas* and *number_of_shards* settings varied from the values shown below in many of the tests. Additionally, for the tests that used fielddata rather than doc values, each property was annotated with the attribute *"doc_values" : false*.
-
-**Important**: The index was dropped and recreated prior to each test run.
-
-``` http
-PUT /idx
-{
- "settings" : {
- "number_of_replicas": 1,
- "refresh_interval": "30s",
- "number_of_shards": "5",
- "index.translog.durability": "async"
- },
-   "mappings": {
-      "doc": {
-         "_all": {
-            "enabled": false
-         },
-         "_timestamp": {
-            "enabled": true,
-            "store": true,
-            "format": "date_time"
-         },
-         "properties": {
-            "Organization": {
-               "type": "string",
-               "index": "not_analyzed"
-            },
-            "CustomField1": {
-               "type": "string",
-               "index": "not_analyzed"
-            },
-            "CustomField2": {
-               "type": "string",
-               "index": "not_analyzed"
-            },
-            "CustomField3": {
-               "type": "string",
-               "index": "not_analyzed"
-            },
-            "CustomField4": {
-               "type": "string",
-               "index": "not_analyzed"
-            },
-            "CustomField5": {
-               "type": "string",
-               "index": "not_analyzed"
-            },
-            "DateTimeReceivedUtc": {
-               "type": "date",
-               "format": "dateOptionalTime"
-            },
-            "Host": {
-               "type": "string",
-               "index": "not_analyzed"
-            },
-            "HttpMethod": {
-               "type": "string",
-               "index": "not_analyzed"
-            },
-            "HttpReferrer": {
-               "type": "string",
-               "index": "not_analyzed"
-            },
-            "HttpRequest": {
-               "type": "string",
-               "index": "not_analyzed"
-            },
-            "HttpUserAgent": {
-               "type": "string",
-               "index": "not_analyzed"
-            },
-            "HttpVersion": {
-               "type": "string",
-               "index": "not_analyzed"
-            },
-            "OrganizationName": {
-               "type": "string",
-               "index": "not_analyzed"
-            },
-            "SourceIp": {
-               "type": "ip"
-            },
-            "SourceIpAreaCode": {
-               "type": "long"
-            },
-            "SourceIpAsnNr": {
-               "type": "string",
-               "index": "not_analyzed"
-            },
-            "SourceIpBase10": {
-               "type": "long"
-            },
-            "SourceIpCity": {
-               "type": "string",
-               "index": "not_analyzed"
-            },
-            "SourceIpCountryCode": {
-               "type": "string",
-               "index": "not_analyzed"
-            },
-            "SourceIpLatitude": {
-               "type": "double"
-            },
-            "SourceIpLongitude": {
-               "type": "double"
-            },
-            "SourceIpMetroCode": {
-               "type": "long"
-            },
-            "SourceIpPostalCode": {
-               "type": "string",
-               "index": "not_analyzed"
-            },
-            "SourceIpRegion": {
-               "type": "string",
-               "index": "not_analyzed"
-            },
-            "SourceLatLong": {
-               "type": "geo_point",
-               "doc_values": true,
-               "lat_lon": true,
-               "geohash": true
-            },
-            "SourcePort": {
-               "type": "string",
-               "index": "not_analyzed"
-            },
-            "SourcedFrom": {
-               "type": "string",
-               "index": "not_analyzed"
-            },
-            "TargetIp": {
-               "type": "ip"
-            },
-            "TargetPort": {
-               "type": "string",
-               "index": "not_analyzed"
-            },
-            "Rating": {
-               "type": "string",
-               "index": "not_analyzed"
-            },
-            "UseHumanReadableDateTimes": {
-               "type": "boolean"
-            }
-         }
-      }
-   }
-}
-```
-
-The following queries were performed by the test:
-
-* How many documents with each Rating value have been entered in the last 15 minutes?
-
- ```http
- GET /idx/doc/_search
- {
- "query": {
- "bool": {
- "must": [
- {
- "range": {
- "DateTimeReceivedUtc": {
- "gte": "now-15m",
- "lte": "now"
- }
- }
- }
- ],
- "must_not": [],
- "should": []
- }
- },
- "from": 0,
- "size": 0,
- "aggs": {
- "2": {
- "terms": {
- "field": "Rating",
- "size": 5,
- "order": {
- "_count": "desc"
- }
- }
- }
- }
- }
- ```
-* How many documents have been added in each 5 minute interval during the last 15 minutes?
-
- ```http
- GET /idx/doc/_search
- {
- "query": {
- "bool": {
- "must": [
- {
- "range": {
- "DateTimeReceivedUtc": {
- "gte": "now-15m",
- "lte": "now"
- }
- }
- }
- ],
- "must_not": [],
- "should": []
- }
- },
- "from": 0,
- "size": 0,
- "sort": [],
- "aggs": {
- "2": {
- "date_histogram": {
- "field": "DateTimeReceivedUtc",
- "interval": "5m",
- "time_zone": "America/Los_Angeles",
- "min_doc_count": 1,
- "extended_bounds": {
- "min": "now-15m",
- "max": "now"
- }
- }
- }
- }
- }
- ```
-* How many documents of each Rating value have been added for each country in the last 15 minutes?
-
- ```HTTP
- GET /idx/doc/_search
- {
- "query": {
- "filtered": {
- "query": {
- "query_string": {
- "query": "*",
- "analyze_wildcard": true
- }
- },
- "filter": {
- "bool": {
- "must": [
- {
- "query": {
- "query_string": {
- "query": "*",
- "analyze_wildcard": true
- }
- }
- },
- {
- "range": {
- "DateTimeReceivedUtc": {
- "gte": "now-15m",
- "lte": "now"
- }
- }
- }
- ],
- "must_not": []
- }
- }
- }
- },
- "size": 0,
- "aggs": {
- "2": {
- "terms": {
- "field": "Rating",
- "size": 5,
- "order": {
- "_count": "desc"
- }
- },
- "aggs": {
- "3": {
- "terms": {
- "field": "SourceIpCountryCode",
- "size": 15,
- "order": {
- "_count": "desc"
- }
- }
- }
- }
- }
- }
- }
- ```
-* Which 15 organizations occur most frequently in documents added in the last 15 minutes?
-
- ```http
- GET /idx/doc/_search
- {
- "query": {
- "filtered": {
- "query": {
- "query_string": {
- "query": "*",
- "analyze_wildcard": true
- }
- },
- "filter": {
- "bool": {
- "must": [
- {
- "query": {
- "query_string": {
- "query": "*",
- "analyze_wildcard": true
- }
- }
- },
- {
- "range": {
- "DateTimeReceivedUtc": {
- "gte": "now-15m",
- "lte": "now"
- }
- }
- }
- ],
- "must_not": []
- }
- }
- }
- },
- "size": 0,
- "aggs": {
- "2": {
- "terms": {
- "field": "Organization",
- "size": 15,
- "order": {
- "_count": "desc"
- }
- }
- }
- }
- }
- ```
-* How many different organizations occur in documents added in the last 15 minutes?
-
- ```http
- GET /idx/doc/_search
- {
- "query": {
- "filtered": {
- "query": {
- "query_string": {
- "query": "*",
- "analyze_wildcard": true
- }
- },
- "filter": {
- "bool": {
- "must": [
- {
- "query": {
- "query_string": {
- "query": "*",
- "analyze_wildcard": true
- }
- }
- },
- {
- "range": {
- "DateTimeReceivedUtc": {
- "gte": "now-15m",
- "lte": "now"
- }
- }
- }
- ],
- "must_not": []
- }
- }
- }
- },
- "size": 0,
- "aggs": {
- "2": {
- "cardinality": {
- "field": "Organization"
- }
- }
- }
- }
- ```
-* How many documents have been added in the last 15 minutes?
-
- ```http
- GET /idx/doc/_search
- {
- "query": {
- "filtered": {
- "query": {
- "query_string": {
- "query": "*",
- "analyze_wildcard": true
- }
- },
- "filter": {
- "bool": {
- "must": [
- {
- "query": {
- "query_string": {
- "analyze_wildcard": true,
- "query": "*"
- }
- }
- },
- {
- "range": {
- "DateTimeReceivedUtc": {
- "gte": "now-15m",
- "lte": "now"
- }
- }
- }
- ],
- "must_not": []
- }
- }
- }
- },
- "size": 0,
- "aggs": {}
- }
- ```
-* How many different SourceIp values occur in documents added in the last 15 minutes?
-
- ```http
- GET /idx/doc/_search
- {
- "query": {
- "filtered": {
- "query": {
- "query_string": {
- "query": "*",
- "analyze_wildcard": true
- }
- },
- "filter": {
- "bool": {
- "must": [
- {
- "query": {
- "query_string": {
- "query": "*",
- "analyze_wildcard": true
- }
- }
- },
- {
- "range": {
- "DateTimeReceivedUtc": {
- "gte": "now-15m",
- "lte": "now"
- }
- }
- }
- ],
- "must_not": []
- }
- }
- }
- },
- "size": 0,
- "aggs": {
- "2": {
- "cardinality": {
- "field": "SourceIp"
- }
- }
- }
- }
- ```
-
diff --git a/docs/elasticsearch/data-ingestion-performance.md b/docs/elasticsearch/data-ingestion-performance.md
deleted file mode 100644
index ddc7445bdc7..00000000000
--- a/docs/elasticsearch/data-ingestion-performance.md
+++ /dev/null
@@ -1,1186 +0,0 @@
----
-title: Tune data ingestion performance for Elasticsearch on Azure
-description: How to maximize data ingestion performance with Elasticsearch on Azure.
-author: dragon119
-ms.date: 09/22/2016
-
-pnp.series.title: Elasticsearch on Azure
-pnp.series.prev: index
-pnp.series.next: data-aggregation-and-query-performance
----
-# Tune data ingestion performance
-[!INCLUDE [header](../_includes/header.md)]
-
-## Overview
-An important aspect when creating any search database is to determine the best way to structure the system to ingest searchable data quickly and efficiently. The considerations surrounding this requirement
-concern not only the choice of infrastructure on which you implement the system, but also the various
-optimizations that you can use to help ensure that the system can keep up with the expected levels of
-data influx.
-
-This document describes the deployment and configuration options that you should consider
-for implementing an Elasticsearch cluster that expects a high rate of data ingestion. To provide solid
-data for illustrative purposes, this document also shows the results of benchmarking various
-configurations using a simple high-volume data ingestion workload. The details of the workload are
-described in the [Appendix](#appendix-the-bulk-load-data-ingestion-performance-test) at the end of this
-document.
-
-The purpose of the benchmarks was not to generate absolute performance figures for running Elasticsearch
-or even to recommend a particular topology, but rather to illustrate methods that you can use for
-assessing performance, sizing data nodes, and implementing clusters that can meet your own performance
-requirements.
-
-When sizing your own systems, it is important to test performance thoroughly based on your
-own workloads. Gather telemetry that enables you to obtain information about the optimal hardware
-configuration to use, and the horizontal scaling factors that you should consider. In particular, you
-should:
-
-* Consider the overall size of the payload sent and not just the number of items in each bulk insert
-  request. A smaller number of large bulk items in each request could be more optimal than a larger number,
-  depending on the resources available to process each request.
-
-  You can monitor the effects of varying the bulk insert request by using [Marvel](https://www.elastic.co/products/marvel), by using the
-  *readbytes*/*writebytes* I/O counters with [JMeter](https://jmeter.apache.org/), and by using operating system tools such as *iostat* and
-  *vmstat* on Ubuntu.
-
-* Conduct performance testing and gather telemetry to measure CPU processing and I/O wait times, disk
- latency, throughput, and response times. This information can help to identify potential bottlenecks and
- assess the costs and benefits of using premium storage. Bear in mind that CPU and disk utilization might
- not be even across all nodes depending on the way in which shards and replicas are distributed across the
- cluster (some nodes can contain more shards than others).
-* Consider how the number of concurrent requests for your workload will be distributed across the cluster
- and assess the impact of using different numbers of nodes to handle this workload.
-* Consider how workloads might grow as the business expands. Assess the impact of this growth on the
- costs of the VMs and storage used by the nodes.
-* Recognize that using a cluster with a larger number of nodes with regular disks might be more
- economical if your scenario requires a high number of requests and the disk infrastructure maintains
- throughput that satisfies your service level agreements (SLAs). However, increasing the number of nodes can introduce overhead in
- the form of additional inter-node communications and synchronization.
-* Understand that a higher number of cores per node may generate more disk traffic as more documents can
- be processed. In this case, measure disk utilization to assess whether the I/O subsystem might become a
- bottleneck and determine the benefits of using premium storage.
-* Test and analyze the trade-offs with a higher number of nodes with fewer cores versus fewer nodes with
- more cores. Keep in mind that increasing the number of replicas escalates the demands on the cluster
- and may require you to add nodes.
-* Consider that using ephemeral disks might mean that indexes have to be recovered more frequently.
-* Measure storage volume usage to assess capacity and underutilization of storage. For example, in our
- scenario we stored 1.5 billion documents using 350GB storage.
-* Measure the transfer rates for your workloads and consider how close you are likely to get to the total
- I/O rate transfer limit for any given storage account in which you have created virtual disks.
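The capacity arithmetic suggested above can be sketched as follows (the 500-IOPS-per-standard-disk figure is quoted later in this guidance; the disk counts and the workload's expected IOPS are illustrative assumptions):

```python
# Rough capacity check: how much of the available IOPS budget does a
# workload consume? All workload figures below are illustrative assumptions.
STANDARD_DISK_IOPS = 500  # throttling limit per standard-storage disk

def iops_budget_used(disks_per_node, expected_iops_per_node):
    """Fraction of the per-node IOPS budget consumed by the workload."""
    budget = disks_per_node * STANDARD_DISK_IOPS
    return expected_iops_per_node / budget

# Example: 4 striped data disks per node, ~1,500 IOPS expected per node.
print(f"{iops_budget_used(4, 1500):.0%} of the IOPS budget used")
```

If the fraction approaches 100%, consider attaching more disks, moving to premium storage, or adding nodes.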
-
-## Node and index design
-In a system that must support large-scale data ingestion, ask the following questions:
-
-* **Is the data fast-moving or relatively static?** The more dynamic the data, the greater the
- maintenance overhead for Elasticsearch. If the data is replicated, each replica is maintained
- synchronously. Fast-moving data that has only a limited lifespan or that can easily be reconstructed
- might benefit from disabling replication altogether. This option is discussed in the section
- [Tuning large-scale data ingestion.](#tuning-large-scale-data-ingestion)
-* **How up-to-date do you require the data discovered by searching to be?** To maintain performance,
- Elasticsearch buffers as much data in memory as it can. This means that not all changes are immediately
- available for search requests. The process Elasticsearch uses to persist changes and make them visible
- is described in [Making Changes Persistent](https://www.elastic.co/guide/en/elasticsearch/guide/current/translog.html#translog).
-
-  The rate at which data becomes visible is governed by the *refresh\_interval* setting of the relevant
-  index. By default, this interval is set to 1 second. However, not every situation requires refreshes to
- occur this quickly. For example, indexes recording log data might need to cope with a rapid and continual
- influx of information which needs to be ingested quickly, but does not require the information to be
- immediately available for querying. In this case, consider reducing the frequency of refreshes. This
- feature is also described in the section [Tuning large-scale data ingestion.](#tuning-large-scale-data-ingestion)
-* **How quickly is the data likely to grow?** Index capacity is determined by the number of shards
- specified when the index is created. To allow for growth, specify an adequate number of shards (the
- default is five). If the index is initially created on a single node, all five shards will be located on
- that node, but as the volume of data grows additional nodes can be added and Elasticsearch will
- dynamically distribute shards across nodes. However, each shard has an overhead. All searches in an index
- will query all shards, so creating a large number of shards for a small amount of data can slow data
- retrievals (avoid the [Kagillion shards](https://www.elastic.co/guide/en/elasticsearch/guide/current/kagillion-shards.html) scenario).
-
- Some workloads (such as logging) might create a new index each day, and if you observe that the number of
- shards is insufficient for the volume of data, you should change it prior to creating the next index
- (existing indexes will be unaffected). If you must distribute existing data across more shards, then one
- option is to reindex the information. Create a new index with the appropriate configuration and copy the
- data into it. This process can be made transparent to applications by using [index aliases](https://www.elastic.co/guide/en/elasticsearch/reference/current/indices-aliases.html).
-* **Does data need to be partitioned between users in a multitenancy scenario?** You can create separate
- indexes for each user, but this can be expensive if each user only has a moderate amount of data. Instead,
- consider creating [shared indexes](https://www.elastic.co/guide/en/elasticsearch/guide/current/shared-index.html) and use [aliases based on filters](https://www.elastic.co/guide/en/elasticsearch/guide/current/faking-it.html) to direct requests
- to the per-user data. To keep the data for a user together in the same shard, override the default
- routing configuration for the index and route data based on some identifying attribute of the user.
-* **Is data long or short-lived?** If you are using a set of Azure VMs to implement an Elasticsearch
- cluster, you can store ephemeral data on a local resource system disk rather than an attached drive.
- Using a VM SKU that utilizes an SSD for the resource disk can improve I/O performance. However, any
- information held on the resource disk is temporary and may be lost if the VM restarts (see the section
- When Will the Data on a Temporary Drive Be Lost in [Understanding the temporary drive on Microsoft Azure Virtual Machines](http://blogs.msdn.com/b/mast/archive/2013/12/07/understanding-the-temporary-drive-on-windows-azure-virtual-machines.aspx) for more details). If you need to retain data between restarts, create data disks to hold this information and attach them to the VM.
-* **How active is the data?** Azure VHDs are subject to throttling if the amount of read/write activity
- exceeds specified parameters (currently 500 input/output operations per second (IOPS) for a disk attached to a Standard Tier VM, and 5000 IOPS
- for a Premium Storage disk).
-
- To reduce the chances of throttling and increase I/O performance, consider creating multiple data disks
- for each VM and configure Elasticsearch to stripe data across these disks as described in the [Disk and file system requirements](index.md#disk-and-file-system-requirements).
-
- You should select a hardware configuration that helps to minimize the number of disk I/O
- read operations by ensuring that sufficient memory is available to cache frequently accessed data. This
- is described in the [Memory requirements](index.md#memory-requirements) section of Running Elasticsearch on Azure.
-* **What type of workload will each node need to support?** Elasticsearch benefits from having memory
- available to cache data in (in the form of the file system cache) and for the JVM heap as described
- in the [Memory requirements](index.md#memory-requirements) section of Running Elasticsearch on Azure.
-
- The amount of memory, number of CPU cores, and quantity of available disks are set by the
- SKU of the virtual machine. For more information, see [Virtual Machines Pricing](http://azure.microsoft.com/pricing/details/virtual-machines/) on the Azure website.
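To make the *refresh\_interval* trade-off above concrete, here is a minimal sketch that builds the JSON body for a `PUT <index>/_settings` request (the 30-second interval is an illustrative assumption, and no live cluster is contacted; this constructs the request body only):

```python
import json

# Reducing the refresh rate trades search freshness for ingestion throughput.
# This only constructs the request body; sending it (for example with curl)
# is left to your deployment tooling.
settings = {"index": {"refresh_interval": "30s"}}  # Elasticsearch default is "1s"
body = json.dumps(settings)
print(body)
```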
-
-### Virtual machine options
-You can provision VMs in Azure using a number of different SKUs. The resources available to an Azure VM
-depend on SKU selected. Each SKU offers a different mix of cores, memory, and storage. You need to select
-an appropriate size of VM that will handle the expected workload but that will also prove cost-effective.
-Start with a configuration that will meet your current requirements (perform benchmarking to test, as
-described later in this document). You can scale a cluster later by adding more VMs running Elasticsearch
-nodes.
-
-[Sizes for Virtual Machines](/azure/virtual-machines/virtual-machines-linux-sizes/?toc=%2fazure%2fvirtual-machines%2flinux%2ftoc.json) on the Azure website documents the
-various options and SKUs available for VMs.
-
-You should match the size and resources of a VM to the role that nodes running on the VM will perform.
-
-For a data node:
-
-* Allocate up to 30 GB or 50% of the available RAM to the Java heap, whichever is lower. Leave
- the remainder to the operating system to use for caching files. If you are using Linux, you can specify
- the amount of memory to allocate to the Java heap by setting the ES\_HEAP\_SIZE environment variable
- before running Elasticsearch. Alternatively, if you are using Windows or Linux, you can stipulate memory
- size with the *Xmx* and *Xms* parameters when you start Elasticsearch.
-
- Depending on the workload, fewer large VMs may not be as effective for performance as using a larger
- number of moderately sized VMs. You should conduct tests that can measure the tradeoffs between the
- additional network traffic and maintenance involved versus the costs of increasing the number of cores
- available and the reduced disk contention on each node.
-* Use premium storage for storing Elasticsearch data. This is discussed in more detail in the [Storage options](#storage-options) section.
-* Use multiple disks of the same size and stripe data across these disks. The SKU of your VMs will dictate the maximum number of data disks that you can attach. For more information, see [Disk and file system requirements](index.md#disk-and-file-system-requirements).
-* Use a multi-core CPU SKU with at least 2 cores, preferably 4 or more.
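The heap-sizing rule for data nodes above (the lower of 30 GB or half of the available RAM) can be sketched as:

```python
# Allocate the lower of 30 GB or 50% of RAM to the JVM heap; the remainder
# is left for the operating system's file cache.
def heap_size_gb(ram_gb):
    return min(30, ram_gb / 2)

# Examples using D-series RAM sizes mentioned elsewhere in this document.
for ram in (14, 28, 112):
    print(f"{ram} GB RAM -> ES_HEAP_SIZE {heap_size_gb(ram)} GB")
```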
-
-For a client node:
-
-* Do not allocate disk storage for Elasticsearch data; dedicated clients do not store data on disk.
-* Ensure that adequate memory is available to handle workloads. Bulk insert requests are read into memory
- prior to the data being sent to the various data nodes, and the results of aggregations and queries are
- accumulated in memory before being returned to the client application. Benchmark your own workloads and
- monitor memory use by using a tool such as Marvel or the [JVM information](https://www.elastic.co/guide/en/elasticsearch/guide/current/_monitoring_individual_nodes.html#_jvm_section) returned by using the *node/stats* API (`GET _nodes/stats`) to assess the optimal requirements. In particular, monitor the *heap\_used\_percent* metric for each node and aim to keep the
- heap size below 75% of the space available.
-* Ensure that sufficient CPU cores are available to receive and process the expected volume of requests.
- Requests are queued as they are received prior to processing, and the volume of items that can be queued
- is a function of the number of CPU cores on each node. You can monitor the queue lengths by using the
- data in the [Threadpool information](https://www.elastic.co/guide/en/elasticsearch/guide/current/_monitoring_individual_nodes.html#_threadpool_section) returned by using the node/stats API.
-
-  If the *rejected* count for a queue is increasing, requests are being refused and the cluster is
-  starting to bottleneck. This may be due to CPU bandwidth, but may also be due to
- other factors such as lack of memory or slow I/O performance, so use this information in conjunction
- with other statistics to help determine the root cause.
-
- Client nodes may or may not be necessary, depending on your workloads. Data ingestion workloads tend
-  not to benefit from using dedicated clients, whereas some searches and aggregations can run more
-  quickly with them. Be prepared to benchmark your own scenarios.
-
- Client nodes are primarily useful for applications that use the Transport Client API to connect to
- the cluster. You can also use the Node Client API, which dynamically creates a dedicated client for
- the application, using the resources of the application host environment. If your applications use
- the Node Client API, then it may not be necessary for your cluster to contain preconfigured dedicated
- client nodes.
-
-  However, be aware that a node created using the Node Client API is a first-class member
- of the cluster and as such participates in the network chatter with other nodes. Frequently starting
- and stopping client nodes can create unnecessary noise across the entire cluster.
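The *heap\_used\_percent* check described above can be sketched against the shape of the `GET _nodes/stats` response (the node names and values below are illustrative sample data, not live output):

```python
# Illustrative fragment shaped like the relevant part of GET _nodes/stats.
sample_stats = {
    "nodes": {
        "abc123": {"name": "client-0", "jvm": {"mem": {"heap_used_percent": 68}}},
        "def456": {"name": "client-1", "jvm": {"mem": {"heap_used_percent": 81}}},
    }
}

def nodes_over_heap_threshold(stats, threshold=75):
    """Names of nodes whose JVM heap usage exceeds the recommended ceiling."""
    return [node["name"] for node in stats["nodes"].values()
            if node["jvm"]["mem"]["heap_used_percent"] > threshold]

print(nodes_over_heap_threshold(sample_stats))
```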
-
-For a master node:
-
-* Do not allocate disk storage for Elasticsearch data; dedicated master nodes do not store data on disk.
-* CPU requirements should be minimal.
-* Memory requirements depend on the size of the cluster. Information about the state of the cluster is
- retained in memory. For small clusters the amount of memory required is minimal, but for a large, highly
-  active cluster where indexes are being created frequently and shards are moving around, the amount of state
- information can grow significantly. Monitor the JVM heap size to determine whether you need to add more
- memory.
-
-> [!NOTE]
-> For cluster reliability, always create multiple master nodes and configure the
-> remaining nodes to avoid the possibility of a split brain occurring. Ideally, there should be an
-> odd number of master nodes. This topic is described in more detail in [Configuring resilience and recovery on Elasticsearch on Azure][Configuring resilience and recovery on Elasticsearch on Azure].
->
->
-
-### Storage options
-There are a number of storage options available on Azure VMs with different trade-offs affecting cost, performance, availability, and recovery that you need to consider carefully.
-
-Note that you should store Elasticsearch data on dedicated data disks. This will help to reduce
-contention with the operating system and ensure that large volumes of Elasticsearch I/O do not compete
-with operating system functions for I/O resources.
-
-Azure disks are subject to performance constraints. If you find that a cluster undergoes periodic bursts
-of activity then I/O requests may be throttled. To help prevent this, tune your design to balance the
-document size in Elasticsearch against the volume of requests likely to be received by each disk.
-
-Disks based on standard storage support a maximum request rate of 500 IOPS whereas disks based on premium
-storage can operate at up to 5,000 IOPS, depending on the size of the data disks. Premium storage disks are only available for the DS and GS
-series of VMs. To find the maximum disk IOPS by VM size, see [Sizes for virtual machines in Azure](/azure/virtual-machines/virtual-machines-linux-sizes/?toc=%2fazure%2fvirtual-machines%2flinux%2ftoc.json). Performance on premium storage is determined by the VM size together with disk size allocation. For more information, see [Premium Storage: High-Performance Storage for Azure Virtual Machine Workloads](/azure/storage/storage-premium-storage/).
-
-**Persistent data disks**
-
-Persistent data disks are VHDs that are backed by Azure Storage. If the VM needs to be recreated after a
-major failure, existing VHDs can be easily attached to the new VM. VHDs can be created based on standard
-storage (spinning media) or premium storage (SSDs). If you wish to use SSDs you must create VMs using the
-DS series or better. DS machines cost the same as the equivalent D-series VMs, but you are charged extra
-for using premium storage.
-
-In cases where the maximum transfer rate per disk is insufficient to support the expected workload,
-consider either creating multiple data disks and allow Elasticsearch to [stripe data across these disks](index.md#disk-and-file-system-requirements), or implement system level [RAID 0 striping using virtual disks](/azure/virtual-machines/virtual-machines-linux-configure-raid/?toc=%2fazure%2fvirtual-machines%2flinux%2ftoc.json).
-
-> [!NOTE]
-> Experience within Microsoft has shown that using RAID 0 is particularly beneficial for
-> smoothing out the I/O effects of *spiky* workloads that generate frequent bursts of activity.
->
->
-
-Use premium locally redundant storage (or standard locally redundant storage for low-end or QA
-workloads) for the storage account holding the disks; replicating across geographies and zones is not
-required for Elasticsearch high availability.
-
-**Ephemeral disks**
-
-Using persistent disks based on SSDs requires creating VMs that support premium storage. This has a price
-implication. Using the local ephemeral disk to hold Elasticsearch data can be a cost effective solution
-for moderately sized nodes requiring up to approximately 800GB of storage. On the Standard-D series of
-VMs, ephemeral disks are implemented using SSDs which provide far greater performance and much lower
-latency than ordinary disks.
-
-When using Elasticsearch, the performance can be equivalent to using premium storage without incurring
-the cost – see the section [Addressing disk latency issues](#addressing-disk-latency-issues) for more
-information.
-
-The size of the VM limits the amount of space available in ephemeral storage as described in the blog post
-[D-Series Performance Expectations](https://azure.microsoft.com/blog/d-series-performance-expectations/).
-
-For example, a Standard\_D1 VM provides 50GB of ephemeral storage, a Standard\_D2 VM has 100GB of ephemeral storage, and a Standard\_D14 VM provides 800GB of ephemeral space. For clusters where nodes only require this amount of space, using a D-series VM with ephemeral storage can be cost effective.
-
-You must balance the increased throughput available with ephemeral storage against the time and costs
-involved in recovering this data after a machine restart. The contents of the ephemeral disk are lost if
-the VM is moved to a different host server, if the host is updated, or if the host experiences a hardware
-failure. If the data itself has a limited lifespan then this data loss might be tolerable. For
-longer-lived data, it may be possible to rebuild an index or recover the missing information from a
-backup. It is possible to minimize the potential for loss by using replicas held on other VMs.
-
-> [!NOTE]
-> Do not use a **single** VM to hold critical production data. If the node fails,
-> all of the data is unavailable. For critical information, ensure that data is replicated on at least
-> one other node.
->
->
-
-**Azure files**
-
-The [Azure File Service](http://blogs.msdn.com/b/windowsazurestorage/archive/2014/05/12/introducing-microsoft-azure-file-service.aspx) provides shared file access using Azure Storage. You can create file shares that you can then mount on Azure VMs. Multiple VMs can mount the same file share, enabling them to access the same data.
-
-For performance reasons, it is not recommended that you use file shares for holding Elasticsearch data that does not need to be shared across nodes; regular data disks are more suited to this purpose. File shares can be used for creating Elasticsearch [shadow replica indexes](https://www.elastic.co/guide/en/elasticsearch/reference/current/indices-shadow-replicas.html). However, this feature is currently experimental and should not be implemented in a production environment at this time. For this reason, shadow indexes are not considered further in this guidance.
-
-**Network options**
-
-Azure implements a shared networking scheme. VMs utilizing the same hardware racks compete for network resources. Therefore, the available network bandwidth can vary according to the time of day and the daily cycle of work running on VMs sharing the same physical network infrastructure. You have little control over these factors. It is important to understand that network performance is likely to fluctuate over time, so set user expectations accordingly.
-
-## Scaling up nodes to support large-scale data ingestion
-You can build Elasticsearch clusters using reasonably moderate hardware, and then scale up or scale out
-as the volume of data grows and the number of requests increases. With Azure, you scale up by running on
-bigger and more expensive VMs, or you can scale out by using additional smaller and cheaper VMs.
-
-You might also perform a combination of both strategies. There is no one-size-fits-all solution for all
-scenarios, so to assess the best approach for any given situation you need to be prepared to undertake a
-series of performance tests.
-
-This section is concerned with the scale-up approach; scaling out is discussed in the section
-[Scaling out: conclusions](#scaling-out-conclusions).
-This section describes the results of a series of benchmarks that were performed against a set of
-Elasticsearch clusters containing VMs of varying sizes. The clusters were designated as small, medium,
-and large. The following table summarizes the resources allocated to the VMs in each cluster.
-
-| Cluster | VM SKU | Number of cores | Number of data disks | RAM |
-| --- | --- | --- | --- | --- |
-| Small |Standard D2 |2 |4 |7GB |
-| Medium |Standard D3 |4 |8 |14GB |
-| Large |Standard D4 |8 |16 |28GB |
-
-Each Elasticsearch cluster contained 3 data nodes. These data nodes handled client requests as well as
-data processing. Separate client nodes were not used because they offered little benefit to the
-data ingestion scenario used by the tests. The cluster also contained three master nodes, one of which
-was elected by Elasticsearch to coordinate the cluster.
-
-The tests were performed using Elasticsearch 1.7.3. The tests were initially performed on clusters
-running Ubuntu Linux 14.04, and then repeated using Windows Server 2012. The details of the workload
-performed by the tests are described in the [Appendix](#appendix-the-bulk-load-data-ingestion-performance-test).
-
-### Data ingestion performance – Ubuntu Linux 14.04
-The following table summarizes the overall results of running the tests for two hours for each configuration:
-
-| Configuration | Sample count | Average response time (ms) | Throughput (operations/s) |
-| --- | --- | --- | --- |
-| Small |67057 |636 |9.3 |
-| Medium |123482 |692 |17.2 |
-| Large |197085 |839 |27.4 |
-
-The throughput and number of samples processed for the three configurations are in the approximate ratio
-1:2:3. However, the resources available in terms of memory, CPU cores, and disks have the ratio 1:2:4. This
-discrepancy made it worth investigating the low-level performance of the nodes in the cluster to
-assess why this might be the case. This information can help to determine whether there are limits to
-scaling up and when it may be better to consider scaling out.
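The scaling relationship noted above can be checked directly from the results table (the throughput figures are taken from the table; the resource ratios are the relative core/RAM/disk allocations per cluster):

```python
# Throughput (operations/s) from the results table, and the relative
# resources (cores, RAM, disks) allocated to each cluster configuration.
throughput = {"Small": 9.3, "Medium": 17.2, "Large": 27.4}
resources = {"Small": 1, "Medium": 2, "Large": 4}

base = throughput["Small"]
for name, ops in throughput.items():
    print(f"{name}: {ops / base:.1f}x throughput on {resources[name]}x resources")
```

The large cluster delivers roughly 2.9x the small cluster's throughput on 4x the resources, which is the gap the following sections investigate.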
-
-### Determining limiting factors: network utilization
-Elasticsearch is dependent on having sufficient network bandwidth to support the influx of client
-requests as well as the synchronization information that flows between nodes in the cluster. As
-highlighted earlier, you have limited control over the bandwidth availability, which depends on many
-variables such as the datacenter in use, and the current network load of other VMs sharing the same
-network infrastructure. However, it is still worth examining the network activity for each cluster to verify that the volume of traffic is not excessive. The graph below shows a comparison of the
-network traffic received by node 2 in each of the clusters (the volumes for the other nodes in each
-cluster were very similar).
-
-![Network activity graph](./images/data-ingestion-image1.png)
-
-The average bytes received per second for node 2 in each cluster configuration over the two hour period
-were as follows:
-
-| Configuration | Average number of bytes received/sec |
-| --- | --- |
-| Small |3993640.3 |
-| Medium |7311689.9 |
-| Large |11893874.2 |
-
-The tests were conducted while the system was running in **steady state**. In situations where index
-rebalancing or node recovery is occurring, data transmissions between nodes holding primary and replica
-shards can generate significant network traffic. The effects of this process are described in more detail in the
-document [Configuring resilience and recovery on Elasticsearch on Azure][Configuring resilience and recovery on Elasticsearch on Azure].
-
-### Determining limiting factors: CPU utilization
-The rate at which requests are handled is at least partially governed by the available processing
-capacity. Elasticsearch accepts bulk insert requests on the bulk insert queue. Each node has a set of
-bulk insert queues determined by the number of available processors. By default, there is one queue for
-each processor and each queue can hold up to 50 outstanding requests before they will start to be
-rejected.
-
-Applications should send requests at a rate that does not cause the queues to overflow. The
-number of items in each queue at any one time is going to be a function of the rate at which requests are
-sent by client applications and the rate at which these same requests are retrieved and processed by
-Elasticsearch. For this reason, one important statistic captured is the error rate, summarized in
-the following table.
-
-| Configuration | Total samples | Error count | Error rate |
-| --- | --- | --- | --- |
-| Small |67057 |0 |0.00% |
-| Medium |123483 |1 |0.0008% |
-| Large |200702 |3617 |1.8% |
-
-Each of these errors was caused by the following Java exception:
-
-```
-org.elasticsearch.action.support.replication.TransportShardReplicationOperationAction$PrimaryPhase$1@75a30c1b]; ]
-[219]: index [systembase], type [logs], id [AVEAioKb2TRSNcPa_8YG], message [RemoteTransportException[[esdatavm2][inet[/10.0.1.5:9300]][indices:data/write/bulk[s]]]; nested: EsRejectedExecutionException[rejected execution (queue capacity 50)
-```
-
-Increasing the number of queues and/or the length of each queue might reduce the number of errors, but
-this approach can only cope with bursts of short duration. Doing this while running a sustained series of
-data ingestion tasks will simply delay the point at which errors start occurring. Furthermore, this
-change will not improve the throughput and will likely harm the response time of client applications as
-requests will be queued for longer before being processed.
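The queueing behavior described above can be illustrated with a toy model (the per-queue capacity of 50 matches the default quoted earlier; the arrival and service rates are illustrative assumptions). It shows that when sustained arrivals exceed the service rate, a longer queue only delays the first rejection:

```python
from collections import deque

QUEUE_CAPACITY = 50  # default per-processor bulk queue length

def simulate_rejections(arrivals_per_tick, serviced_per_tick, ticks):
    """Count requests rejected when sustained arrivals outpace processing."""
    queue, rejected = deque(), 0
    for _ in range(ticks):
        for _ in range(arrivals_per_tick):
            if len(queue) < QUEUE_CAPACITY:
                queue.append("bulk-request")
            else:
                rejected += 1  # the EsRejectedExecutionException case
        for _ in range(min(serviced_per_tick, len(queue))):
            queue.popleft()
    return rejected

# Arrivals persistently exceed service capacity: rejections are inevitable.
print(simulate_rejections(arrivals_per_tick=12, serviced_per_tick=10, ticks=100))
```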
-
-The default index structure of 5 shards with 1 replica (10 shards in all) results in a modest imbalance
-in load between the nodes in a cluster: two nodes contain three shards each, while the third contains four.
-The busiest node is the one most likely to restrict throughput, which is why this node was selected in each case.
-
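The imbalance described above follows directly from dealing 10 shards across 3 nodes, as this small sketch shows (round-robin assignment is a simplification of Elasticsearch's actual shard allocator):

```python
# Deal 5 primary + 5 replica shards (10 total) across a 3-node cluster.
# Round-robin assignment is a simplification of the real shard allocator.
def shard_counts(total_shards, node_count):
    counts = [0] * node_count
    for shard in range(total_shards):
        counts[shard % node_count] += 1
    return counts

print(shard_counts(10, 3))  # one node carries an extra shard
```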
-The following set of graphs illustrate the CPU utilization for the busiest node in each cluster.
-
-![CPU utilization graph](./images/data-ingestion-image2.png)
-
-![CPU utilization graph](./images/data-ingestion-image3.png)
-
-![CPU utilization graph](./images/data-ingestion-image4.png)
-
-For the small, medium, and large clusters, the average CPU utilization for these nodes was 75.01%,
-64.93%, and 64.64% respectively. Utilization rarely hits 100%, and it drops as the size of the
-nodes and the available CPU power increases. CPU power is therefore unlikely to be a factor
-limiting the performance of the large cluster.
-
-### Determining limiting factors: memory
-Memory use is another important aspect that can influence performance. For the tests, Elasticsearch was allocated 50% of the available memory. This is in line with [documented recommendations](https://www.elastic.co/guide/en/elasticsearch/guide/current/heap-sizing.html#_give_half_your_memory_to_lucene). While the tests were running, the JVM was monitored for excess garbage collection activity (an indication of lack of heap memory). In all cases, the heap size was stable and the JVM exhibited low garbage collection activity. The screenshot below shows a snapshot of Marvel, highlighting the key JVM statistics for a short period while the test was running on the large cluster.
-
-![Marvel screenshot](./images/data-ingestion-image5.png)
-
-***JVM memory and garbage collection activity on the large cluster.***
-
-### Determining limiting factors: disk I/O rates
-The remaining physical feature on the server side that might constrain performance is the performance of the disk I/O subsystem. The graph below compares the disk activity in terms of bytes written for the busiest nodes in each cluster.
-
-![Disk activity graph](./images/data-ingestion-image6.png)
-
-The following table shows the average bytes written per second for node 2 in each cluster configuration over the two hour period:
-
-| Configuration | Average number of bytes written/sec |
-| --- | --- |
-| Small |25502361.94 |
-| Medium |48856124.5 |
-| Large |88137675.46 |
-
-The volume of data written increases with the number of requests being processed by a cluster, but the I/O rates are within the limits of Azure storage (disks created by using Azure storage can support sustained rates of tens to hundreds of MB/s, depending on whether Standard or Premium storage is used). Examining the amount of time spent waiting for disk I/O helps to explain why the disk throughput is well below the theoretical maximum. The graphs and table below show these statistics for the same three nodes:
-
-> [!NOTE]
-> The disk wait time is measured by monitoring the percentage of CPU time during which
-> processors are blocked waiting for I/O operations to complete.
->
->
-
-![Disk wait times graph](./images/data-ingestion-image7.png)
-
-![Disk wait times graph](./images/data-ingestion-image8.png)
-
-![Disk wait times graph](./images/data-ingestion-image9.png)
-
-| Configuration | Average disk wait CPU time (%) |
-| --- | --- |
-| Small |21.04 |
-| Medium |14.48 |
-| Large |15.84 |
-
-This data indicates that a significant proportion of CPU time (between approximately 14% and 21%) is spent waiting for disk I/O to complete. This restricts the ability of Elasticsearch to process requests and store data.
-
-During the test run, the large cluster inserted in excess of **five hundred million documents**. Allowing the test to continue showed that wait times increased significantly when the database contained over six hundred million documents. The reasons for this behavior were not fully investigated, but may be due to disk fragmentation causing increased disk latency.
-
-Increasing the size of the cluster over more nodes might help to alleviate the effects of this behavior. In extreme cases it may be necessary to defragment a disk that is showing excessive I/O times. However, defragmenting a large disk might take a considerable time (possibly more than 48 hours for a 2TB VHD drive), and simply reformatting the drive and allowing Elasticsearch to recover the missing data from replica shards could be a more cost-effective approach.
-
-### Addressing disk latency issues
-The tests were initially performed using VMs configured with standard disks. A standard disk is based on spinning media and as a result is subject to rotational latency and other bottlenecks that can constrain I/O rates. Azure also provides premium storage, in which disks are created using SSD devices. These devices have no rotational latency and as a result should provide improved I/O speeds.
-
-The table below compares the results of replacing standard disks with premium disks in the large cluster (the Standard D4 VMs in the large cluster were replaced with Standard DS4 VMs; the number of cores, memory, and disks were the same in both cases, the only difference being that the DS4 VMs used SSDs).
-
-| Configuration | Sample count | Average response time (ms) | Throughput (operations/s) |
-| --- | --- | --- | --- |
-| Large - Standard |197085 |839 |27.4 |
-| Large - Premium |255985 |581 |35.6 |
-
-Response times were noticeably better, resulting in an average throughput much closer to 4x that of the small cluster. This is more in line with the resources available on a Standard DS4 VM. Average CPU utilization on the busiest node in the cluster (node 1 in this case) increased as it spent less time waiting for I/O to complete:
-
-![CPU utilization graph](./images/data-ingestion-image10.png)
-
-The reduction in disk wait time becomes apparent when you consider the following graph, which shows that for the busiest node this statistic dropped to around 1% on average:
-
-![Disk wait times graph](./images/data-ingestion-image11.png)
-
-There is a price to pay for this improvement, however. The number of ingestion errors increased by a factor of 10 to 35797 (12.3%). Again, most of these errors were the result of the bulk insert queue overflowing. Given that the hardware now appears to be running close to capacity, it may be necessary to either add more nodes or throttle back the rate of bulk inserts to reduce the volume of errors. These issues are discussed later in this document.
-
-### Testing with ephemeral storage
-The same tests were repeated on a cluster of D4 VMs using ephemeral storage. On D4 VMs, ephemeral storage is implemented as a single 400GB SSD. The number of samples processed, response time, and throughput were all very similar to the figures reported for the cluster based on DS4 VMs with premium storage.
-
-| Configuration | Sample count | Average response time (ms) | Throughput (operations/s) |
-| --- | --- | --- | --- |
-| Large - Premium |255985 |581 |35.6 |
-| Large – Standard (Ephemeral disk) |255626 |585 |35.5 |
-
-The error rate was also similar (33862 failures out of 289488 requests in total – 11.7%).
-
-The following graphs show the CPU utilization and disk wait statistics for the busiest node in the cluster (node 2 this time):
-
-![CPU utilization graph](./images/data-ingestion-image12.png)
-
-![Disk wait times graph](./images/data-ingestion-image13.png)
-
-In this case, in performance terms alone, using ephemeral storage could be considered a more cost-effective solution than using premium storage.
-
-### Data ingestion performance – Windows Server 2012
-The same tests were repeated using a set of Elasticsearch clusters with nodes running Windows Server 2012. The purpose of these tests was to establish what effects, if any, the choice of operating system might have on cluster performance.
-
-To illustrate the scalability of Elasticsearch on Windows, the following table shows the throughput and response times achieved for the small, medium, and large cluster configurations. Note that these tests were all performed with Elasticsearch configured to use SSD ephemeral storage, as the tests with Ubuntu had shown that disk latency was likely to be a critical factor in achieving maximum performance:
-
-| Configuration | Sample count | Average response time (ms) | Throughput (operations/s) |
-| --- | --- | --- | --- |
-| Small |90295 |476 |12.5 |
-| Medium |169243 |508 |23.5 |
-| Large |257115 |613 |35.6 |
-
-These results indicate how Elasticsearch scales with VM size and available resources on Windows.
-
-The following table compares the results for the large cluster on Ubuntu and Windows:
-
-| Operating system | Sample count | Average response time (ms) | Throughput (operations/s) | Error rate (%) |
-| --- | --- | --- | --- | --- |
-| Ubuntu |255626 |585 |35.5 |11.7 |
-| Windows |257115 |613 |35.6 |7.2 |
-
-The throughput was consistent with that for the large Ubuntu clusters, although the response time was slightly higher. This may be accounted for by the lower error rate (errors are reported more quickly than successful operations, so have a lower response time).
-
-The CPU utilization reported by the Windows monitoring tools was marginally higher than that of Ubuntu. However, you should treat direct comparisons of measurements such as these across operating systems with extreme caution due to the way different operating systems report these statistics. Additionally, information on disk latency in terms of CPU time spent waiting for I/O is not available in the same way as it is for Ubuntu. The important point is that CPU utilization was high, indicating that time spent waiting for I/O was low:
-
-![CPU utilization graph](./images/data-ingestion-image14.png)
-
-### Scaling up: conclusions
-Elasticsearch performance for a well-tuned cluster is likely to be equivalent on Windows and Ubuntu, and it scales up in a similar pattern on both operating systems. For best performance, **use premium storage for holding Elasticsearch data**.
-
-## Scaling out clusters to support large-scale data ingestion
-Scaling out is the complementary approach to the scaling up strategy investigated in the previous section. An important feature of Elasticsearch is the inherent horizontal scalability built into the software. Increasing the size of a cluster is simply a matter of adding more nodes. You do not need to perform any manual operations to redistribute indexes or shards as these tasks are handled automatically, although there are a number of configuration options available that you can use to influence this process.
-
-Adding more nodes helps to improve performance by spreading the load across more machinery. As you add more nodes, you may also need to consider reindexing data to increase the number of shards available. You can preempt this process to some extent by creating indexes that have more shards than there are available nodes initially. When further nodes are added, the shards can be distributed.
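-
-For example, you can specify the number of primary shards when an index is created. The following sketch creates an index with 7 primary shards, allowing room to redistribute shards as nodes are added to an initial 3-node cluster (the index name and shard count are illustrative):
-
-```http
-PUT /my_index
-{
-  "settings" : {
-    "number_of_shards" : 7,
-    "number_of_replicas" : 1
-  }
-}
-```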
-
-Besides taking advantage of the horizontal scalability of Elasticsearch, there are other reasons for implementing indexes that have more shards than nodes. Each shard is implemented as a separate data structure (a [Lucene](https://lucene.apache.org/) index), and has its own internal mechanisms for maintaining consistency and handling concurrency. Creating multiple shards helps to increase parallelism within a node and can improve performance.
-
-However, maintaining performance while scaling is a balancing act. The more nodes and shards a cluster contains, the more effort is required to synchronize the work performed by the cluster, which can decrease throughput. For any given workload, there is an optimal configuration that maximizes ingestion performance while minimizing the maintenance overhead. This configuration is heavily dependent on the nature of the workload and the cluster; specifically, the volume, size, and content of the documents, the rate at which ingestion occurs, and the hardware on which the system runs.
-
-This section summarizes the results of investigations into sizing clusters intended to support the workload used by the performance tests described previously. The same test was performed on clusters with VMs based on the large VM size (Standard D4 with 8 CPU cores, 16 data disks, and 28GB of RAM) running Ubuntu Linux 14.04, but configured with different numbers of nodes and shards. The results are not intended to be definitive as they apply only to one specific scenario, but they can act as a good starting point to help you to analyze the horizontal scalability of your clusters, and generate numbers for the optimal ratio of shards to nodes that best meet your own requirements.
-
-### Baseline results – 3 nodes
-To obtain a baseline figure, the data ingestion performance test was run against a 3 node cluster with 5 shards and 1 replica. This is the default configuration for an Elasticsearch index. In this configuration, Elasticsearch distributes 2 primary shards to 2 of the nodes, and the remaining primary shard is stored on the third node. The table below summarizes the throughput in terms of bulk ingestion operations per second, and the number of documents that were successfully stored by the test.
-
-> [!NOTE]
-> In the tables that follow in this section, the distribution of the primary shards is presented as a number for each node separated by dashes. For example, the 5-shard 3-node layout is described as 2-2-1. The layout of replica shards is not included. They will follow a similar scheme to the primary shards.
->
->
-
-| Configuration | Document count | Throughput (operations/s) | Shard layout |
-| --- | --- | --- | --- |
-| 5 shards |200560412 |27.86 |2-2-1 |
-
-### 6-node results
-The test was repeated on a 6-node cluster. The purpose of these tests was to ascertain more precisely the effects of storing more than one shard on a node.
-
-| Configuration | Document count | Throughput (operations/s) | Shard layout |
-| --- | --- | --- | --- |
-| 4 shards |227360412 |31.58 |1-1-0-1-1-0 |
-| 7 shards |268013252 |37.22 |2-1-1-1-1-1 |
-| 10 shards |258065854 |35.84 |1-2-2-2-1-2 |
-| 11 shards |279788157 |38.86 |2-2-2-1-2-2 |
-| 12 shards |257628504 |35.78 |2-2-2-2-2-2 |
-| 13 shards |300126822 |41.68 |2-2-2-2-2-3 |
-
-These results appear to indicate the following trends:
-
-* More shards per node improves throughput. With the small number of shards per node created for these tests, this phenomenon was expected, for reasons described earlier.
-* An odd number of shards gives better performance than an even number. The reasons for this are less clear, but it *may* be that the routing algorithm that Elasticsearch uses is better able to distribute the data across shards in this case, leading to a more even load per node.
-
-To test these hypotheses, several further tests were performed with larger numbers of shards. On advice from Elasticsearch, it was decided to use a prime number of shards for each test as these give a reasonable distribution of odd numbers for the range in question.
-
-| Configuration | Document count | Throughput (operations/s) | Shard layout |
-| --- | --- | --- | --- |
-| 23 shards |312844185 |43.45 |4-4-4-3-4-4 |
-| 31 shards |309930777 |43.05 |5-5-5-5-6-5 |
-| 43 shards |316357076 |43.94 |8-7-7-7-7-7 |
-| 61 shards |305072556 |42.37 |10-11-10-10-10-10 |
-| 91 shards |291073519 |40.43 |15-15-16-15-15-15 |
-| 119 shards |273596325 |38.00 |20-20-20-20-20-19 |
-
-These results suggested that a tipping point was reached at around 23 shards. After this point, increasing the number of shards caused a small degradation in performance (the throughput for 43 shards is possibly an anomaly).
-
-### 9-node results
-The tests were repeated using a cluster of 9 nodes, again using a prime number of shards.
-
-| Configuration | Document count | Throughput (operations/s) | Shard layout |
-| --- | --- | --- | --- |
-| 17 shards |325165364 |45.16 |2-2-2-2-2-2-2-2-1 |
-| 19 shards |331272619 |46.01 |2-2-2-2-2-2-2-2-3 |
-| 29 shards |349682551 |48.57 |3-3-3-4-3-3-3-4-3 |
-| 37 shards |352764546 |49.00 |4-4-4-4-4-4-4-4-5 |
-| 47 shards |343684074 |47.73 |5-5-5-6-5-5-5-6-5 |
-| 89 shards |336248667 |46.70 |10-10-10-10-10-10-10-10-9 |
-| 181 shards |297919131 |41.38 |20-20-20-20-20-20-20-20-21 |
-
-These results showed a similar pattern, with a tipping point around 37 shards.
-
-### Scaling out: conclusions
-Using a crude extrapolation, the results of the 6-node and 9-node tests indicated that, for this specific scenario, the ideal number of shards to maximize performance was 4n+/-1, where n is the number of nodes. This *may* be a function of the number of bulk insert threads available, which in turn is dependent on the number of CPU cores, the rationale being as follows (see [Multidocument Patterns](https://www.elastic.co/guide/en/elasticsearch/guide/current/distrib-multi-doc.html#distrib-multi-doc) for details):
-
-* Each bulk insert request sent by the client application is received by a single data node.
-* The data node builds a new bulk insert request for each primary shard affected by the original request and forwards them to the other nodes, in parallel.
-* As each primary shard is written, another request is sent to each replica for that shard. The primary shard waits for the request sent to the replica to complete before finishing.
-
-By default, Elasticsearch creates one bulk insert thread for each available CPU core in a VM. The D4 VMs used in this test each had 8 cores, so 8 bulk insert threads were created. The index used spanned 4 (in one case 5) primary shards on each node, but there were also 4 (5) replicas on each node. Inserting data into these shards and replicas could consume up to 8 threads on each node per request, matching the number available. Increasing or reducing the number of shards might cause threading inefficiencies as threads are left unoccupied or requests are queued. However, without further experimentation this remains a theory and it is not possible to be definitive.
-
-The tests also illustrated one other important point. In this scenario, increasing the number of nodes can improve data ingestion throughput, but the results do not necessarily scale linearly. Conducting further tests with 12 and 15 node clusters could show the point at which scale out brings little additional benefit. If this number of nodes provides insufficient storage space, it may be necessary to return to the scale up strategy and start using more or bigger disks based on premium storage.
-
-> [!IMPORTANT]
-> Do not take the ratio 4n+/-1 as a magic formula that will always work for every cluster. If you have fewer or more CPU cores available, the optimal shard configuration could be different. The findings were based on a specific workload that performed only data ingestion. For workloads that include a mix of queries and aggregations, the results could be very different.
->
-> Furthermore, the data ingestion workload utilized a single index. In many situations, the data is likely to be spread across multiple indexes leading to different patterns or resource use.
->
-> The important point of this exercise is to understand the method used rather than the results obtained. You should be prepared to perform your own scalability assessment based on your own workloads to obtain information that is most applicable to your own scenario.
->
->
-
-## Tuning large-scale data ingestion
-Elasticsearch is highly configurable, with many switches and settings that you can use to optimize performance for specific use cases and scenarios. This section describes some common examples. Be aware that the flexibility Elasticsearch provides in this respect comes with a caveat: it is very easy to detune Elasticsearch and make performance worse. When tuning, make only one change at a time, and always measure the effects of any change to ensure that it is not detrimental to your system.
-
-### Optimizing resources for indexing operations
-The following list describes some points you should consider when tuning an Elasticsearch cluster to support large-scale data ingestion. The first two items are most likely to have an immediately obvious effect on performance while the remainder are more marginal, depending on the workload:
-
-* New documents added to an index only become visible to searches when the index is refreshed. Refreshing an index is an expensive operation, so it is only performed periodically rather than as each document is created. The default refresh interval is 1 second. If you are performing bulk operations, you should consider temporarily disabling index refreshes. Set the index *refresh\_interval* to -1.
-
- ```http
- PUT /my_busy_index
- {
- "settings" : {
- "refresh_interval": -1
- }
- }
- ```
-
- Trigger a refresh manually by using the [*\_refresh*](https://www.elastic.co/guide/en/elasticsearch/reference/current/indices-refresh.html) API at the end of the operation to make the data visible. See [Bulk Indexing Usage](https://www.elastic.co/guide/en/elasticsearch/reference/current/indices-update-settings.html#bulk) for more information. Further details on the [impact of changing the refresh interval on data ingestion](#the-impact-of-changing-the-index-refresh-interval-on-data-ingestion-performance) are described later.
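-
-  For example, when the bulk operation completes, you might restore the refresh interval and then force an immediate refresh. This is a sketch; it reuses the *my_busy_index* example above with the index settings update (*\_settings*) and *\_refresh* APIs:
-
-  ```http
-  PUT /my_busy_index/_settings
-  {
-    "refresh_interval": "1s"
-  }
-
-  POST /my_busy_index/_refresh
-  ```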
-* If an index is replicated, each indexing operation (document create, update, or delete) is repeated on the replica shards as they occur in the primary shard. Consider disabling replication during bulk import operations and then reenable it when the import is complete:
-
- ```http
- PUT /my_busy_index
- {
- "settings" : {
- "number_of_replicas": 0
- }
- }
- ```
-
-  When you reenable replication, Elasticsearch performs a byte-for-byte network transfer of data from the index to each replica. This is more efficient than repeating the indexing process document by document on each node. The risk is that data can be lost if the primary node fails while performing the bulk import, but recovery may simply be a matter of starting the import again. The [impact of replication on data ingestion performance](#the-impact-of-replicas-on-data-ingestion-performance) is described in more detail later.
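-
-  For example, to reenable a single replica when the import is complete (a sketch, reusing the *my_busy_index* example with the index settings update API):
-
-  ```http
-  PUT /my_busy_index/_settings
-  {
-    "number_of_replicas": 1
-  }
-  ```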
-* Elasticsearch attempts to balance the resources available between those required for querying and those required for ingesting data. As a result, it may throttle data ingestion performance (throttling events are recorded in the Elasticsearch log). This restriction is intended to prevent a large number of index segments from being created concurrently that require merging and saving to disk, a process that can monopolize resources. If your system is not currently performing queries, you can disable data ingestion throttling. This should allow indexing to maximize performance. You can disable throttling for an entire cluster as follows:
-
- ```http
- PUT /_cluster/settings
- {
- "transient" : {
- "indices.store.throttle.type": "none"
- }
- }
- ```
-
- Set the throttle type of the cluster back to *"merge"* when ingestion has completed. Also note that disabling throttling may lead to instability in the cluster, so ensure that you have procedures in place that can recover the cluster if necessary.
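-
-  For example, to restore merge throttling for the cluster when ingestion has completed:
-
-  ```http
-  PUT /_cluster/settings
-  {
-    "transient" : {
-      "indices.store.throttle.type": "merge"
-    }
-  }
-  ```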
-* Elasticsearch reserves a proportion of the heap memory for indexing operations, the remainder is mostly used by queries and searches. The purpose of these buffers is to reduce the number of disk I/O operations, with the aim of performing fewer, larger writes than more, smaller writes. The default proportion of heap memory allocated is 10%. If you are indexing a large volume of data then this value might be insufficient. For systems that support high-volume data ingestion, you should allow up to 512MB of memory for each active shard in the node. For example, if you are running Elasticsearch on D4 VMs (28GB RAM) and have allocated 50% of the available memory to the JVM (14GB), then 1.4GB will be available for use by indexing operations. If a node contains 3 active shards, then this configuration is probably sufficient. However, if a node contains more shards than this, consider increasing the value of the *indices.memory.index\_buffer\_size* parameter in the elasticsearch.yml configuration file. For more information, see [Performance Considerations for Elasticsearch Indexing](https://www.elastic.co/blog/performance-considerations-elasticsearch-indexing).
-
- Allocating more than 512MB per active shard will most likely not improve indexing performance and may actually be detrimental as less memory is available for performing other tasks. Also be aware that allocating more heap space for index buffers removes memory for other operations such as searching and aggregating data, and can slow the performance of query operations.
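-
-  As an illustration, a D4 node with a 14GB heap that hosts 6 active shards might raise the buffer from the default 10% to around 20% (about 2.8GB, close to 512MB per shard). The value shown is an assumption for this example, not a general recommendation:
-
-  ```yaml
-  # elasticsearch.yml - increase the indexing buffer from the default 10%
-  indices.memory.index_buffer_size: 20%
-  ```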
-* Elasticsearch restricts the number of threads (the default value is 8) that can concurrently perform indexing operations in a shard. If a node only contains a small number of shards, then consider increasing the *index\_concurrency* setting for an index that is subject to a large volume of indexing operations, or is the target of a bulk insert, as follows:
-
- ```http
- PUT /my_busy_index
- {
- "settings" : {
- "index_concurrency": 20
- }
- }
- ```
-* If you are performing a large number of indexing and bulk operations for a short period of time, you can increase the number of *index* and *bulk* threads available in the thread pool and extend the size of the *bulk insert* queue for each data node. This will allow more requests to be queued rather than being discarded. For more information, see [Thread Pool](https://www.elastic.co/guide/en/elasticsearch/reference/current/modules-threadpool.html). If you are performing sustained high levels of data ingestion, then increasing the number of bulk threads is not recommended. Instead, create additional nodes and use sharding to distribute the indexing load across these nodes. Alternatively, consider sending bulk insert batches serially rather than in parallel, as this acts as a natural throttling mechanism that can reduce the chance of errors caused by the bulk insert queue overflowing.
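-
-  As a sketch, the bulk thread pool and queue can be extended in the elasticsearch.yml configuration file. The values shown are illustrative only, and the setting names depend on your Elasticsearch version:
-
-  ```yaml
-  # elasticsearch.yml - extend the bulk thread pool and queue (illustrative values)
-  threadpool.bulk.size: 16
-  threadpool.bulk.queue_size: 300
-  ```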
-
-### The impact of changing the index refresh interval on data ingestion performance
-The refresh interval governs the rate at which ingested data becomes visible to queries and aggregations, but frequent refreshes can impact the performance of data ingestion operations. The default refresh interval is 1 second. You can disable refreshing completely, but this might not be appropriate for your workload. You can experiment with different intervals to establish the sweet spot that balances ingestion performance against the need to present up-to-date information.
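-
-For example, the following sketch lengthens the refresh interval of an existing index to 30 seconds by using the index settings update API (the index name is illustrative):
-
-```http
-PUT /my_busy_index/_settings
-{
-  "refresh_interval": "30s"
-}
-```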
-
-As an example of the impact, the data ingestion performance test was repeated on an Elasticsearch cluster consisting of 7 shards spread across 3 data nodes. The index had a single replica. Each data node was based on a D4 VM (28GB RAM, 8 processor cores) using SSD-backed ephemeral storage to hold the data. Each test ran for 1 hour.
-
-In this test, the refresh rate was set to the default value of 1 second. The following table shows the throughput and response times for this test compared to a separate run where the refresh rate was reduced to once every 30 seconds.
-
-| Refresh rate | Sample count | Average response time – successful operations (ms) | Throughput – successful operations (operations/s) |
-| --- | --- | --- | --- |
-| 1 second |93755 |460 |26.0 |
-| 30 seconds |117758 |365 |32.7 |
-
-In this test, dropping the refresh rate resulted in a 26% improvement in throughput, and a 21% reduction in average response time. The following graphs, generated using Marvel, illustrate the primary reason for this difference. The figures below show the index merge activity that occurred with the refresh interval set to 1 second and 30 seconds.
-
-Index merges are performed to prevent the number of in-memory index segments from becoming too numerous. A 1 second refresh interval generates a large number of small segments which have to be merged frequently, whereas a 30 second refresh interval generates fewer large segments which can be merged more optimally.
-
-![Index merge activity](./images/data-ingestion-image15.png)
-
-***Index merge activity for an index refresh rate of 1 second***
-
-![Index merge activity](./images/data-ingestion-image16.png)
-
-***Index merge activity for an index refresh rate of 30 seconds***
-
-### The impact of replicas on data ingestion performance
-Replicas are an essential feature of any resilient cluster, and without using them you risk losing information if a node fails. However, replicas increase the amount of disk and network I/O being performed and can be detrimental to the rate at which data is ingested. For reasons described earlier, it may be beneficial to temporarily disable replicas for the duration of large-scale data upload operations.
-
-Data ingestion performance tests were repeated using three configurations:
-
-* Using a cluster with no replicas.
-* Using a cluster with 1 replica.
-* Using a cluster with 2 replicas.
-
-In all cases, the cluster contained 7 shards spread across 3 nodes and ran on VMs configured as described in the previous set of tests. The test index used a refresh interval of 30 seconds.
-
-The following table summarizes the response times and throughput of each test for comparison purposes:
-
-| Configuration | Sample count | Average response time – successful operations (ms) | Throughput – successful operations (operations/s) | Data Ingestion errors |
-| --- | --- | --- | --- | --- |
-| 0 replicas |215451 |200 |59.8 |0 |
-| 1 replica |117758 |365 |32.7 |0 |
-| 2 replicas |94218 |453 |26.1 |194262 |
-
-The decline in performance as the number of replicas increases is clear, but you should also notice the large volume of data ingestion errors in the third test. The messages generated by these errors indicated that they were due to the bulk insert queue overflowing, causing requests to be rejected. Because rejections occur very quickly, a large number of errors can accumulate in a short time.
-
-> [!NOTE]
-> The results of the third test highlight the importance of using an intelligent retry strategy when transient errors such as this occur—back off for a short period to allow the bulk insert queue to drain before reattempting to repeat the bulk insert operation.
->
->
-
-The next sets of graphs compare the response times during the tests. In each case the first graph shows the overall response times, while the second graph zooms in on the response times for the fastest operations (note that the scale of the first graph is ten times that of the second). You can see how the profile of the response times varies across the three tests.
-
-With no replicas, most operations took between 75ms and 750ms, with the quickest response times around 25ms:
-
-![Response times graph](./images/data-ingestion-image17.png)
-
-With 1 replica, most operations took between 125ms and 1250ms. The quickest responses took approximately 75ms, although there were fewer of these quick responses than in the 0 replicas case. There were also far more responses that took significantly longer than the most common cases, in excess of 1250ms:
-
-![Response times graph](./images/data-ingestion-image18.png)
-
-With 2 replicas, most operations took between 200ms and 1500ms, but there were far fewer results below the minimum range than in the 1 replica test. However, the pattern of results above the upper limit was very similar to that of the 1 replica test. This is most likely due to the effects of the bulk insert queue overflowing (exceeding a queue length of 50 requests). The additional work required to maintain 2 replicas causes the queue to overflow more frequently, preventing ingestion operations from having excessive response times. Operations are rejected quickly rather than taking a lengthy period of time, possibly causing timeout exceptions or impacting the responsiveness of client applications (this is the purpose of the bulk insert queue mechanism):
-
-![Response times graph](./images/data-ingestion-image19.png)
-
-Using Marvel, you can see the effect of the number of replicas on the bulk index queue. The figure below shows the data from Marvel that depicts how the bulk insert queue filled during the test. The average queue length was around 40 requests, but periodic bursts caused it to overflow and requests were rejected as a result:
-
-![Marvel screenshot](./images/data-ingestion-image20.png)
-
-***Bulk index queue size and number of requests rejected with 2 replicas.***
-
-You should compare this with the figure below which shows the results for a single replica. The Elasticsearch engine was able to process requests quickly enough to keep the average queue length at around 25, and at no point did the queue length exceed 50 requests so no work was rejected.
-
-![Marvel screenshot](./images/data-ingestion-image21.png)
-
-***Bulk index queue size and number of requests rejected with 1 replica.***
-
-## Best practices for clients sending data to Elasticsearch
-Many aspects of performance are determined not only by the system itself but also by how client applications use it. Elasticsearch provides many features that can be utilized by the data ingestion process; generating unique identifiers for documents, performing document analysis, and even using scripting to transform the data as it is stored are some examples. However, these functions all add to the load on the Elasticsearch engine, and in many cases can be performed more efficiently by client applications prior to transmission.
-
-> [!NOTE]
-> This list of best practices is primarily concerned with ingesting new data rather than modifying existing data already stored in an index. Ingestion workloads are performed as append operations by Elasticsearch, whereas data modifications are performed as delete/append operations. This is because documents in an index are immutable, so modifying a document involves replacing the entire document with a new version. You can either perform an HTTP PUT request to overwrite an existing document, or you can use the Elasticsearch *update* API that abstracts a query to fetch an existing document, merges the changes, and then performs a PUT to store the new document.
->
->
-
-Additionally, consider implementing the following practices where appropriate:
-
-* Disable text analysis for index fields that do not need to be analyzed. Analysis involves tokenizing text to enable queries that can search for specific terms. However, it can be a CPU-intensive task, so be selective. If you are using Elasticsearch to store log data, it might be useful to tokenize the detailed log messages to allow complex searches. Other fields, such as those containing error codes or identifiers, should probably not be tokenized (how frequently are you likely to request the details of all messages whose error code contains a "3", for example?). The following code disables analysis for the *name* and *hostip* fields in the *logs* type of the *systembase* index.
-
- ```http
- PUT /systembase
- {
- "settings" : {
- ...
- },
- "logs" : {
- ...
- "name": {
- "type": "string",
- "index" : "not_analyzed"
- },
- "hostip": {
- "type": "string",
- "index" : "not_analyzed"
- },
- ...
- }
- }
- ```
-* Disable the *_all* field of an index if it is not required. The *\_all* field concatenates the values of the other fields in the document for analysis and indexing. It is useful for performing queries that can match against any field in a document. If clients are expected to match against named fields, then enabling *\_all* simply incurs CPU and storage overhead. The following example shows how to disable the *\_all* field for the *logs* type in the *systembase* index.
-
- ```http
- PUT /systembase
- {
- "settings" : {
- ...
- },
- "logs" : {
- "_all": {
- "enabled" : false
- },
- ...,
- ...
- }
- }
- ```
-
- Note that you can create a selective version of *\_all* that only contains information from specific fields. For more information, see [Disabling the \_all Field](https://www.elastic.co/guide/en/elasticsearch/reference/current/mapping-all-field.html#disabling-all-field).
-* Avoid dynamic mappings in indexes. Dynamic mapping is a powerful feature, but adding new fields to an existing index requires coordinating changes to the index structure across nodes and can temporarily cause the index to be locked. Dynamic mapping can also lead to an explosion in the number of fields and the consequent volume of index metadata if not used carefully. In turn, this results in increased storage requirements and I/O, both for ingesting data and when performing queries. Both of these issues will impact performance. Consider disabling dynamic mapping and define your index structures explicitly. For more information, see [Dynamic Field Mapping](https://www.elastic.co/guide/en/elasticsearch/reference/current/dynamic-field-mapping.html#dynamic-field-mapping).
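-
-  As a sketch, dynamic mapping can be disabled in the mapping definition for a type. This example reuses the *logs* type from the earlier examples; setting *dynamic* to *strict* causes documents that contain unmapped fields to be rejected rather than silently ignored:
-
-  ```http
-  PUT /systembase
-  {
-    "mappings" : {
-      "logs" : {
-        "dynamic": "strict",
-        ...
-      }
-    }
-  }
-  ```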
-* Understand how to balance the workload to meet conflicting requirements. You should always consider that data ingestion can have a significant impact on the performance of other concurrent operations, such as users performing queries. Data ingestion might be subject to sudden bursts, and if the system attempts to consume all the data arriving immediately the influx could cause query rates to slow down to a trickle. Elasticsearch tries to prevent this situation from occurring by regulating the rate at which it will process ingestion requests through the bulk insert queue (refer to the section [Determining limiting factors – CPU utilization](#determining-limiting-factors-cpu-utilization) for further information), but this mechanism should really be treated as a last resort. If your application code is not prepared to handle rejected requests you risk losing data. Instead, consider using a pattern such as [Queue-Based Load Leveling](https://msdn.microsoft.com/library/dn589783.aspx) to control the rate at which data is passed to Elasticsearch.
-* Ensure that your cluster has sufficient resources to handle the workload, especially if indexes are configured with multiple replicas.
-* Use the bulk insert API to upload large batches of documents. Size bulk requests appropriately. Sometimes bigger batches are not better for performance, and can cause Elasticsearch threads and other resources to become overloaded, delaying other concurrent operations. The documents in a bulk insert batch are held in memory at the coordinating node while the operation is performed. The physical size of each batch is more important than the document count. There is no hard and fast rule as to what constitutes the ideal batch size, although Elasticsearch documentation recommends using between 5MB and 15MB as a starting point for your own investigations. Conduct performance testing to establish the optimal batch size for your own scenarios and workload mix.
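-
-  A bulk request body consists of newline-delimited JSON, alternating an action line with a document line. A minimal sketch (the index, type, and documents are illustrative):
-
-  ```http
-  POST /_bulk
-  { "index" : { "_index" : "my_index", "_type" : "my_data", "_id" : "1" } }
-  { "name": "Bert", "age": 23 }
-  { "index" : { "_index" : "my_index", "_type" : "my_data", "_id" : "2" } }
-  { "name": "Sid", "age": 27 }
-  ```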
-* Make sure that bulk insert requests are distributed across nodes rather than directed to a single node. Directing all requests to a single node can cause memory exhaustion as each bulk insert request being processed is stored in memory in the node. It can also increase network latency as requests are redirected to other nodes.
-* Elasticsearch uses a quorum consisting of a majority of the copies of a shard (the primary plus its replicas) when writing data. A write operation is not completed until the quorum reports success. This approach helps to ensure that data is not written if a majority of the shard copies are unavailable due to a network partition (failure) event. Using a quorum can slow the performance of write operations. You can disable quorum-based writing by setting the *consistency* parameter to *one* when writing data. The following example adds a new document but completes as soon as the write to the primary shard has completed.
-
- ```http
- PUT /my_index/my_data/104?consistency=one
- {
- "name": "Bert",
- "age": 23
- }
- ```
-
- Note that as with asynchronous replication, disabling quorum-based writing can lead to inconsistencies between the primary shard and each of the replicas.
-* When using quorums, Elasticsearch waits if insufficient nodes are available before determining that a write operation should be cancelled because a quorum cannot be reached. This wait period is determined by the *timeout* query parameter (the default is 1 minute). The example below creates a new document and waits for a maximum of 5 seconds for the quorum to respond before cancelling:
-
- ```http
- PUT /my_index/my_data/104?timeout=5s
- {
- "name": "Sid",
- "age": 27
- }
- ```
-
- Elasticsearch also lets you use your own version numbers [generated externally](https://www.elastic.co/guide/en/elasticsearch/reference/current/docs-index_.html#_version_types).
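-
-  For example (the version number shown is illustrative), the following request writes a document using an externally generated version number. The write succeeds only if the supplied version is higher than that of any stored copy of the document:
-
-  ```http
-  PUT /my_index/my_data/104?version=5&version_type=external
-  {
-    "name": "Sid",
-    "age": 28
-  }
-  ```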
-* Consider disabling the *\_source* field of an index. This field contains a copy of the original JSON document that was submitted when the document was stored. Saving this field incurs additional storage costs and disk I/O. However, these costs may be marginal depending on the document structure, and you should also be aware that disabling the *\_source* field prevents a client from being able to perform the following operations:
-
- * Using the Update API to modify a document.
- * Performing on the fly highlighting when running queries.
- * Reindexing data.
- * Debugging queries and aggregations by viewing the original document.
-
- The following example disables the *\_source* field for the *logs* type in the *systembase* index.
-
-  ```http
-  PUT /systembase
-  {
-    "settings" : {
-      ...
-    },
-    "mappings" : {
-      "logs" : {
-        "_source": {
-          "enabled": false
-        },
-        ...,
-        ...
-      }
-    }
-  }
-  ```
-
-## General guidelines for conducting data ingestion performance testing with Elasticsearch
-The following points highlight some of the items you should consider when running performance tests with Elasticsearch and analyzing the results.
-
-* Performance testing is necessarily time consuming and costly. At a minimum, gather statistics that measure transfer rates on disk and network, CPU utilization, CPU wait times, and disk latency (if possible). This can give you fast feedback on your test effort with a good return on investment.
-* Leverage any scripting capabilities provided by your load testing tool to collect metrics that might not otherwise be available. For example, Linux has a variety of reliable performance statistics that you can gather using utilities such as *vmstat* and *iostat*. You can use scripting with JMeter to capture this data as part of a test plan.
-* Performance engineering is mostly about analyzing statistics based on reliable and repeatable data. Do not stop at high-level metrics that will not give the necessary insights. Instead, make performance engineering part of a DevOps process with a fast feedback loop. Always compare trends against past results and configurations. By doing this on a regular basis you will generate data that you understand, that is repeatable with your workloads, and with which you can assess the effects of changes in configuration and deployment.
-* Use a tool such as Marvel to monitor cluster and node performance while testing to gain additional insights. JMeter can be effective for capturing raw data for subsequent analysis, but using Marvel can give you a real-time feel for how performance is faring and the possible causes of glitches and slowdowns. Additionally, many load test tools do not provide visibility into the inner metrics of Elasticsearch. Use and compare indexing throughput rates, merge segment counts, garbage collector (GC) statistics, and throttling times available in index statistics. Repeat this analysis on a regular basis.
-* Compare your load test tool statistics with node statistics in Marvel (disk and network traffic, CPU utilization, memory and thread pool usage) to understand the correlation pattern between the figures reported by the infrastructure and specific Elasticsearch statistics.
-* As a general rule, consider *one node, one shard* as the baseline for performance testing and assess application costs by adding nodes. However, do not depend completely on extrapolating performance from a small number of nodes and shards. Synchronization and communication costs within the cluster can have increasingly unpredictable effects as the number of nodes and shards grows.
-* Look at the shard allocation across nodes when comparing statistics. Some nodes will host fewer replicas and shards than others, which creates an imbalance in resource utilization.
-* If you are performing load testing, increase the number of threads that your test tool uses to submit work to the cluster until errors occur. For sustainable throughput testing, keep your test level below the expected peak load. If the error rate becomes too high, the cost of recovering from failed requests consumes backend resources, and throughput will inevitably diminish.
-* To simulate how your system reacts to an unexpectedly large burst of activity, consider running tests that generate load in excess of your expected peak. This will give you throughput figures not only in terms of capacity but also the cost of recoverability.
-* Use a document count to assess your performance profile, and recycle documents following your workload patterns. Consider that as more documents are added, the performance profile might change.
-* Be aware of the SLA limits for IOPS and transfer rates of the storage you are using. Different storage types (SSD, spinning media) have different transfer rates.
-* Remember that CPU utilization can drop not only because of disk and network activity, but also because the locking and communication mechanisms used by distributed processing can leave the processor underutilized.
-* Run performance tests for at least two hours (not a few minutes). Indexing can affect performance in ways which may not be visible immediately. For example, JVM garbage collection statistics and indexing merges can change the performance profile over time.
-* Consider how index refreshes might impact data ingestion throughput and throttling within a cluster.
-
-## Summary
-It is important to understand how to scale your solution as the volumes of data and the number of requests increases. Elasticsearch running on Azure enables vertical and horizontal scaling. You can run on bigger VMs with more resources, and you can distribute an Elasticsearch cluster across a network of VMs. The range of options can be confusing. Is it more cost-effective to implement a cluster on a large number of small VMs, on a cluster with a small number of large VMs, or somewhere in the middle? Also, how many shards should each index contain, and what are the tradeoffs concerning data ingestion versus query performance? The way in which shards are distributed across nodes can have a significant impact on the data ingestion throughput. Using more shards can reduce the amount of internal contention that occurs within a shard, but you must balance this benefit with the overhead that using many shards can impose on a cluster. To answer these questions effectively, you should be prepared to test your system to determine the most appropriate strategy.
-
-For data ingestion workloads, the performance of the disk I/O subsystem is a critical factor. Using SSDs can boost throughput by reducing the disk latency of write operations. If you don't need vast amounts of disk space on a node, consider using standard VMs with ephemeral storage in place of more expensive VMs that support premium storage.
-
-## Appendix: the bulk load data ingestion performance test
-This appendix describes the performance test performed against the Elasticsearch cluster. The tests were run by using JMeter running on a separate set of VMs. Details of the configuration of the test environment are described in [Creating a performance testing environment for Elasticsearch on Azure][Creating a performance testing environment for Elasticsearch on Azure]. To perform your own testing, you can create your own JMeter test plan manually, or you can use the automated test scripts available separately. See [Running the automated Elasticsearch performance tests][Running the automated Elasticsearch performance tests] for further information.
-
-The data ingestion workload performed a large-scale upload of documents by using the bulk insert API. The purpose of this workload was to simulate a repository receiving log data representing system events for subsequent search and analysis. Each document was stored in a single index named *systembase*, and had the type *logs*. All documents had the same fixed schema described by the following table:
-
-| Field | Datatype | Example |
-| --- | --- | --- |
-| @timestamp |datetime |2013-12-11T08:01:45.000Z |
-| name |string |checkout.payment |
-| message |string |Incoming request message |
-| severityCode |integer |1 |
-| severity |string |info |
-| hostname |string |sixshot |
-| hostip |string (ip address) |10.0.0.4 |
-| pid |int |123 |
-| tid |int |4325 |
-| appId |string (uuid) |{00000000-0000-0000-000000000000} |
-| appName |string |mytestapp |
-| appVersion |string |0.1.0.1234 |
-| type |int |5 |
-| subtype |int |1 |
-| correlationId |guid |{00000000-0000-0000-000000000000} |
-| os |string |Linux |
-| osVersion |string |4.1.1 |
-| parameters |[ ] |{key:value,key:value} |
-
-You can use the following request to create the index. The *number\_of\_replicas*, *refresh\_interval*, and *number\_of\_shards* settings varied from the values shown below in many of the tests.
-
-> [!IMPORTANT]
-> The index was dropped and recreated prior to each test run.
->
->
-
-```http
-PUT /systembase
-{
-    "settings" : {
-        "number_of_replicas": 1,
-        "refresh_interval": "30s",
-        "number_of_shards": "5"
-    },
-    "mappings" : {
-        "logs" : {
-            "properties" : {
-                "@timestamp": {
-                    "type": "date",
-                    "index" : "not_analyzed"
-                },
-                "name": {
-                    "type": "string",
-                    "index" : "not_analyzed"
-                },
-                "message": {
-                    "type": "string",
-                    "index" : "not_analyzed"
-                },
-                "severityCode": {
-                    "type": "integer",
-                    "index" : "not_analyzed"
-                },
-                "severity": {
-                    "type": "string",
-                    "index" : "not_analyzed"
-                },
-                "hostname": {
-                    "type": "string",
-                    "index" : "not_analyzed"
-                },
-                "hostip": {
-                    "type": "string",
-                    "index" : "not_analyzed"
-                },
-                "pid": {
-                    "type": "integer",
-                    "index" : "not_analyzed"
-                },
-                "tid": {
-                    "type": "integer",
-                    "index" : "not_analyzed"
-                },
-                "appId": {
-                    "type": "string",
-                    "index" : "not_analyzed"
-                },
-                "appName": {
-                    "type": "string",
-                    "index" : "not_analyzed"
-                },
-                "appVersion": {
-                    "type": "integer",
-                    "index" : "not_analyzed"
-                },
-                "type": {
-                    "type": "integer",
-                    "index" : "not_analyzed"
-                },
-                "subtype": {
-                    "type": "integer",
-                    "index" : "not_analyzed"
-                },
-                "correlationId": {
-                    "type": "string",
-                    "index" : "not_analyzed"
-                },
-                "os": {
-                    "type": "string",
-                    "index" : "not_analyzed"
-                },
-                "osVersion": {
-                    "type": "string",
-                    "index" : "not_analyzed"
-                },
-                "parameters": {
-                    "type": "string",
-                    "index" : "not_analyzed"
-                }
-            }
-        }
-    }
-}
-```
-
-Each bulk insert batch contained 1000 documents. Each document was generated based on a combination of random values for the *severityCode*, *hostname*, *hostip*, *pid*, *tid*, *appName*, *appVersion*, *type*, *subtype*, and *correlationId* fields, and a random selection of text from a fixed set of terms for the *name*, *message*, *severity*, *os*, *osVersion*, and *parameters* fields. The *data1* and *data2* fields, where present, were filled with a fixed 12,000-character string. The number of client application instances used to upload data was carefully selected to maximize successful input volume. Tests ran for two hours to enable the cluster to settle and reduce the influence of any temporary glitches in the overall results. In this time, some tests uploaded nearly 1.5 billion documents.
-
-The data was generated dynamically by using a custom JUnit request sampler that was added to a thread group in a JMeter test plan. The JUnit code was created by using the JUnit test case template in the Eclipse IDE.
-
-> [!NOTE]
-> For information on how to create a JUnit test for JMeter, see [Deploying a JMeter JUnit sampler for testing Elasticsearch performance][Deploying a JMeter JUnit sampler for testing Elasticsearch performance].
->
->
-
-The following snippet shows the Java code for testing Elasticsearch 1.7.3. Note that the JUnit test class in this example is named *ElasticsearchLoadTest2*:
-
-```java
-/* Java */
-package elasticsearchtest2;
-
- import static org.junit.Assert.*;
-
- import org.junit.*;
-
- import java.util.*;
-
- import java.io.*;
-
- import org.elasticsearch.action.bulk.*;
- import org.elasticsearch.common.transport.*;
- import org.elasticsearch.client.transport.*;
- import org.elasticsearch.common.settings.*;
- import org.elasticsearch.common.xcontent.*;
-
- public class ElasticsearchLoadTest2 {
-
- private String [] names={"checkout","order","search","payment"};
- private String [] messages={"Incoming request from code","incoming operation succeeded with code","Operation completed time","transaction performed"};
- private String [] severity={"info","warning","transaction","verbose"};
- private String [] apps={"4D24BD62-20BF-4D74-B6DC-31313ABADB82","5D24BD62-20BF-4D74-B6DC-31313ABADB82","6D24BD62-20BF-4D74-B6DC-31313ABADB82","7D24BD62-20BF-4D74-B6DC-31313ABADB82"};
-
- private String hostname = "";
- private String indexstr = "";
- private String typestr = "";
- private int port = 0;
- private int itemsPerInsert = 0;
- private String clustername = "";
- private static Random rand=new Random();
-
- @Before
- public void setUp() throws Exception {
- }
-
- public ElasticsearchLoadTest2(String paras) {
-        /* Paras is a string containing a set of comma separated values for:
-           hostname
-           indexstr
-           typestr
-           port
-           clustername
-           itemsPerInsert
-        */
-
- // Note: No checking/validation is performed
-
- String delims = "[ ]*,[ ]*"; // comma surrounded by zero or more spaces
- String[] items = paras.split(delims);
-
- hostname = items[0];
- indexstr = items[1];
- typestr = items[2];
- port = Integer.parseInt(items[3]);
- clustername = items[4];
- itemsPerInsert = Integer.parseInt(items[5]);
-
- if (itemsPerInsert == 0)
- itemsPerInsert = 1000;
- }
-
- @After
- public void tearDown() throws Exception {
- }
-
- @Test
- public void BulkBigInsertTest() throws IOException {
-
- Settings settings = ImmutableSettings.settingsBuilder().put("cluster.name", clustername).build();
-
- TransportClient client;
- client = new TransportClient(settings);
-
- try {
- client.addTransportAddress(new InetSocketTransportAddress(hostname, port));
- BulkRequestBuilder bulkRequest = client.prepareBulk();
- Random random = new Random();
- char[] exmarks = new char[12000];
- Arrays.fill(exmarks, 'x');
- String dataString = new String(exmarks);
-
-            // Generate exactly itemsPerInsert documents per batch
-            for (int i = 0; i < itemsPerInsert; i++) {
-                int host = random.nextInt(20);
-
- bulkRequest.add(client.prepareIndex(indexstr, typestr).setSource(XContentFactory.jsonBuilder().startObject()
- .field("@timestamp", new Date())
- .field("name", names[random.nextInt(names.length)])
- .field("message", messages[random.nextInt(messages.length)])
- .field("severityCode", random.nextInt(10))
- .field("severity", severity[random.nextInt(severity.length)])
- .field("hostname", "Hostname"+host)
- .field("hostip", "10.1.0."+host)
- .field("pid",random.nextInt(10))
- .field("tid",random.nextInt(10))
- .field("appId", apps[random.nextInt(apps.length)])
- .field("appName", "application" + host)
- .field("appVersion", random.nextInt(5))
- .field("type", random.nextInt(6))
- .field("subtype", random.nextInt(6))
- .field("correlationId", UUID.randomUUID().toString())
- .field("os", "linux")
- .field("osVersion", "14.1.5")
- .field("parameters", "{key:value,key:value}")
- .field("data1",dataString)
- .field("data2",dataString)
- .endObject()));
- }
-
- BulkResponse bulkResponse = bulkRequest.execute().actionGet();
- assertFalse(bulkResponse.hasFailures());
- }
- finally {
- client.close();
- }
- }
-
- @Test
- public void BulkDataInsertTest() throws IOException {
- Settings settings = ImmutableSettings.settingsBuilder().put("cluster.name", clustername).build();
-
- TransportClient client;
- client = new TransportClient(settings);
-
- try {
- client.addTransportAddress(new InetSocketTransportAddress(hostname, port));
- BulkRequestBuilder bulkRequest = client.prepareBulk();
-
-            // Generate exactly itemsPerInsert documents per batch
-            for (int i = 0; i < itemsPerInsert; i++) {
-                int host = rand.nextInt(20);
-
- bulkRequest.add(client.prepareIndex(indexstr, typestr).setSource(XContentFactory.jsonBuilder().startObject()
- .field("@timestamp", new Date())
- .field("name", names[rand.nextInt(names.length)])
- .field("message", messages[rand.nextInt(messages.length)])
- .field("severityCode", rand.nextInt(10))
- .field("severity", severity[rand.nextInt(severity.length)])
- .field("hostname", "Hostname" + host)
- .field("hostip", "10.1.0."+host)
- .field("pid",rand.nextInt(10))
- .field("tid",rand.nextInt(10))
- .field("appId", apps[rand.nextInt(apps.length)])
- .field("appName", "application"+host)
- .field("appVersion", rand.nextInt(5))
- .field("type", rand.nextInt(6))
- .field("subtype", rand.nextInt(6))
- .field("correlationId", UUID.randomUUID().toString())
- .field("os", "linux")
- .field("osVersion", "14.1.5")
- .field("parameters", "{key:value,key:value}")
- .endObject()));
- }
-
- BulkResponse bulkResponse = bulkRequest.execute().actionGet();
- assertFalse(bulkResponse.hasFailures());
- }
- finally {
- client.close();
- }
- }
- }
-```
-
-The private *String* arrays *names*, *messages*, *severity*, and *apps* contain a small set of values that items are selected from at random. The remaining data items for each document are generated at runtime.
-
-The constructor that takes the *String* parameter is invoked from JMeter, and the values passed in the string are specified as part of the JUnit Request sampler configuration. For this JUnit test, the *String* parameter is expected to contain the following information:
-
-* **Hostname**. This is the name or IP address of the Azure load balancer. The load balancer attempts to distribute the request across the data nodes in the cluster. If you are not using a load balancer you can specify the address of a node in the cluster, but all requests will be directed to that node and that might result in it becoming a bottleneck.
-* **Indexstr**. This is the name of the index that the data generated by the JUnit test is added to. If you created the index as described above, this value should be *systembase*.
-* **Typestr**. This is the type in the index where the data is stored. If you created the index as described above, this value should be *logs*.
-* **Port**. This is the port to connect to on the host. In most cases this should be set to 9300 (the port used by Elasticsearch to listen for Transport Client requests; port 9200 is used only for HTTP requests).
-* **Clustername**. This is the name of the Elasticsearch cluster containing the index.
-* **ItemsPerInsert**. This is a numeric parameter indicating the number of documents to add in each bulk insert batch. The default batch size is 1000.
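-
-For example, a constructor string for a 1000-document batch might look like the following (the host address and cluster name are illustrative):
-
-```
-10.0.0.4,systembase,logs,9300,elasticsearch,1000
-```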
-
-You specify the data for the constructor string in the JUnit Request page used to configure the JUnit sampler in JMeter. The following image shows an example:
-
-![JUnit Request UI](./images/data-ingestion-image22.png)
-
-The *BulkDataInsertTest* and *BulkBigInsertTest* methods perform the actual work of generating and uploading the data. Both methods are very similar. They connect to the Elasticsearch cluster and then create a batch of documents (as determined by the *ItemsPerInsert* constructor string parameter). The documents are added to the index using the Elasticsearch Bulk API. The difference between the two methods is that the *data1* and *data2* string fields in each document are omitted from the upload in the *BulkDataInsertTest* method, but are filled in with strings of 12,000 characters in the *BulkBigInsertTest* method. Note that you select which of these methods to run using the *Test Method* box in the JUnit Request page in JMeter (highlighted in the previous figure).
-
-> [!NOTE]
-> The sample code presented here uses the Elasticsearch 1.7.3 Transport Client library. If you are using Elasticsearch 2.0.0 or later, you must use the appropriate library for the version selected. For more information about the Elasticsearch 2.0.0 Transport Client library, see the [Transport Client](https://www.elastic.co/guide/en/elasticsearch/client/java-api/2.0/transport-client.html) page on the Elasticsearch website.
->
->
-
-[Configuring Resilience and Recovery on Elasticsearch on Azure]: resilience-and-recovery.md
-[Creating a Performance Testing Environment for Elasticsearch on Azure]: performance-testing-environment.md
-[Running the Automated Elasticsearch Performance Tests]: automated-performance-tests.md
-[Deploying a JMeter JUnit Sampler for Testing Elasticsearch Performance]: jmeter-junit-sampler.md
diff --git a/docs/elasticsearch/images/data-ingestion-image1.png b/docs/elasticsearch/images/data-ingestion-image1.png
deleted file mode 100644
index dd388728393..00000000000
Binary files a/docs/elasticsearch/images/data-ingestion-image1.png and /dev/null differ
diff --git a/docs/elasticsearch/images/data-ingestion-image10.png b/docs/elasticsearch/images/data-ingestion-image10.png
deleted file mode 100644
index f03e4d3dbff..00000000000
Binary files a/docs/elasticsearch/images/data-ingestion-image10.png and /dev/null differ
diff --git a/docs/elasticsearch/images/data-ingestion-image11.png b/docs/elasticsearch/images/data-ingestion-image11.png
deleted file mode 100644
index 4bc967fd39a..00000000000
Binary files a/docs/elasticsearch/images/data-ingestion-image11.png and /dev/null differ
diff --git a/docs/elasticsearch/images/data-ingestion-image12.png b/docs/elasticsearch/images/data-ingestion-image12.png
deleted file mode 100644
index fd2d7485a34..00000000000
Binary files a/docs/elasticsearch/images/data-ingestion-image12.png and /dev/null differ
diff --git a/docs/elasticsearch/images/data-ingestion-image13.png b/docs/elasticsearch/images/data-ingestion-image13.png
deleted file mode 100644
index 09baaba1c5b..00000000000
Binary files a/docs/elasticsearch/images/data-ingestion-image13.png and /dev/null differ
diff --git a/docs/elasticsearch/images/data-ingestion-image14.png b/docs/elasticsearch/images/data-ingestion-image14.png
deleted file mode 100644
index 467841fae82..00000000000
Binary files a/docs/elasticsearch/images/data-ingestion-image14.png and /dev/null differ
diff --git a/docs/elasticsearch/images/data-ingestion-image15.png b/docs/elasticsearch/images/data-ingestion-image15.png
deleted file mode 100644
index 7bc79e0ca07..00000000000
Binary files a/docs/elasticsearch/images/data-ingestion-image15.png and /dev/null differ
diff --git a/docs/elasticsearch/images/data-ingestion-image16.png b/docs/elasticsearch/images/data-ingestion-image16.png
deleted file mode 100644
index 1c3e07d7f01..00000000000
Binary files a/docs/elasticsearch/images/data-ingestion-image16.png and /dev/null differ
diff --git a/docs/elasticsearch/images/data-ingestion-image17.png b/docs/elasticsearch/images/data-ingestion-image17.png
deleted file mode 100644
index 156eb6c265f..00000000000
Binary files a/docs/elasticsearch/images/data-ingestion-image17.png and /dev/null differ
diff --git a/docs/elasticsearch/images/data-ingestion-image18.png b/docs/elasticsearch/images/data-ingestion-image18.png
deleted file mode 100644
index 4c7bd46c9cf..00000000000
Binary files a/docs/elasticsearch/images/data-ingestion-image18.png and /dev/null differ
diff --git a/docs/elasticsearch/images/data-ingestion-image19.png b/docs/elasticsearch/images/data-ingestion-image19.png
deleted file mode 100644
index a34da0318c1..00000000000
Binary files a/docs/elasticsearch/images/data-ingestion-image19.png and /dev/null differ
diff --git a/docs/elasticsearch/images/data-ingestion-image2.png b/docs/elasticsearch/images/data-ingestion-image2.png
deleted file mode 100644
index 817a46e81ed..00000000000
Binary files a/docs/elasticsearch/images/data-ingestion-image2.png and /dev/null differ
diff --git a/docs/elasticsearch/images/data-ingestion-image20.png b/docs/elasticsearch/images/data-ingestion-image20.png
deleted file mode 100644
index 820a7ebb037..00000000000
Binary files a/docs/elasticsearch/images/data-ingestion-image20.png and /dev/null differ
diff --git a/docs/elasticsearch/images/data-ingestion-image21.png b/docs/elasticsearch/images/data-ingestion-image21.png
deleted file mode 100644
index 640405c6ad7..00000000000
Binary files a/docs/elasticsearch/images/data-ingestion-image21.png and /dev/null differ
diff --git a/docs/elasticsearch/images/data-ingestion-image22.png b/docs/elasticsearch/images/data-ingestion-image22.png
deleted file mode 100644
index e0a7300d1a3..00000000000
Binary files a/docs/elasticsearch/images/data-ingestion-image22.png and /dev/null differ
diff --git a/docs/elasticsearch/images/data-ingestion-image3.png b/docs/elasticsearch/images/data-ingestion-image3.png
deleted file mode 100644
index 0444b19af23..00000000000
Binary files a/docs/elasticsearch/images/data-ingestion-image3.png and /dev/null differ
diff --git a/docs/elasticsearch/images/data-ingestion-image4.png b/docs/elasticsearch/images/data-ingestion-image4.png
deleted file mode 100644
index 19e196733ab..00000000000
Binary files a/docs/elasticsearch/images/data-ingestion-image4.png and /dev/null differ
diff --git a/docs/elasticsearch/images/data-ingestion-image5.png b/docs/elasticsearch/images/data-ingestion-image5.png
deleted file mode 100644
index caf4e335757..00000000000
Binary files a/docs/elasticsearch/images/data-ingestion-image5.png and /dev/null differ
diff --git a/docs/elasticsearch/images/data-ingestion-image6.png b/docs/elasticsearch/images/data-ingestion-image6.png
deleted file mode 100644
index f5d87b753d2..00000000000
Binary files a/docs/elasticsearch/images/data-ingestion-image6.png and /dev/null differ
diff --git a/docs/elasticsearch/images/data-ingestion-image7.png b/docs/elasticsearch/images/data-ingestion-image7.png
deleted file mode 100644
index c47bffc7032..00000000000
Binary files a/docs/elasticsearch/images/data-ingestion-image7.png and /dev/null differ
diff --git a/docs/elasticsearch/images/data-ingestion-image8.png b/docs/elasticsearch/images/data-ingestion-image8.png
deleted file mode 100644
index f724b1af157..00000000000
Binary files a/docs/elasticsearch/images/data-ingestion-image8.png and /dev/null differ
diff --git a/docs/elasticsearch/images/data-ingestion-image9.png b/docs/elasticsearch/images/data-ingestion-image9.png
deleted file mode 100644
index 4a4bf3bef51..00000000000
Binary files a/docs/elasticsearch/images/data-ingestion-image9.png and /dev/null differ
diff --git a/docs/elasticsearch/images/general-clientappinstances.png b/docs/elasticsearch/images/general-clientappinstances.png
deleted file mode 100644
index 4d03f6f21c2..00000000000
Binary files a/docs/elasticsearch/images/general-clientappinstances.png and /dev/null differ
diff --git a/docs/elasticsearch/images/general-cluster1.png b/docs/elasticsearch/images/general-cluster1.png
deleted file mode 100644
index 6767a3a2b4e..00000000000
Binary files a/docs/elasticsearch/images/general-cluster1.png and /dev/null differ
diff --git a/docs/elasticsearch/images/general-cluster2.png b/docs/elasticsearch/images/general-cluster2.png
deleted file mode 100644
index 0d986bedfb5..00000000000
Binary files a/docs/elasticsearch/images/general-cluster2.png and /dev/null differ
diff --git a/docs/elasticsearch/images/general-developmentconfiguration.png b/docs/elasticsearch/images/general-developmentconfiguration.png
deleted file mode 100644
index bd3b2ca539c..00000000000
Binary files a/docs/elasticsearch/images/general-developmentconfiguration.png and /dev/null differ
diff --git a/docs/elasticsearch/images/general-startingpoint.png b/docs/elasticsearch/images/general-startingpoint.png
deleted file mode 100644
index 81494100d79..00000000000
Binary files a/docs/elasticsearch/images/general-startingpoint.png and /dev/null differ
diff --git a/docs/elasticsearch/images/general-threenodecluster.png b/docs/elasticsearch/images/general-threenodecluster.png
deleted file mode 100644
index 226bd0215c9..00000000000
Binary files a/docs/elasticsearch/images/general-threenodecluster.png and /dev/null differ
diff --git a/docs/elasticsearch/images/general-tribenode.png b/docs/elasticsearch/images/general-tribenode.png
deleted file mode 100644
index 7cdf3cc3428..00000000000
Binary files a/docs/elasticsearch/images/general-tribenode.png and /dev/null differ
diff --git a/docs/elasticsearch/images/jmeter-deploy10.png b/docs/elasticsearch/images/jmeter-deploy10.png
deleted file mode 100644
index 8c724172770..00000000000
Binary files a/docs/elasticsearch/images/jmeter-deploy10.png and /dev/null differ
diff --git a/docs/elasticsearch/images/jmeter-deploy11.png b/docs/elasticsearch/images/jmeter-deploy11.png
deleted file mode 100644
index b02d566dfb9..00000000000
Binary files a/docs/elasticsearch/images/jmeter-deploy11.png and /dev/null differ
diff --git a/docs/elasticsearch/images/jmeter-deploy12.png b/docs/elasticsearch/images/jmeter-deploy12.png
deleted file mode 100644
index cefba123422..00000000000
Binary files a/docs/elasticsearch/images/jmeter-deploy12.png and /dev/null differ
diff --git a/docs/elasticsearch/images/jmeter-deploy13.png b/docs/elasticsearch/images/jmeter-deploy13.png
deleted file mode 100644
index 5d7494105b1..00000000000
Binary files a/docs/elasticsearch/images/jmeter-deploy13.png and /dev/null differ
diff --git a/docs/elasticsearch/images/jmeter-deploy14.png b/docs/elasticsearch/images/jmeter-deploy14.png
deleted file mode 100644
index a8da78815d0..00000000000
Binary files a/docs/elasticsearch/images/jmeter-deploy14.png and /dev/null differ
diff --git a/docs/elasticsearch/images/jmeter-deploy15.png b/docs/elasticsearch/images/jmeter-deploy15.png
deleted file mode 100644
index 4cf73bf5f0b..00000000000
Binary files a/docs/elasticsearch/images/jmeter-deploy15.png and /dev/null differ
diff --git a/docs/elasticsearch/images/jmeter-deploy16.png b/docs/elasticsearch/images/jmeter-deploy16.png
deleted file mode 100644
index 7ae40cf588f..00000000000
Binary files a/docs/elasticsearch/images/jmeter-deploy16.png and /dev/null differ
diff --git a/docs/elasticsearch/images/jmeter-deploy17.png b/docs/elasticsearch/images/jmeter-deploy17.png
deleted file mode 100644
index 34698852f42..00000000000
Binary files a/docs/elasticsearch/images/jmeter-deploy17.png and /dev/null differ
diff --git a/docs/elasticsearch/images/jmeter-deploy18.png b/docs/elasticsearch/images/jmeter-deploy18.png
deleted file mode 100644
index 1f3698acbae..00000000000
Binary files a/docs/elasticsearch/images/jmeter-deploy18.png and /dev/null differ
diff --git a/docs/elasticsearch/images/jmeter-deploy19.png b/docs/elasticsearch/images/jmeter-deploy19.png
deleted file mode 100644
index 71faf595f60..00000000000
Binary files a/docs/elasticsearch/images/jmeter-deploy19.png and /dev/null differ
diff --git a/docs/elasticsearch/images/jmeter-deploy20.png b/docs/elasticsearch/images/jmeter-deploy20.png
deleted file mode 100644
index 826dfb60134..00000000000
Binary files a/docs/elasticsearch/images/jmeter-deploy20.png and /dev/null differ
diff --git a/docs/elasticsearch/images/jmeter-deploy21.png b/docs/elasticsearch/images/jmeter-deploy21.png
deleted file mode 100644
index 75688517a5f..00000000000
Binary files a/docs/elasticsearch/images/jmeter-deploy21.png and /dev/null differ
diff --git a/docs/elasticsearch/images/jmeter-deploy22.png b/docs/elasticsearch/images/jmeter-deploy22.png
deleted file mode 100644
index acb45f53de5..00000000000
Binary files a/docs/elasticsearch/images/jmeter-deploy22.png and /dev/null differ
diff --git a/docs/elasticsearch/images/jmeter-deploy23.png b/docs/elasticsearch/images/jmeter-deploy23.png
deleted file mode 100644
index 49b7d1bea91..00000000000
Binary files a/docs/elasticsearch/images/jmeter-deploy23.png and /dev/null differ
diff --git a/docs/elasticsearch/images/jmeter-deploy24.png b/docs/elasticsearch/images/jmeter-deploy24.png
deleted file mode 100644
index 0375bb5d522..00000000000
Binary files a/docs/elasticsearch/images/jmeter-deploy24.png and /dev/null differ
diff --git a/docs/elasticsearch/images/jmeter-deploy25.png b/docs/elasticsearch/images/jmeter-deploy25.png
deleted file mode 100644
index 62a72829029..00000000000
Binary files a/docs/elasticsearch/images/jmeter-deploy25.png and /dev/null differ
diff --git a/docs/elasticsearch/images/jmeter-deploy26.png b/docs/elasticsearch/images/jmeter-deploy26.png
deleted file mode 100644
index f06e99fa12f..00000000000
Binary files a/docs/elasticsearch/images/jmeter-deploy26.png and /dev/null differ
diff --git a/docs/elasticsearch/images/jmeter-deploy27.png b/docs/elasticsearch/images/jmeter-deploy27.png
deleted file mode 100644
index 5e3a618446d..00000000000
Binary files a/docs/elasticsearch/images/jmeter-deploy27.png and /dev/null differ
diff --git a/docs/elasticsearch/images/jmeter-deploy28.png b/docs/elasticsearch/images/jmeter-deploy28.png
deleted file mode 100644
index d4c2218562d..00000000000
Binary files a/docs/elasticsearch/images/jmeter-deploy28.png and /dev/null differ
diff --git a/docs/elasticsearch/images/jmeter-deploy29.png b/docs/elasticsearch/images/jmeter-deploy29.png
deleted file mode 100644
index 3b0ebbafccc..00000000000
Binary files a/docs/elasticsearch/images/jmeter-deploy29.png and /dev/null differ
diff --git a/docs/elasticsearch/images/jmeter-deploy30.png b/docs/elasticsearch/images/jmeter-deploy30.png
deleted file mode 100644
index fe6f7b608ac..00000000000
Binary files a/docs/elasticsearch/images/jmeter-deploy30.png and /dev/null differ
diff --git a/docs/elasticsearch/images/jmeter-deploy31.png b/docs/elasticsearch/images/jmeter-deploy31.png
deleted file mode 100644
index 2567bb13f67..00000000000
Binary files a/docs/elasticsearch/images/jmeter-deploy31.png and /dev/null differ
diff --git a/docs/elasticsearch/images/jmeter-deploy32.png b/docs/elasticsearch/images/jmeter-deploy32.png
deleted file mode 100644
index ec9664db439..00000000000
Binary files a/docs/elasticsearch/images/jmeter-deploy32.png and /dev/null differ
diff --git a/docs/elasticsearch/images/jmeter-deploy7.png b/docs/elasticsearch/images/jmeter-deploy7.png
deleted file mode 100644
index 2cde426b7b0..00000000000
Binary files a/docs/elasticsearch/images/jmeter-deploy7.png and /dev/null differ
diff --git a/docs/elasticsearch/images/jmeter-deploy8.png b/docs/elasticsearch/images/jmeter-deploy8.png
deleted file mode 100644
index 4c5b2820bfc..00000000000
Binary files a/docs/elasticsearch/images/jmeter-deploy8.png and /dev/null differ
diff --git a/docs/elasticsearch/images/jmeter-deploy9.png b/docs/elasticsearch/images/jmeter-deploy9.png
deleted file mode 100644
index 47ced30e982..00000000000
Binary files a/docs/elasticsearch/images/jmeter-deploy9.png and /dev/null differ
diff --git a/docs/elasticsearch/images/jmeter-testing1.png b/docs/elasticsearch/images/jmeter-testing1.png
deleted file mode 100644
index 759692370a0..00000000000
Binary files a/docs/elasticsearch/images/jmeter-testing1.png and /dev/null differ
diff --git a/docs/elasticsearch/images/jmeter-testing2.png b/docs/elasticsearch/images/jmeter-testing2.png
deleted file mode 100644
index 99b69049228..00000000000
Binary files a/docs/elasticsearch/images/jmeter-testing2.png and /dev/null differ
diff --git a/docs/elasticsearch/images/jmeter-testing3.png b/docs/elasticsearch/images/jmeter-testing3.png
deleted file mode 100644
index 3c457ff4484..00000000000
Binary files a/docs/elasticsearch/images/jmeter-testing3.png and /dev/null differ
diff --git a/docs/elasticsearch/images/jmeter-testing4.png b/docs/elasticsearch/images/jmeter-testing4.png
deleted file mode 100644
index 4c7e067723a..00000000000
Binary files a/docs/elasticsearch/images/jmeter-testing4.png and /dev/null differ
diff --git a/docs/elasticsearch/images/performance-image17.png b/docs/elasticsearch/images/performance-image17.png
deleted file mode 100644
index 889bb597f1e..00000000000
Binary files a/docs/elasticsearch/images/performance-image17.png and /dev/null differ
diff --git a/docs/elasticsearch/images/performance-image18.png b/docs/elasticsearch/images/performance-image18.png
deleted file mode 100644
index bc7f3c7b3f1..00000000000
Binary files a/docs/elasticsearch/images/performance-image18.png and /dev/null differ
diff --git a/docs/elasticsearch/images/performance-image19.png b/docs/elasticsearch/images/performance-image19.png
deleted file mode 100644
index c31c56fe643..00000000000
Binary files a/docs/elasticsearch/images/performance-image19.png and /dev/null differ
diff --git a/docs/elasticsearch/images/performance-image20.png b/docs/elasticsearch/images/performance-image20.png
deleted file mode 100644
index ec1b24694cf..00000000000
Binary files a/docs/elasticsearch/images/performance-image20.png and /dev/null differ
diff --git a/docs/elasticsearch/images/performance-image21.png b/docs/elasticsearch/images/performance-image21.png
deleted file mode 100644
index 271e9a7da8c..00000000000
Binary files a/docs/elasticsearch/images/performance-image21.png and /dev/null differ
diff --git a/docs/elasticsearch/images/performance-image22.png b/docs/elasticsearch/images/performance-image22.png
deleted file mode 100644
index 3bc65d29016..00000000000
Binary files a/docs/elasticsearch/images/performance-image22.png and /dev/null differ
diff --git a/docs/elasticsearch/images/performance-structure.png b/docs/elasticsearch/images/performance-structure.png
deleted file mode 100644
index c0f1c94fc11..00000000000
Binary files a/docs/elasticsearch/images/performance-structure.png and /dev/null differ
diff --git a/docs/elasticsearch/images/performance-telnet-server.png b/docs/elasticsearch/images/performance-telnet-server.png
deleted file mode 100644
index 8d98aa0326d..00000000000
Binary files a/docs/elasticsearch/images/performance-telnet-server.png and /dev/null differ
diff --git a/docs/elasticsearch/images/performance-tests-image1.png b/docs/elasticsearch/images/performance-tests-image1.png
deleted file mode 100644
index 57bb8332c95..00000000000
Binary files a/docs/elasticsearch/images/performance-tests-image1.png and /dev/null differ
diff --git a/docs/elasticsearch/images/query-performance1.png b/docs/elasticsearch/images/query-performance1.png
deleted file mode 100644
index 60f783e391d..00000000000
Binary files a/docs/elasticsearch/images/query-performance1.png and /dev/null differ
diff --git a/docs/elasticsearch/images/query-performance10.png b/docs/elasticsearch/images/query-performance10.png
deleted file mode 100644
index 93ab6ab90c6..00000000000
Binary files a/docs/elasticsearch/images/query-performance10.png and /dev/null differ
diff --git a/docs/elasticsearch/images/query-performance11.png b/docs/elasticsearch/images/query-performance11.png
deleted file mode 100644
index 98724f20998..00000000000
Binary files a/docs/elasticsearch/images/query-performance11.png and /dev/null differ
diff --git a/docs/elasticsearch/images/query-performance12.png b/docs/elasticsearch/images/query-performance12.png
deleted file mode 100644
index 3ee940889e8..00000000000
Binary files a/docs/elasticsearch/images/query-performance12.png and /dev/null differ
diff --git a/docs/elasticsearch/images/query-performance13.png b/docs/elasticsearch/images/query-performance13.png
deleted file mode 100644
index 91cca55a7f1..00000000000
Binary files a/docs/elasticsearch/images/query-performance13.png and /dev/null differ
diff --git a/docs/elasticsearch/images/query-performance14.png b/docs/elasticsearch/images/query-performance14.png
deleted file mode 100644
index f3988e0a08a..00000000000
Binary files a/docs/elasticsearch/images/query-performance14.png and /dev/null differ
diff --git a/docs/elasticsearch/images/query-performance15.png b/docs/elasticsearch/images/query-performance15.png
deleted file mode 100644
index abe7126390f..00000000000
Binary files a/docs/elasticsearch/images/query-performance15.png and /dev/null differ
diff --git a/docs/elasticsearch/images/query-performance16.png b/docs/elasticsearch/images/query-performance16.png
deleted file mode 100644
index 6f2692c28a8..00000000000
Binary files a/docs/elasticsearch/images/query-performance16.png and /dev/null differ
diff --git a/docs/elasticsearch/images/query-performance17.png b/docs/elasticsearch/images/query-performance17.png
deleted file mode 100644
index ac4150927ad..00000000000
Binary files a/docs/elasticsearch/images/query-performance17.png and /dev/null differ
diff --git a/docs/elasticsearch/images/query-performance18.png b/docs/elasticsearch/images/query-performance18.png
deleted file mode 100644
index 551ab2ef67e..00000000000
Binary files a/docs/elasticsearch/images/query-performance18.png and /dev/null differ
diff --git a/docs/elasticsearch/images/query-performance19.png b/docs/elasticsearch/images/query-performance19.png
deleted file mode 100644
index 4c1a6123143..00000000000
Binary files a/docs/elasticsearch/images/query-performance19.png and /dev/null differ
diff --git a/docs/elasticsearch/images/query-performance2.png b/docs/elasticsearch/images/query-performance2.png
deleted file mode 100644
index abd44bbe47b..00000000000
Binary files a/docs/elasticsearch/images/query-performance2.png and /dev/null differ
diff --git a/docs/elasticsearch/images/query-performance3.png b/docs/elasticsearch/images/query-performance3.png
deleted file mode 100644
index a4be04dc35c..00000000000
Binary files a/docs/elasticsearch/images/query-performance3.png and /dev/null differ
diff --git a/docs/elasticsearch/images/query-performance4.png b/docs/elasticsearch/images/query-performance4.png
deleted file mode 100644
index 98724f20998..00000000000
Binary files a/docs/elasticsearch/images/query-performance4.png and /dev/null differ
diff --git a/docs/elasticsearch/images/query-performance5.png b/docs/elasticsearch/images/query-performance5.png
deleted file mode 100644
index 3ee940889e8..00000000000
Binary files a/docs/elasticsearch/images/query-performance5.png and /dev/null differ
diff --git a/docs/elasticsearch/images/query-performance6.png b/docs/elasticsearch/images/query-performance6.png
deleted file mode 100644
index f0a2ec6628a..00000000000
Binary files a/docs/elasticsearch/images/query-performance6.png and /dev/null differ
diff --git a/docs/elasticsearch/images/query-performance7.png b/docs/elasticsearch/images/query-performance7.png
deleted file mode 100644
index 3ead02d59c1..00000000000
Binary files a/docs/elasticsearch/images/query-performance7.png and /dev/null differ
diff --git a/docs/elasticsearch/images/query-performance8.png b/docs/elasticsearch/images/query-performance8.png
deleted file mode 100644
index 12814041e36..00000000000
Binary files a/docs/elasticsearch/images/query-performance8.png and /dev/null differ
diff --git a/docs/elasticsearch/images/query-performance9.png b/docs/elasticsearch/images/query-performance9.png
deleted file mode 100644
index a1f2c42bd45..00000000000
Binary files a/docs/elasticsearch/images/query-performance9.png and /dev/null differ
diff --git a/docs/elasticsearch/images/resilience-testing1.png b/docs/elasticsearch/images/resilience-testing1.png
deleted file mode 100644
index 782674263ef..00000000000
Binary files a/docs/elasticsearch/images/resilience-testing1.png and /dev/null differ
diff --git a/docs/elasticsearch/images/resilience-testing2.png b/docs/elasticsearch/images/resilience-testing2.png
deleted file mode 100644
index 22cdb05b5a0..00000000000
Binary files a/docs/elasticsearch/images/resilience-testing2.png and /dev/null differ
diff --git a/docs/elasticsearch/index.md b/docs/elasticsearch/index.md
deleted file mode 100644
index 9547bc236f6..00000000000
--- a/docs/elasticsearch/index.md
+++ /dev/null
@@ -1,887 +0,0 @@
----
-title: Get started with Elasticsearch on Azure
-description: 'How to install, configure, and run Elasticsearch on Azure.'
-author: dragon119
-ms.date: 09/22/2016
-ms.custom: elasticsearch
-
-pnp.series.title: Elasticsearch on Azure
-pnp.series.next: data-ingestion-performance
----
-# Run Elasticsearch on Azure
-
-This article describes how to implement an Elasticsearch cluster using Azure. It touches on best
-practices for deploying an Elasticsearch cluster, concentrating on the functional, performance,
-and management requirements of your system, and considering how those requirements should drive the
-configuration and topology that you select.
-
-> [!NOTE]
-> This guidance assumes some basic familiarity with [Elasticsearch][Elasticsearch].
->
->
-
-## The structure of Elasticsearch
-Elasticsearch is a document database highly optimized to act as a search engine. Documents are
-serialized in JSON format. Data is held in indexes, implemented by using [Apache Lucene][Apache Lucene], although
-the details are abstracted from view and it is not necessary to fully understand Lucene in order to use
-Elasticsearch.
-
-### Clusters, nodes, indexes, and shards
-Elasticsearch implements a clustered architecture that uses sharding to distribute data across
-multiple nodes, and replication to provide high availability. Documents are stored in indexes. The user can specify which fields in a document are used to uniquely identify it within an index, or the system can generate a key field and values automatically. The index is used to physically organize documents and is the principal means for locating documents.
-
-An index contains a set of shards. Documents are evenly dispersed across shards using a hashing
-mechanism based on the index key values and the number of shards in the index.
-
-Indexes can be replicated. In this case each shard in the index is copied. Elasticsearch ensures that
-each original shard for an index (referred to as a “primary shard”) and its replica always reside on
-different nodes. When a document is added or modified, all write operations are performed on the primary shard first and
-then on each replica.
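-
-As a sketch, an index with two primary shards and two replicas per shard could be declared
-with the following settings (the values are illustrative and would apply as defaults for
-newly created indexes):
-
-```yaml
-# Illustrative defaults for new indexes: two primary shards,
-# each copied to two replica shards (six shards in all)
-index.number_of_shards: 2
-index.number_of_replicas: 2
-```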
-
-The figure below shows the essential aspects of an Elasticsearch cluster containing three nodes. An
-index has been created that consists of two primary shards with two replicas for each shard (six shards
-in all).
-
-![Three-node cluster](./images/general-cluster1.png)
-
-*A simple Elasticsearch cluster containing two primary shards and two sets of replicas*
-
-In this cluster, primary shard 1 and primary shard 2 are located on separate nodes to help balance the
-load across them. The replicas are similarly distributed. If a single node fails, the remaining nodes
-have sufficient information to enable the system to continue functioning. If necessary, Elasticsearch
-will promote a replica shard to become a primary shard if the corresponding primary shard is unavailable.
-
-### Node roles
-The nodes in an Elasticsearch cluster can perform the following roles:
-
-* A **data node** that can hold one or more shards containing index data.
-* A **client node** that does not hold index data, but directs incoming requests made by client
- applications to the appropriate data node.
-* A **master node** that does not hold index data but that performs cluster management operations, such
- as maintaining and distributing routing information around the cluster (the list of which nodes contain
- which shards), determining which nodes are available, relocating shards as nodes appear and disappear,
- and coordinating recovery after node failure. Multiple nodes can be configured as masters, but only
- one will actually be elected to perform the master functions. If this node fails, another election
- takes place and one of the other eligible master nodes will be elected and take over.
-
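-These roles map to settings in each node's elasticsearch.yml file. As a sketch, following the
-Elasticsearch 2.x configuration convention:
-
-```yaml
-# Dedicated master node: eligible for election, holds no index data
-node.master: true
-node.data: false
-
-# Data node: holds shards, not eligible to become master
-# node.master: false
-# node.data: true
-
-# Client node: neither master-eligible nor a data holder
-# node.master: false
-# node.data: false
-```
-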
-> [!NOTE]
-> The elected master node is critical to the well-being of the cluster. The other nodes
-> ping it regularly to ensure that it is still available. If the elected master node is also acting as
-> a data node, there is a chance that the node can become busy and fail to respond to these pings. In
-> this situation, the master is deemed to have failed and one of the other master nodes is elected in its
-> place.
->
->
-
-The figure below shows a topology containing a mixture of dedicated master, client, and data nodes in an Elasticsearch cluster.
-
-![Cluster with dedicated master, client, and data nodes](./images/general-cluster2.png)
-
-*An Elasticsearch cluster showing different types of nodes*
-
-### Costs and benefits of using client nodes
-When an application submits a query to an Elasticsearch cluster, the node to which the application
-connects is responsible for directing the query process. The node forwards the request to each data node
-and gathers the results, returning the accumulated information to the application. If a query involves
-aggregations and other computations, the node to which the application connects performs the necessary
-operations after retrieving the data from each of the other nodes. This scatter/gather process can
-consume considerable processing and memory resources.
-
-Using dedicated client nodes to perform these tasks allows data nodes to focus on managing and storing
-data. The result is that many scenarios that involve complex queries and aggregations can benefit from
-using dedicated client nodes. However, the impact of using dedicated client nodes will likely vary
-depending on your scenario, workload, and cluster size.
-
-> [!NOTE]
-> Refer to [Tuning Data Aggregation and Query Performance for Elasticsearch on Azure][Tuning Data Aggregation and Query Performance for Elasticsearch on Azure] for more information on the tuning process.
->
->
-
-### Connecting to a cluster
-Elasticsearch exposes a series of REST APIs for building client applications and sending requests to a
-cluster. If you are developing applications using the .NET Framework, two higher-level APIs are
-available: [Elasticsearch.Net & NEST][Elasticsearch.Net & NEST].
-
-If you are building client applications using Java, you can use the [Node Client API][Node Client API] to create client
-nodes dynamically and add them to the cluster. Creating client nodes dynamically is convenient if your
-system uses a relatively small number of long-lived connections. Client nodes created by using the Node
-API are provided with the cluster routing map (the details of which nodes contain which shards) by the
-master node. This information enables the Java application to connect directly to the appropriate nodes
-when indexing or querying data, reducing the number of hops that may be necessary when using other APIs.
-
-The cost of this approach is the overhead of enrolling the client node into the cluster. If a large
-number of client nodes appear and disappear quickly, the impact of maintaining and distributing the
-cluster routing map can become significant.
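-
-Regardless of the client library, requests ultimately travel over the REST API, which listens
-on port 9200 by default. As a sketch, a cluster health request sent to one of the nodes might
-look like the following (the IP address is a placeholder for one of your own nodes):
-
-```sh
-# Query overall cluster health from any reachable node
-curl -XGET 'http://10.0.0.10:9200/_cluster/health?pretty'
-```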
-
-The figure below shows a configuration that uses a load balancer to route requests to a set of client nodes, although the same strategy can be used to connect directly to data nodes if client nodes are not used.
-
-![Cluster with load balancer](./images/general-clientappinstances.png)
-
-*Client application instances connecting to an Elasticsearch cluster through the Azure Load Balancer*
-
-> [!NOTE]
-> You can use the [Azure Load Balancer][Azure Load Balancer] to expose the cluster to the public Internet, or
-> you can use an [internal load balancer][internal load balancer] if the client applications and cluster are contained entirely
-> within the same private virtual network (VNet).
->
->
-
-### Node discovery
-Elasticsearch is based on peer-to-peer communications, so discovering other nodes in a cluster is an
-important part of the lifecycle of a node. Node discovery enables new data nodes to be added dynamically
-to a cluster, which in turn allows the cluster to scale out transparently. Additionally, if a data node
-fails to respond to communications requests from other nodes, a master node can decide that the data node
-has failed and take the necessary steps to reallocate the shards that it was holding to other operational
-data nodes.
-
-Elasticsearch node discovery is handled by a discovery module, a pluggable component
-that can be swapped for a different discovery mechanism. The default discovery module ([Zen][Zen])
-causes a node to issue ping requests to find other nodes on the same network. If other nodes respond,
-they gossip to exchange information. A master node can then distribute shards to the new node (if it is a
-data node) and rebalance the cluster. The Zen discovery module also handles the master election process and the protocol for detecting node failure.
-
-Additionally, if you are running Elasticsearch nodes as Azure virtual
-machines (VMs), multicast messaging is not supported. For this reason, you should configure Zen
-discovery to use unicast messaging and provide a list of valid contact nodes in the elasticsearch.yml
-configuration file.
-
-If you are hosting an Elasticsearch cluster within an Azure virtual network, you can specify that the private
-DHCP-assigned IP address given to each VM in the cluster should remain allocated (static). You can
-configure Zen discovery unicast messaging using these static IP addresses. If you are using VMs with
-dynamic IP addresses, keep in mind that if a VM stops and restarts it could be allocated a new IP address
-making discovery more difficult. To handle this scenario, you can swap the Zen discovery module for the
-[Azure Cloud Plugin][Azure Cloud Plugin]. This plugin uses the Azure API to implement the discovery mechanism, which is
-based on Azure subscription information.
-
-> [!NOTE]
-> The current version of the Azure Cloud Plugin requires you to install the management
-> certificate for your Azure subscription in the Java keystore on the Elasticsearch node, and provide the
-> location and credentials for accessing the keystore in the elasticsearch.yml file. This file is held in
-> clear text, so it is vitally important that you ensure this file is only accessible by the account
-> running the Elasticsearch service.
->
-> Additionally, this approach may not be compatible with Azure Resource Manager deployments. For
-> these reasons, it is recommended that you use static IP addresses for master nodes, and use these nodes
-> to implement Zen discovery unicast messaging across the cluster. In the following configuration
-> (taken from the elasticsearch.yml file for a sample data node), the host IP addresses reference
-> master nodes in the cluster:
->
->
-
-```yaml
-discovery.zen.ping.multicast.enabled: false
-discovery.zen.ping.unicast.hosts: ["10.0.0.10","10.0.0.11","10.0.0.12"]
-```
-
-## General system guidelines
-Elasticsearch can run on a variety of computers, ranging from a single laptop to a cluster of high-end
-servers. However, the more resources in terms of memory, computing power, and fast disks that are
-available the better the performance. The following sections summarize the basic hardware and software
-requirements for running Elasticsearch.
-
-### Memory requirements
-Elasticsearch attempts to store data in memory for speed. A production server hosting a node for a
-typical enterprise or moderate-sized commercial deployment on Azure should have between 14GB and 28GB of
-RAM (D3 or D4 VMs). **Spread the load across more nodes rather than creating nodes with more memory.**
-Experiments have shown that using larger nodes with more memory can cause extended recovery times in the
-event of a failure. However, although creating clusters with a very large number of small nodes can
-increase availability and throughput, it also escalates the effort involved in managing and maintaining
-such a system.
-
-**Allocate 50% of the available memory on a server to the Elasticsearch heap**. If you are using Linux, set
-the ES_HEAP_SIZE environment variable before running Elasticsearch. Alternatively, on either
-Windows or Linux, you can specify memory size in the `Xmx` and `Xms` parameters when you start
-Elasticsearch. Set both of these parameters to the same value to avoid the Java Virtual Machine (JVM)
-resizing the heap at runtime. However, **do not allocate more than 30GB**. Use the remaining memory for the operating system file cache.
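-
-For example, on a D3 VM with 14GB of RAM, the guidance above suggests a 7GB heap. A sketch of
-the two approaches (the heap value is illustrative):
-
-```sh
-# Option 1: environment variable read by the Elasticsearch startup scripts
-export ES_HEAP_SIZE=7g
-
-# Option 2: pass the JVM flags directly, keeping Xms and Xmx equal
-./bin/elasticsearch -Xms7g -Xmx7g
-```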
-
-> [!NOTE]
-> Elasticsearch utilizes the Lucene library to create and manage indexes. Lucene structures
-> use a disk-based format, and caching these structures in the file system cache will greatly enhance
-> performance.
->
->
-
-Note that the maximum optimal heap size for Java on a 64 bit machine is just above 30GB. Above this size
-Java switches to using an extended mechanism for referencing objects on the heap, which increases the
-memory requirements for each object and reduces performance.
-
-The default Java garbage collector (Concurrent Mark and Sweep) may also perform sub-optimally if the heap
-size is above 30GB. It is not currently recommended to switch to a different garbage collector as
-Elasticsearch and Lucene have only been tested against the default.
-
-Do not overcommit memory as swapping main memory to disk will severely impact performance. If possible,
-disable swapping completely (the details depend on the operating system). If this is not possible then
-enable the *mlockall* setting in the Elasticsearch configuration file (elasticsearch.yml) as follows:
-
-```yaml
-bootstrap.mlockall: true
-```
-
-This configuration setting causes the JVM to lock its memory and prevents it being swapped out by the operating system.
-
-### Disk and file system requirements
-Use data disks backed by premium storage for storing shards. Disks should be sized to hold the maximum
-amount of data anticipated in your shards, although it is possible to add further disks later. You can
-extend a shard across multiple disks on a node.
-
-> [!NOTE]
-> Elasticsearch compresses the data for stored fields by using the LZ4 algorithm, and in
-> Elasticsearch 2.0 onwards you can change the compression type. You can switch the compression algorithm
-> to DEFLATE as used by the *zip* and *gzip* utilities. This compression technique can be more resource
-> intensive, but you should consider using it for archived log data. This approach
-> can help to reduce index size.
->
->
-
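-In Elasticsearch 2.0 onwards, the compression choice described in the note above is a per-index
-setting. As a sketch (the setting must be supplied at index creation time or while the index is
-closed):
-
-```yaml
-# Trade extra CPU for smaller indexes, e.g. on archived log data
-index.codec: best_compression
-```
-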
-It is not essential that all nodes in a cluster have the same disk layout and capacity. However, a node
-with a very large disk capacity compared to other nodes in a cluster will attract more data and will
-require increased processing power to handle this data. Consequently the node can become "hot" compared
-to other nodes, and this can, in turn, affect performance.
-
-If possible, use RAID 0 (striping). Other forms of RAID that implement parity and mirroring are
-unnecessary as Elasticsearch provides its own high availability solution in the form of replicas.
-
-> [!NOTE]
-> Prior to Elasticsearch 2.0.0, you could also implement striping at the software level by
-> specifying multiple directories in the *path.data* configuration setting. In Elasticsearch 2.0.0, this
-> form of striping is no longer supported. Instead, different shards may be allocated to different paths,
-> but all of the files in a single shard will be written to the same path. If you require striping, you
-> should stripe data at the operating system or hardware level.
->
->
-
-To maximize storage throughput, each **VM should have a dedicated premium storage account**.
-
-The Lucene library can use a large number of files to store index data, and Elasticsearch can open a
-significant number of sockets for communicating between nodes and with clients. Make sure that the
-operating system is configured to support an adequate number of open file descriptors (up to 64000 if
-sufficient memory is available). Note that the default configuration for many Linux distributions limits
-the number of open file descriptors to 1024, which is much too small.
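-
-On Linux, one common way to raise this limit is an entry in /etc/security/limits.conf for the
-account running Elasticsearch (the account name below is an assumption):
-
-```
-# /etc/security/limits.conf
-elasticsearch  -  nofile  64000
-```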
-
-Elasticsearch uses a combination of memory mapped (mmap) I/O and Java New I/O (NIO) to optimize
-concurrent access to data files and indexes. If you are using Linux, you should configure the operating
-system to ensure that there is sufficient virtual memory available with space for 256K memory map areas.
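-
-The 256K figure above corresponds to the following kernel setting, which can be applied with
-`sysctl -w` or persisted in /etc/sysctl.conf:
-
-```
-# 256K = 262144 memory map areas per process
-vm.max_map_count = 262144
-```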
-
-> [!NOTE]
-> Many Linux distributions default to using the completely fair queuing (CFQ) scheduler when
-> arranging to write data to disk. This scheduler is not optimized for SSDs. Consider reconfiguring the
-> operating system to use either the NOOP scheduler or the deadline scheduler, both of which are more
-> effective for SSDs.
->
->
-
-### CPU requirements
-Azure VMs are available in a variety of CPU configurations, supporting between 1 and 32 cores. For a
-data node, a good starting point is a standard DS-series VM; select either the DS3 (4
-cores) or DS4 (8 cores) SKU. The DS3 also provides 14GB of RAM, while the DS4 includes 28GB.
-
-The GS-series (for premium storage) and G-series (for standard storage) use Xeon E5 V3 processors, which may be useful for workloads that are heavily compute-intensive, such as large-scale aggregations. For the latest information, visit [Sizes for virtual machines][Sizes for virtual machines].
-
-### Network requirements
-Elasticsearch requires a network bandwidth of between 1 and 10Gbps, depending on the size and volatility
-of the clusters that it implements. Elasticsearch migrates shards between nodes as more nodes are added
-to a cluster. Elasticsearch assumes that the communication time between all nodes is roughly equivalent
-and does not consider the relative locations of shards held on those nodes. Additionally, replication can
-incur significant network I/O between shards. For these reasons, **avoid creating clusters on nodes that
-are in different regions**.
-
-### Software requirements
-You can run Elasticsearch on Windows or on Linux. The Elasticsearch service is deployed as a Java jar
-library and has dependencies on other Java libraries that are included in the Elasticsearch package. You
-must install the Java 7 (update 55 or later) or Java 8 (update 20 or later) JVM to run Elasticsearch.
-
-> [!NOTE]
-> Other than the *Xmx* and *Xms* memory parameters (specified as command-line options to
-> the Elasticsearch engine; see [Memory requirements][Memory requirements]), do not modify the default JVM configuration
-> settings. Elasticsearch has been designed using the defaults; changing them can cause Elasticsearch to
-> become detuned and perform poorly.
->
->
-
-### Deploying Elasticsearch on Azure
-Although it is not difficult to deploy a single instance of Elasticsearch, creating a number of nodes and
-installing and configuring Elasticsearch on each one can be a time consuming and error-prone process. If
-you are considering running Elasticsearch on Azure VMs, you have three options that can help to reduce the
-chances of errors.
-
-* Using the Azure Resource Manager [template](https://azure.microsoft.com/marketplace/partners/elastic/elasticsearchelasticsearch/) in Azure marketplace. This template is created by Elastic. It allows you to add commercial enhancements such as Shield, Marvel, Watcher, and so on.
-* Using the Azure quickstart [template](https://github.com/Azure/azure-quickstart-templates/tree/master/elasticsearch) to build the cluster. This template can create a cluster based on Windows Server 2012 or Ubuntu Linux 14.04. It allows you to use experimental features such as Azure File Storage. This template is used for the research and testing tasks in this document.
-* Using scripts that can be automated or run unattended. Scripts that can create and deploy an Elasticsearch cluster are available on the [GitHub repository][elasticsearch-scripts].
-
-## Cluster and node sizing and scalability
-Elasticsearch enables a number of deployment topologies, designed to support differing requirements and levels of scale. This section discusses some common topologies, and describes the considerations for implementing clusters based on these topologies.
-
-### Elasticsearch topologies
-The figure below illustrates a starting point for designing an Elasticsearch topology for Azure:
-
-![Suggested starting point for building an Elasticsearch cluster with Azure](./images/general-startingpoint.png)
-
-*Suggested starting point for building an Elasticsearch cluster with Azure*
-
-This topology contains six data nodes together with three client nodes and three master nodes (only one master node is elected; the other two are available for election should the elected master fail). Each node is implemented as a separate VM. Azure web applications are directed to client nodes via a load balancer.
-
-In this example, all nodes and the web applications reside in the same virtual network which effectively isolates them from the outside world. If the cluster needs to be available externally (possibly as part of a hybrid solution incorporating on-premises clients), then you can use the Azure Load Balancer to provide a public IP address, but you will need to take additional security precautions to prevent unauthorized access to the cluster.
-
-The optional "Jump Box" is a VM that is only available to administrators. This VM has a network connection to the virtual network, but also an outward facing network connection to permit administrator logon from an external network (this logon should be protected by using a strong password or certificate). An administrator can log on to the Jump Box, and then connect from there directly to any of the nodes in the cluster.
-
-Alternative approaches include using a site-to-site VPN between an organization and the virtual network, or using [ExpressRoute][ExpressRoute] circuits to connect to the virtual network. These mechanisms permit administrative access to the cluster without exposing the cluster to the public internet.
-
-To maintain VM availability, the data nodes are grouped into the same Azure availability set. Similarly, the client nodes are held in another availability set and the master nodes are stored in a third availability set.
-
-This topology is relatively easy to scale out: simply add more nodes of the appropriate type and ensure that they are configured with the same cluster name in the elasticsearch.yml file. Client nodes also need to be added to the backend pool for the Azure load balancer.
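
For example, the relevant settings in each node's elasticsearch.yml might look like the following sketch. The cluster name is illustrative; the `node.master` and `node.data` flags shown are those used by Elasticsearch 1.x/2.x to create dedicated client, master, and data nodes:

```yaml
# Every node that should join this cluster must use the same cluster name
cluster.name: my-es-cluster

# A dedicated data node: holds data but is not eligible to become master
node.master: false
node.data: true

# A dedicated client node would instead use: node.master: false, node.data: false
# A dedicated master node would use:         node.master: true,  node.data: false
```

Nodes with the same `cluster.name` that can discover each other join the same cluster automatically.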
-
-**Geo-locating clusters**
-
-**Don’t spread nodes in a cluster across regions as this can impact the performance of inter-node communication** (see [Network requirements][Network requirements]). Geo-locating data close to users in different regions requires creating multiple clusters. In this situation, you need to consider how (or even whether) to synchronize clusters. Possible solutions include:
-
-[Tribe nodes][Tribe nodes] are similar to client nodes, except that a tribe node can participate in multiple Elasticsearch
-clusters and view them all as one big cluster. Data is still managed locally by each cluster (updates are
-not propagated across cluster boundaries), but all data is visible. A tribe node can query, create, and
-manage documents in any cluster.
-
-The primary restrictions are that a tribe node cannot be used to create a new index, and index names must
-be unique across all clusters. Therefore it is important that you consider how indexes will be named when
-you design clusters intended to be accessed from tribe nodes.
-
-Using this mechanism, each cluster can contain the data that is most likely to be accessed by local
-client applications, but these clients can still access and modify remote data although with possible
-extended latency. The figure below shows an example of this topology. The tribe node in Cluster 1 is
-highlighted; the other clusters can also have tribe nodes although these are not shown on the diagram:
-
-![A client application accessing multiple clusters through a tribe node](./images/general-tribenode.png)
-
-*A client application accessing multiple clusters through a tribe node*
-
-In this example, the client application connects to the tribe node in Cluster 1 (co-located in the same region), but this node is configured to be able to access Cluster 2 and Cluster 3, which might be located in different regions. The client application can send requests that retrieve or modify data in any of the clusters.
-
-> [!NOTE]
-> Tribe nodes require multicast discovery to connect to clusters, which may present a
-> security concern. See the section [Node discovery][Node discovery] for more details.
->
->
-
-* Implementing geo-replication between clusters. In this approach, changes made at each cluster are
- propagated in near real-time to clusters located in other data centers. Third-party plugins are available
- for Elasticsearch that support this functionality, such as the [PubNub Changes Plugin][PubNub Changes Plugin].
-* Using the [Elasticsearch Snapshot and Restore module][Elasticsearch Snapshot and Restore module]. If the data is very slow-moving and is
- modified only by a single cluster, you can consider using snapshots to take a periodic copy of the data
- and then restore these snapshots in other clusters (snapshots can be stored in Azure Blob Storage if you
- have installed the [Azure Cloud Plugin][Azure Cloud Plugin]). However, this solution does not work well for rapidly
- changing data or if data can be changed in more than one cluster.
-
-**Small-scale topologies**
-
-Large-scale topologies comprising clusters of dedicated master, client, and data nodes might not be
-appropriate for every scenario. If you are building a small-scale production or development system,
-consider the 3-node cluster shown in the figure below.
-
-Client applications connect directly to any available data node in the cluster. The cluster contains
-three shards labeled P1-P3 (to allow for growth) plus replicas labeled R1-R3. Using three nodes allows
-Elasticsearch to distribute the shards and replicas so that if any single node fails no data will be lost.
-
-![A 3-node cluster with 3 shards and replicas](./images/general-threenodecluster.png)
-
-*A 3-node cluster with 3 shards and replicas*
-
-If you are running a development installation on a standalone machine you can configure a cluster with a
-single node that acts as master, client, and data storage. Alternatively, you can start multiple nodes
-running as a cluster on the same computer by starting more than one instance of Elasticsearch. The figure
-below shows an example.
-
-![Development configuration](./images/general-developmentconfiguration.png)
-
-*A development configuration running multiple Elasticsearch nodes on the same machine*
-
-Note that neither of these standalone configurations is recommended for a production environment as
-they can cause contention unless your development machine has a significant amount of memory and several
-fast disks. Additionally, they do not provide any high availability guarantees. If the machine fails, all
-nodes are lost.
-
-### Scaling a cluster and data nodes
-Elasticsearch can scale in two dimensions: vertically (using bigger, more powerful machines) and
-horizontally (spreading the load across machines).
-
-**Scaling Elasticsearch data nodes vertically**
-
-If you are hosting an Elasticsearch cluster by using Azure VMs, each node can correspond to a VM. The
-limit of vertical scalability for a node is largely governed by the SKU of the VM and the overall
-restrictions applied to individual storage accounts and Azure subscriptions.
-
-The page [Azure subscription and service limits, quotas, and constraints](/azure/azure-subscription-service-limits/)
-describes these limits in detail, but as far as building an Elasticsearch cluster is concerned, the items
-in the following list are the most pertinent.
-
-* Each storage account is restricted to 20,000 IOPS. Each VM in the cluster should leverage a
- dedicated (preferably premium) storage account.
-* The number of data nodes in a virtual network. If you are not using the Azure Resource Manager, there is a
- limit of 2048 VM instances per virtual network. While this should prove sufficient for many cases, if you have a
- very large configuration with thousands of nodes this could be a limitation.
-* Number of storage accounts per subscription per region. You can create up to 100 storage accounts per
- Azure subscription in each region. Storage accounts are used to hold virtual disks, and each storage
- account has a limit of 500TB of space.
-* Number of cores per subscription. The default limit is 20 cores per subscription, but this can be increased up to 10,000 cores by requesting a limit increase through a support ticket.
-* The amount of memory per VM size. Smaller size VMs have limited amounts of memory available (D1
- machines have 3.5GB, and D2 machines have 7GB). These machines might not be suitable for scenarios that
- require Elasticsearch to cache significant amounts of data to achieve good performance (aggregating data,
- or analyzing a large number of documents during data ingestion, for example).
-* The maximum number of disks per VM size. This restriction can limit the size and performance of a
- cluster. Fewer disks means that less data can be held, and performance can be reduced by having fewer
- disks available for striping.
-* The number of update domains / fault domains per availability set. If you create VMs using the Azure Resource Manager,
- each availability set can be allocated up to 3 fault domains and 20 update domains. This limitation can
- impact the resilience of a large cluster that is subjected to frequent rolling updates.
-
-Additionally, avoid using VMs with more than 64GB of memory. As described in
-the section [Memory requirements][Memory requirements], you should not allocate more than 30GB of RAM on each VM to the JVM;
-allow the operating system to utilize the remaining memory for I/O buffering.
-
-With these restrictions in mind, you should always spread the virtual disks for the VMs in a cluster
-across storage accounts to reduce the chances of I/O throttling. In a very large cluster, you may need to
-redesign your logical infrastructure and split it into separate functional partitions. For example, you
-might need to split the cluster across subscriptions, although this process can lead to further
-complications because of the need to connect virtual networks.
-
-**Scaling an Elasticsearch cluster horizontally**
-
-Internally within Elasticsearch, the limit of horizontal scalability is determined by the number of
-shards defined for each index. Initially, many shards can be allocated to the same node in a cluster, but
-as the volume of data grows additional nodes can be added and shards can be distributed across these
-nodes. In theory, only when the number of nodes reaches the number of shards will the system cease to
-scale horizontally.
-
-As with vertical scaling, there are some issues that you should consider when contemplating implementing
-horizontal scaling, including:
-
-* The maximum number of VMs that you can connect in an Azure virtual network. This can limit the horizontal
- scalability for a very large cluster. You can create a cluster of nodes that spans more than one virtual network to
- circumvent this limit, but this approach can lead to reduced performance due to the lack of locality of
- each node with its peers.
-* The number of disks per VM Size. Different series and SKUs support different numbers of attached disks.
- Additionally, you can also consider using the ephemeral storage included with the VM to provide a limited
- amount of faster data storage, although there are resiliency and recovery implications that you should
- consider (see [Configuring resilience and recovery on Elasticsearch on Azure][elasticsearch-resilience-recovery] for
- more information). The D-series, DS-series, Dv2-series, and GS-series of VMs use SSDs for ephemeral
- storage.
-
-You could consider using [Virtual Machine Scale Sets][vmss] to start and stop VMs as demand dictates. However, this approach might not be
-appropriate for an Elasticsearch cluster for the following reasons:
-
-* This approach is best suited for stateless VMs. Each time you add or remove a node from an
- Elasticsearch cluster, shards are reallocated to balance the load, and this process can generate
- considerable volumes of network traffic and disk I/O and can severely impact data ingestion rates. You
- must assess whether this overhead is worth the benefit of the additional processing and memory resources
- that become available by dynamically starting more VMs.
-* VM startup does not happen instantaneously, and it may take several minutes before additional VMs
- become available or they are shut down. Scaling in this way should only be used to handle sustained
- changes in demand.
-* After scaling out, do you actually need to consider scaling back? Removing a VM from an Elasticsearch
- cluster can be a resource intensive process requiring that Elasticsearch recovers the shards and replicas
- that are located on that VM and recreates them on one or more of the remaining nodes. Removing several
- VMs at the same time could compromise the integrity of the cluster, making recovery difficult. Furthermore,
- many Elasticsearch implementations grow over time, but the nature of the data is such that it tends not
- to shrink in volume. It is possible to delete documents manually, and documents can also be configured
- with a TTL (time to live) after which they expire and get removed, but in most cases it is likely that
- the space previously allocated will be quickly reused by new or modified documents. Fragmentation within
- an index might occur when documents are removed or changed, in which case you can use the Elasticsearch
- HTTP [Optimize][Optimize] API (Elasticsearch 2.0.0 and earlier) or the [Force Merge][Force Merge] API (Elasticsearch 2.1.0
- and later) to perform defragmentation.
-
-### Determining the number of shards for an index
-The number of nodes in a cluster can vary over time, but the number of shards in an index is fixed once
-the index has been created. To add or remove shards requires reindexing the data – a process of creating
-a new index with the required number of shards and then copying the data from the old index to the new
-(you can use aliases to insulate users from the fact that data has been reindexed – see [Tuning data aggregation and query performance for Elasticsearch on Azure][Tuning data aggregation and query performance for Elasticsearch on Azure] for more details).
-Therefore, it is important to determine the number of shards that you are likely to require in advance of creating the first index in your cluster. You can perform the following steps to establish this number:
-
-* Create a single-node cluster using the same hardware configuration that you intend to deploy in
- production.
-* Create an index that matches the structure that you plan to use in production. Give this index a single
- shard and no replicas.
-* Add a specific quantity of realistic production data to the index.
-* Perform typical queries, aggregations, and other workloads against the index and measure the throughput
- and response time.
-* If the throughput and response time are within acceptable limits, then repeat the process from step 3
- (add more data).
-* When you appear to have reached the capacity of the shard (response times and throughput start becoming
- unacceptable), make a note of the volume of documents.
-* Extrapolate from the capacity of a single shard to the anticipated number of documents in production to
- calculate the required number of shards (you should include some margin of error in these calculations as
- extrapolation is not a precise science).
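
The extrapolation in the final step is simple arithmetic. As a minimal sketch (the document counts and the 20% margin below are hypothetical, not recommendations):

```python
import math

def estimate_shards(docs_per_shard: int, expected_docs: int, margin: float = 0.2) -> int:
    """Estimate the number of shards needed, adding a margin of error
    because extrapolating from a single-shard test is imprecise."""
    return math.ceil(expected_docs * (1 + margin) / docs_per_shard)

# A single test shard became saturated at 10 million documents, and
# production is expected to hold 180 million documents.
print(estimate_shards(10_000_000, 180_000_000))  # 22 shards with a 20% margin
```

Remember that this is only an estimate; revisit it as the measured growth rate of your data becomes clearer.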
-
-> [!NOTE]
-> Remember that each shard is implemented as a Lucene index that consumes memory, CPU power,
-> and file handles. The more shards you have, the more of these resources you will require.
->
->
-
-Additionally, creating more shards may increase scalability (depending on your workloads and scenario)
-and can increase data ingestion throughput, but it might reduce the efficiency of many queries. By
-default, a query will interrogate every shard used by an index (you can use [custom routing][custom routing] to modify
-this behavior if you know which shards the data you require is located on).
-
-Following this process can only generate an estimate for the number of shards, and the volume of
-documents expected in production might not be known. In this case, you should determine the initial
-volume (as above) and the predicted growth rate. Create an appropriate number of shards that can handle
-the growth of data for the period until you are willing to reindex the database.
-
-Other strategies used for scenarios such as event management and logging include using rolling indexes.
-Create a new index for the data ingested each day and access this index through an alias that is switched
-daily to point to the most recent index. This approach enables you to more easily age-out old data (you
-can delete indexes containing information that is no longer required) and keeps the volume of data
-manageable.
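
As an illustrative sketch of the daily switch (the index and alias names here are hypothetical), a single `_aliases` request can atomically repoint the alias to the newest index:

`POST _aliases`

```json
{
  "actions": [
    { "remove": { "index": "logs-2016-03-14", "alias": "logs-current" } },
    { "add": { "index": "logs-2016-03-15", "alias": "logs-current" } }
  ]
}
```

Because both actions are applied atomically, clients querying `logs-current` never observe a moment when no index is behind the alias.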
-
-Keep in mind that the number of nodes does not have to match the number of shards. For example, if you
-create 50 shards, you can spread them across 10 nodes initially, and then add more nodes to scale the
-system out as the volume of work increases. Avoid creating an exceptionally large number of shards on a
-small number of nodes (1000 shards spread across 2 nodes, for example). Although the system could
-theoretically scale to 1000 nodes with this configuration, running 500 shards on a single node risks
-crippling the performance of the node.
-
-> [!NOTE]
-> For systems that are data-ingestion heavy, consider using a prime number of shards. The
-> default algorithm that Elasticsearch uses for routing documents to shards produces a more even spread
-> in this case.
->
->
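
The routing mechanism itself can be sketched as follows. This simulation uses MD5 in place of Elasticsearch's internal hash function, so it only illustrates the modulo-based routing scheme, not Elasticsearch's exact distribution:

```python
import hashlib
from collections import Counter

def route(doc_id: str, number_of_shards: int) -> int:
    # Elasticsearch computes shard = hash(routing value) % number_of_shards;
    # MD5 stands in for the real hash function in this sketch.
    digest = hashlib.md5(doc_id.encode()).hexdigest()
    return int(digest, 16) % number_of_shards

# Spread 10,000 synthetic document IDs across 7 shards (a prime number)
counts = Counter(route(f"doc-{i}", 7) for i in range(10_000))
print(sorted(counts.items()))
```

With a well-behaved hash and realistic document IDs, the per-shard counts should come out roughly equal.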
-
-### Security
-By default, Elasticsearch implements minimal security and does not provide any means of authentication
-and authorization. These aspects require configuring the underlying operating system and network, and
-using plugins and third-party utilities. Examples include [Shield][Shield], and [Search Guard][Search Guard].
-
-> [!NOTE]
-> Shield is a plugin provided by Elastic for user authentication, data encryption,
-> role-based access control, IP filtering, and auditing. It may be necessary to configure the underlying
-> operating system to implement further security measures, such as disk encryption.
->
->
-
-In a production system, you should consider how to:
-
-* Prevent unauthorized access to the cluster.
-* Identify and authenticate users.
-* Authorize the operations that authenticated users can perform.
-* Protect the cluster from rogue or damaging operations.
-* Protect the data from unauthorized access.
-* Meet regulatory requirements for commercial data security (if appropriate).
-
-### Securing access to the cluster
-Elasticsearch is a network service. The nodes in an Elasticsearch cluster listen for incoming client requests using HTTP, and communicate with each other using a TCP channel. You should take steps to prevent unauthorized clients or services from being able to send requests over both the HTTP and TCP paths. Consider the following items.
-
-* Define network security groups to limit the inbound and outbound network traffic for a virtual network or VM to
- specific ports only.
-* Change the default ports used for HTTP client access (9200) and for TCP transport communications (9300). Use
-  a firewall to protect each node from malicious Internet traffic.
-* Depending on the location and connectivity of clients, place the cluster on a private subnet with no
- direct access to the Internet. If the cluster must be exposed outside the subnet, route all requests
- through a bastion server or proxy sufficiently hardened to protect the cluster.
-
-If you must provide direct access to nodes, use an [nginx](http://nginx.org/en/) proxy server and configure
-HTTPS authentication.
-
-> [!NOTE]
-> Using a proxy server such as nginx, you can also restrict access to functionality. For
-> example, you can configure nginx to only allow requests to the \_search endpoint if you need to prevent
-> clients from performing other operations.
->
->
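
A minimal sketch of such an nginx configuration (the server name, listening port, and upstream address are assumptions) might look like this:

```nginx
# Only proxy _search requests through to the cluster; reject everything else
server {
    listen 443 ssl;
    server_name search.example.com;

    # Matches /_search and /{index}/_search
    location ~ ^/([^/]+/)?_search$ {
        proxy_pass http://10.0.0.4:9200;
    }

    location / {
        return 403;
    }
}
```

You would still need to add SSL certificate settings and authentication directives appropriate to your environment.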
-
-If you require more comprehensive network access security, use the Shield or Search Guard plugins.
-
-### Identifying and authenticating users
-All requests made by clients to the cluster should be authenticated. Additionally, you should prevent
-unauthorized nodes from joining the cluster as these can provide a backdoor into the system that bypasses
-authentication.
-
-Elasticsearch plugins are available that can perform different types of authentication, including:
-
-* **HTTP basic authentication**. Usernames and passwords are included in each request. All requests must be
- encrypted by using SSL/TLS or an equivalent level of protection.
-* **LDAP and Active Directory integration**. This approach requires that clients are assigned roles
- in LDAP or AD groups.
-* **Native authentication**. Uses identities defined within the Elasticsearch cluster itself.
-* **TLS authentication**. Use TLS authentication within a cluster to authenticate all nodes.
-* **IP filtering**. Use IP filtering to prevent clients from unauthorized subnets from connecting, and also to prevent nodes in these subnets from joining the cluster.
-
-### Authorizing client requests
-Authorization depends on the Elasticsearch plugin used to provide this service. For example, a plugin that provides basic authentication typically provides features that define the level of authentication, whereas a plugin that uses LDAP or AD will typically associate clients with roles, and then assign access rights to those roles. When using any plugin, you should consider the following points:
-
-* Do you need to restrict the operations that a client can perform? For example, should a client be able
- to monitor the status of the cluster, or create and delete indexes?
-* Should the client be restricted to specific indexes? This is useful in a multitenant situation where
- tenants may be assigned their own specific set of indexes, and these indexes should be inaccessible to
- other tenants.
-* Should the client be able to read and write data to an index? A client may be able to perform searches
- that retrieve data using an index but must be prevented from adding or deleting data from that index, for
- example.
-
-Currently, most security plugins scope operations to the cluster or index level, and not to subsets of
-documents within indexes. This is for efficiency reasons. It is therefore not easy to limit requests to
-specific documents within a single index. If you require this level of granularity, save documents in
-separate indexes and use aliases that group indexes together.
-
-For example, in a personnel system, if user A requires access to all documents that contain information
-about employees in department X, user B requires access to all documents that contain information about
-employees in department Y, and user C requires access to all documents that contain information about
-employees in both departments, create two indexes (for department X and department Y), and an alias that
-references both indexes. Grant user A read access to the first index, grant user B read access to the
-second index, and grant user C read access to both indexes through the alias. For more information, see
-[Faking Index per User with Aliases][Faking Index per User with Aliases].
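
Sketching the personnel example with the `_aliases` API (the index and alias names are illustrative):

`POST _aliases`

```json
{
  "actions": [
    { "add": { "index": "department-x", "alias": "all-departments" } },
    { "add": { "index": "department-y", "alias": "all-departments" } }
  ]
}
```

User A is then granted read access to `department-x`, user B to `department-y`, and user C to the `all-departments` alias, which searches both indexes.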
-
-### Protecting the cluster
-The cluster can become vulnerable to misuse if it is not protected carefully.
-
-**Disable dynamic query scripting in Elasticsearch** queries as it can lead to security vulnerabilities. Use native scripts in preference to query scripting; a native script is an Elasticsearch plugin written in Java and compiled into a JAR file.
-
-Dynamic query scripting is now disabled by default; do not reenable it unless you have a very good reason to do so.
-
-**Avoid exposing query-string searches to users** as this type of searching allows users to perform
-resource-intensive queries unhindered. These searches could severely impact the performance of the
-cluster and can render the system open to DoS attacks. Additionally, query-string searching can expose
-potentially private information.
-
-**Prevent operations from consuming a lot of memory** as these can cause out-of-memory exceptions
-resulting in Elasticsearch failing on a node. Long-running, resource-intensive operations can also be used
-to implement DoS attacks. Examples include:
-
-Avoid search requests that attempt to load very large fields into memory (if a query sorts, scripts, or facets on these fields), such as:
-
-* Searches that query multiple indexes at the same time.
-* Searches that retrieve a large number of fields. These searches can exhaust memory by causing a vast amount of field data to be cached. By default, the field data cache is unlimited in size, but you can set the [indices.fielddata.cache.*](https://www.elastic.co/guide/en/elasticsearch/reference/current/modules-fielddata.html) properties in the elasticsearch.yml configuration file to limit the resources available. You can also configure the [field data circuit breaker][field data circuit breaker] to help prevent the cached data from a single field from exhausting memory, and the [request circuit breaker][request circuit breaker] to stop individual queries from monopolizing memory. The cost of setting these parameters is the increased likelihood of some queries failing or timing out.
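
For example, the following elasticsearch.yml fragment caps the field data cache and the associated circuit breakers (the specific percentages are illustrative, not recommendations):

```yaml
# Limit the field data cache to a fraction of the JVM heap
indices.fielddata.cache.size: 30%

# Trip the field data circuit breaker before a request can blow past the cache limit
indices.breaker.fielddata.limit: 40%

# Limit the memory that any single request can consume
indices.breaker.request.limit: 30%
```

Note that the field data breaker limit should be set higher than the cache size, so that the breaker trips before the cache fills.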
-
-> [!NOTE]
-> Using Doc Values can reduce the memory requirements of indexes by saving field data to
-> disk rather than loading it into memory. This can help to reduce the chances of memory exhaustion on a
-> node but with a reduction in speed.
->
-> Elasticsearch always assumes that it has enough memory to perform its current workload. If
-> this is not the case, then the Elasticsearch service can crash. Elasticsearch provides endpoints that
-> return information about resource usage (the HTTP [cat APIs][cat APIs]), and you should monitor this
-> information carefully.
->
->
-
-**Waiting for too long to flush an in-progress memory segment**. This can exhaust in-memory buffer space.
-If necessary, [configure the translog][configure the translog] to reduce the thresholds at which data is flushed to disk.
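
As a hedged sketch (assuming the Elasticsearch 1.x/2.x translog setting names), lowering the flush thresholds in elasticsearch.yml might look like this; the values shown are illustrative:

```yaml
# Flush to disk once the translog reaches this size
index.translog.flush_threshold_size: 256mb

# Flush after this many operations, whichever threshold is reached first
index.translog.flush_threshold_ops: 20000
```

Lower thresholds reduce memory pressure at the cost of more frequent disk I/O.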
-
-**Creating indexes with large amounts of metadata**. An index that contains documents with a large
-variation in field names can consume a lot of memory. For more information, see [Mapping Explosion][Mapping Explosion].
-
-The definition of a long-running or query intensive operation is highly scenario-specific. The workload typically expected by one cluster might have a completely different profile from the workload on another. Determining which operations are unacceptable requires significant research and testing of your applications.
-
-Be proactive: detect and stop malicious activities before they cause significant damage or data loss.
-Consider using a security monitoring and notification system that can quickly detect unusual patterns of
-data access and raise alerts when, for example, user login requests fail, unexpected nodes join or leave
-the cluster, or operations are taking longer than expected. Tools that can perform these tasks include
-Elasticsearch [Watcher][Watcher].
-
-### Protecting the data
-You can protect data in flight by using SSL/TLS, but Elasticsearch does not provide any built-in form of
-data encryption for information that is stored on disk. Remember that this information is held in
-ordinary disk files, and any user with access to these files may be able to compromise the data that they
-hold, for example by copying them to their own cluster. Consider the following points:
-
-* Protect the files used by Elasticsearch to hold the data. Do not allow arbitrary read or write access
- to identities other than the Elasticsearch service.
-* Encrypt the data held in these files by using an encrypting file system.
-
-> [!NOTE]
-> Azure now supports disk encryption for Linux and Windows VMs. For more information, see
-> [Azure Disk Encryption for Windows and Linux IaaS VMs Preview](/azure/security/azure-security-disk-encryption/).
->
->
-
-### Meeting regulatory requirements
-Regulatory requirements are primarily concerned with auditing operations to maintain a history of events,
-and ensuring the privacy of these operations to help prevent them being monitored (and replayed) by an
-external agency. In particular, you should consider how to:
-
-* Track all requests (successful or not), and all attempts to access the system.
-* Encrypt communications made by clients to the cluster as well as node-to-node communications performed
- by the cluster. You should implement SSL/TLS for all cluster communications. Elasticsearch also supports
- pluggable ciphers if your organization has requirements distinct from those available through SSL/TLS.
-* Store all audit data securely. The volume of audit information can grow very rapidly and must be
- protected robustly to prevent tampering of audit information.
-* Safely archive audit data.
-
-### Monitoring
-Monitoring is important both at the operating system level and at the Elasticsearch level.
-
-You can perform monitoring at the operating system level using operating-system specific tools. Under
-Windows, this includes items such as Performance Monitor with the appropriate performance counters, while
-under Linux you can use tools such as *vmstat*, *iostat*, and *top*. The key items to monitor at the
-operating system level include CPU utilization, disk I/O volumes, disk I/O wait times, and network traffic.
-In a well-tuned Elasticsearch cluster, CPU utilization by the Elasticsearch process should be high, and
-disk I/O wait times should be minimal.
-
-At the software level, you should monitor the throughput and response times of requests, together with
-the details of requests that fail. Elasticsearch provides a number of APIs that you can use to examine
-the performance of different aspects of a cluster. The two most important APIs are *_cluster/health* and
-*_nodes/stats*. The *_cluster/health* API can be used to provide a snapshot of the overall health of the
-cluster, as well as providing detailed information for each index, as shown in the following example:
-
-`GET _cluster/health?level=indices`
-
-The example output shown below was generated using this API:
-
-```json
-{
- "cluster_name": "elasticsearch",
- "status": "green",
- "timed_out": false,
- "number_of_nodes": 6,
- "number_of_data_nodes": 3,
- "active_primary_shards": 10,
- "active_shards": 20,
- "relocating_shards": 0,
- "initializing_shards": 0,
- "unassigned_shards": 0,
- "delayed_unassigned_shards": 0,
- "number_of_pending_tasks": 0,
- "number_of_in_flight_fetch": 0,
- "indices": {
- "systwo": {
- "status": "green",
- "number_of_shards": 5,
- "number_of_replicas": 1,
- "active_primary_shards": 5,
- "active_shards": 10,
- "relocating_shards": 0,
- "initializing_shards": 0,
- "unassigned_shards": 0
- },
- "sysfour": {
- "status": "green",
- "number_of_shards": 5,
- "number_of_replicas": 1,
- "active_primary_shards": 5,
- "active_shards": 10,
- "relocating_shards": 0,
- "initializing_shards": 0,
- "unassigned_shards": 0
- }
- }
-}
-```
-
-This cluster contains two indexes named *systwo* and *sysfour*. Key statistics to monitor for each index are the status, active_shards, and unassigned_shards. The status should be green, active_shards should equal number_of_shards \* (number_of_replicas + 1), and unassigned_shards should be zero.
-
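The shard arithmetic just described can be expressed as a small check. This is a hypothetical helper (not part of any Elasticsearch client API); the sample values come from the *systwo* index in the example output:

```java
// Hypothetical helper illustrating the shard arithmetic from the text:
// an index is fully allocated when every primary and replica shard is active.
public class ShardHealthCheck {
    public static boolean isFullyAllocated(int numberOfShards,
                                           int numberOfReplicas,
                                           int activeShards,
                                           int unassignedShards) {
        // Expected total = primaries plus one copy per replica.
        int expectedActive = numberOfShards * (numberOfReplicas + 1);
        return activeShards == expectedActive && unassignedShards == 0;
    }

    public static void main(String[] args) {
        // systwo: 5 primaries, 1 replica, 10 active shards, 0 unassigned
        System.out.println(ShardHealthCheck.isFullyAllocated(5, 1, 10, 0)); // true
    }
}
```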
-If the status is red, then part of the index is missing or has become corrupt. You can verify this if the *active_shards* value is less than *number_of_shards* \* (*number_of_replicas* + 1) and *unassigned_shards* is non-zero. Note that a status of yellow indicates that an index is in a transitional state, either as the result of adding more replicas or shards being relocated. The status should switch to green when the transition has completed.
-
-If it stays yellow for an extended period or changes to red, you should look to see whether any significant I/O events (such as a disk or network failure) have occurred at the operating system level.
-
-The \_nodes/stats API emits extensive information about each node in the cluster:
-
-`GET _nodes/stats`
-
-The output generated includes details about how indexes are stored on each node (including the sizes and
-numbers of documents), time spent performing indexing, querying, searching, merging, caching,
-operating system and process information, statistics about the JVM (including garbage collection
-performance), and thread pools. For more information, see [Monitoring Individual Nodes][Monitoring Individual Nodes].
-
-If a significant proportion of Elasticsearch requests are failing with *EsRejectedExecutionException*
-error messages, then Elasticsearch is failing to keep up with the work being sent its way. In this
-situation, you need to identify the bottleneck that is causing Elasticsearch to fall behind. Consider the
-following items:
-
-* If the bottleneck is due to a resource constraint, such as insufficient memory allocated to the JVM
- causing an excessive number of garbage collections, then consider allocating additional resources (in
-  this case, configure the JVM to use more memory, up to 50% of the available memory on the node – see
- [Memory requirements][Memory requirements]).
-* If the cluster is showing large I/O wait times and the merge statistics gathered for an index by using
-  the \_nodes/stats API contain large values, then the index is write-heavy. Revisit the points raised in [Optimizing resources for indexing operations](./data-ingestion-performance.md#optimizing-resources-for-indexing-operations) to
- tune indexing performance.
-* Throttle client applications that are performing data ingestion operations and determine the effect
- that this has on performance. If this approach shows significant improvement, then consider either
- retaining the throttle, or scaling out by spreading the load for write-heavy indexes across more nodes.
- For more information, see [Tuning data ingestion performance for Elasticsearch on Azure][Tuning data ingestion performance for Elasticsearch on Azure].
-* If the searching statistics for an index indicate that queries are taking a long time then consider how
- the queries are optimized. Note that you can use the *query_time_in_millis* and *query_total* values reported by the search statistics to calculate a rough guide to query efficiency; the equation *query_time_in_millis* / *query_total* will give you an average time for each query.
-
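The average-query-time calculation in the last bullet can be sketched as follows. The field values here are illustrative, not taken from a live cluster:

```java
// Rough query-efficiency guide from the search statistics:
// average latency = query_time_in_millis / query_total.
public class QueryEfficiency {
    public static double averageQueryMillis(long queryTimeInMillis, long queryTotal) {
        if (queryTotal == 0) {
            return 0.0; // avoid division by zero before any queries have run
        }
        return (double) queryTimeInMillis / queryTotal;
    }

    public static void main(String[] args) {
        System.out.println(QueryEfficiency.averageQueryMillis(45000, 1500)); // 30.0
    }
}
```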
-### Tools for monitoring Elasticsearch
-A variety of tools are available for performing everyday monitoring of Elasticsearch in production. These
-tools typically use the underlying Elasticsearch APIs to gather information and present the details in a
-manner that is easier to observe than the raw data. Common examples include [Elasticsearch-Head][Elasticsearch-Head],
-[Bigdesk][Bigdesk], [Kopf][Kopf], and [Marvel][Marvel].
-
-Elasticsearch-Head, Bigdesk, and Kopf run as plugins for the Elasticsearch software. More recent versions
-of Marvel can run independently, but require [Kibana][Kibana] to provide a data capture and hosting
-environment. The advantage of using Marvel with Kibana is that you can implement monitoring in a separate
-environment from the Elasticsearch cluster, enabling you to explore problems with Elasticsearch that
-might not be possible if the monitoring tools run as part of the Elasticsearch software. For example, if
-Elasticsearch repeatedly fails or is running very slowly, tools that run as Elasticsearch plugins will
-also be affected, making monitoring and diagnosis more difficult.
-
-At the operating system level, you can use tools such as the Log Analytics feature of [Azure Operations Management Suite][Azure Operations Management Suite] or [Azure Diagnostics with the Azure Portal][Azure Diagnostics with the Azure Portal] to capture performance data for VMs
-hosting Elasticsearch nodes. Another approach is to use [Logstash][Logstash] to capture performance and log data,
-store this information in a separate Elasticsearch cluster (don't use the same cluster that you are using
-for your application), and then use Kibana to visualize the data. For more information, see [Microsoft Azure Diagnostics with ELK][Microsoft Azure Diagnostics with ELK].
-
-### Tools for testing Elasticsearch performance
-Other tools are available if you are benchmarking Elasticsearch or subjecting a cluster to performance
-testing. These tools are intended to be used in a development or test environment rather than production.
-A frequently used example is [Apache JMeter][Apache JMeter].
-
-JMeter was used to perform benchmarking and other load tests described in documents related to this
-guidance. [Creating a performance testing environment for Elasticsearch on Azure][Creating a performance testing environment for Elasticsearch on Azure] describes in detail how JMeter was configured and used.
-
-[Running Elasticsearch on Azure]: index.md
-[Tuning Data Ingestion Performance for Elasticsearch on Azure]: data-ingestion-performance.md
-[Creating a Performance Testing Environment for Elasticsearch on Azure]: performance-testing-environment.md
-[Implementing a JMeter Test Plan for Elasticsearch]: jmeter-test-plan.md
-[Deploying a JMeter JUnit Sampler for Testing Elasticsearch Performance]: jmeter-junit-sampler.md
-[Tuning Data Aggregation and Query Performance for Elasticsearch on Azure]: data-aggregation-and-query-performance.md
-[Configuring Resilience and Recovery on Elasticsearch on Azure]: resilience-and-recovery.md
-[Running the Automated Elasticsearch Resiliency Tests]: automated-resilience-tests.md
-
-[Apache JMeter]: http://jmeter.apache.org/
-[Apache Lucene]: https://lucene.apache.org/
-[Azure Disk Encryption for Windows and Linux IaaS VMs Preview]: /azure/azure-security-disk-encryption/
-[Azure Load Balancer]: /azure/load-balancer/load-balancer-overview/
-[ExpressRoute]: /azure/expressroute/expressroute-introduction/
-[internal load balancer]: /azure/load-balancer/load-balancer-internal-overview/
-[Sizes for Virtual Machines]: /azure/virtual-machines/virtual-machines-linux-sizes/
-
-[Memory Requirements]: #memory-requirements
-[Network Requirements]: #network-requirements
-[Node Discovery]: #node-discovery
-[Query Tuning]: #query-tuning
-
-[elasticsearch-scripts]: https://github.com/mspnp/elasticsearch/tree/master/scripts/ps
-[A Highly Available Cloud Storage Service with Strong Consistency]: http://blogs.msdn.com/b/windowsazurestorage/archive/2011/11/20/windows-azure-storage-a-highly-available-cloud-storage-service-with-strong-consistency.aspx
-[Azure Cloud Plugin]: https://www.elastic.co/blog/azure-cloud-plugin-for-elasticsearch
-[Azure Diagnostics with the Azure Portal]: https://azure.microsoft.com/blog/windows-azure-virtual-machine-monitoring-with-wad-extension/
-[Azure Operations Management Suite]: https://www.microsoft.com/server-cloud/operations-management-suite/overview.aspx
-[Azure Quickstart Templates]: https://azure.microsoft.com/documentation/templates/
-[Bigdesk]: http://bigdesk.org/
-[cat APIs]: https://www.elastic.co/guide/en/elasticsearch/reference/1.7/cat.html
-[configure the translog]: https://www.elastic.co/guide/en/elasticsearch/reference/current/index-modules-translog.html
-[custom routing]: https://www.elastic.co/guide/en/elasticsearch/reference/current/mapping-routing-field.html
-[Elasticsearch]: https://www.elastic.co/products/elasticsearch
-[Elasticsearch-Head]: https://mobz.github.io/elasticsearch-head/
-[Elasticsearch.Net & NEST]: http://nest.azurewebsites.net/
-[elasticsearch-resilience-recovery]: resilience-and-recovery.md
-[Elasticsearch Snapshot and Restore module]: https://www.elastic.co/guide/en/elasticsearch/reference/current/modules-snapshots.html
-[Faking Index per User with Aliases]: https://www.elastic.co/guide/en/elasticsearch/guide/current/faking-it.html
-[field data circuit breaker]: https://www.elastic.co/guide/en/elasticsearch/reference/current/circuit-breaker.html#fielddata-circuit-breaker
-[Force Merge]: https://www.elastic.co/guide/en/elasticsearch/reference/2.1/indices-forcemerge.html
-[gossiping]: https://en.wikipedia.org/wiki/Gossip_protocol
-[Kibana]: https://www.elastic.co/downloads/kibana
-[Kopf]: https://github.com/lmenezes/elasticsearch-kopf
-[Logstash]: https://www.elastic.co/products/logstash
-[Mapping Explosion]: https://www.elastic.co/blog/found-crash-elasticsearch#mapping-explosion
-[Marvel]: https://www.elastic.co/products/marvel
-[Microsoft Azure Diagnostics with ELK]: http://aka.ms/AzureDiagnosticsElk
-[Monitoring Individual Nodes]: https://www.elastic.co/guide/en/elasticsearch/guide/current/_monitoring_individual_nodes.html#_monitoring_individual_nodes
-[nginx]: http://nginx.org/en/
-[Node Client API]: https://www.elastic.co/guide/en/elasticsearch/client/java-api/current/client.html
-[Optimize]: https://www.elastic.co/guide/en/elasticsearch/reference/1.7/indices-optimize.html
-[PubNub Changes Plugin]: http://www.pubnub.com/blog/quick-start-realtime-geo-replication-for-elasticsearch/
-[request circuit breaker]: https://www.elastic.co/guide/en/elasticsearch/reference/current/circuit-breaker.html#request-circuit-breaker
-[Search Guard]: https://github.com/floragunncom/search-guard
-[Shield]: https://www.elastic.co/products/shield
-[Transport Client API]: https://www.elastic.co/guide/en/elasticsearch/client/java-api/current/transport-client.html
-[tribe nodes]: https://www.elastic.co/blog/tribe-node
-[vmss]: https://azure.microsoft.com/documentation/services/virtual-machine-scale-sets/
-[Watcher]: https://www.elastic.co/products/watcher
-[Zen]: https://www.elastic.co/guide/en/elasticsearch/reference/current/modules-discovery-zen.html
diff --git a/docs/elasticsearch/jmeter-junit-sampler.md b/docs/elasticsearch/jmeter-junit-sampler.md
deleted file mode 100644
index 097849f3724..00000000000
--- a/docs/elasticsearch/jmeter-junit-sampler.md
+++ /dev/null
@@ -1,220 +0,0 @@
----
-title: Deploy a JMeter JUnit sampler to test Elasticsearch performance
-description: >-
- How to use a JUnit sampler to generate and upload data to an Elasticsearch
- cluster.
-author: dragon119
-ms.date: 09/22/2016
-ms.custom: elasticsearch
-
-pnp.series.title: Elasticsearch on Azure
-pnp.series.prev: implement-jmeter-test-plan
-pnp.series.next: automated-resilience-tests
----
-# Deploy a JMeter JUnit sampler for testing performance
-[!INCLUDE [header](../_includes/header.md)]
-
-This document describes how to create and use a JUnit sampler that can generate and upload data to an Elasticsearch cluster as part of a JMeter test plan. This provides a highly flexible approach to load testing that can generate large quantities of test data without depending on external data files.
-
-> [!NOTE]
-> The load tests used to assess the performance of data ingestion described in [Tuning data ingestion performance for Elasticsearch](./data-ingestion-performance.md) were constructed using this approach. The details of the JUnit code are described in that document.
->
->
-
-For testing data ingestion performance, the JUnit code was developed using Eclipse (Mars), and dependencies were resolved using Maven. The following procedures describe the step-by-step process for installing Eclipse, configuring Maven, creating a JUnit test, and deploying this test as a JUnit Request sampler in a JMeter test.
-
-> [!NOTE]
-> For detailed information on the structure and configuration of the test environment, see [Creating a performance testing environment for Elasticsearch on Azure][Creating a performance testing environment for Elasticsearch on Azure].
->
->
-
-## Installing prerequisites
-You will need the [Java Runtime Environment](http://www.java.com/en/download/ie_manual.jsp) on your development machine.
-You will also need to install the [Eclipse IDE for Java Developers](https://www.eclipse.org/downloads/index.php?show_instructions=TRUE).
-
-> [!NOTE]
-> If you are using the JMeter master VM described in [Creating a performance testing environment for Elasticsearch on Azure][Creating a performance testing environment for Elasticsearch on Azure] as your development environment, download the Windows 32 Bit version of the Eclipse Installer.
->
->
-
-## Creating a JUnit test project for load testing Elasticsearch
-Start the Eclipse IDE if it is not already running, and then close the **Welcome** page. On the **File** menu,
-click **New**, and then click **Java Project**.
-
-![Eclipse menu](./images/jmeter-deploy7.png)
-
-In the **New Java Project** window, enter a project name, select **Use default JRE**, and then click **Finish**.
-
-![New Java Project dialog](./images/jmeter-deploy8.png)
-
-In the **Package Explorer** window, expand the node named after your project. Verify that it contains a
-folder named **src** and a reference to the JRE you specified.
-
-![Eclipse Package Explorer](./images/jmeter-deploy9.png)
-
-Right-click the **src** folder, click **New**, and then click **JUnit Test Case**.
-
-![New JUnit Test Case menu item](./images/jmeter-deploy10.png)
-
-In the **New JUnit Test Case** window, select **New JUnit 4 test**, enter a name for the package (this can
-be the same as the name of the project, although by convention it should start with a lower case letter),
-a name for the test class, and select the options that generate the method stubs required for your test.
-Leave the **Class under test** box empty, and then click **Finish**.
-
-![New JUnit Test Case dialog](./images/jmeter-deploy11.png)
-
-If the following **New JUnit Test Case** dialog box appears, select the option to add the JUnit 4 library
-to the build path and then click **OK**.
-
-![New JUnit Test Case dialog](./images/jmeter-deploy12.png)
-
-Verify that the skeleton code for the JUnit test is generated and displayed in the Java editor window.
-
-![Java editor window](./images/jmeter-deploy13.png)
-
-In the **Package Explorer**, right-click the node for your project, click **Configure**, and then click
-**Convert to Maven Project**.
-
-> [!NOTE]
-> Using Maven enables you to more easily manage the external dependencies (such as the
-> Elasticsearch Java client libraries) that a project requires.
->
->
-
-![Convert to Maven Project menu item](./images/jmeter-deploy14.png)
-
-In the **Create new POM** dialog box, in the **Packaging** drop-down list, select **jar**, and then click
-**Finish**.
-
-![Create new POM dialog](./images/jmeter-deploy15.png)
-
-The pane that appears below the project object model (POM) editor might display the warning "Build path specifies execution environment J2SE-1.5. There are no JREs installed in the workspace that are strictly compatible with this environment", depending on which version of Java is installed on your development machine. If you have a version of Java that is later than version 1.5 you can safely ignore this warning.
-
-![Eclipse warnings](./images/jmeter-deploy16.png)
-
-In the POM editor, expand **Properties** and then click **Create**.
-
-![POM editor](./images/jmeter-deploy17.png)
-
-In the **Add Property** dialog box, in the **Name** box type *es.version*, in the **Value** box type *1.7.2*,
-and then click **OK**. This is the version of the Elasticsearch Java client library to use. This version
-may be superseded in the future; defining it as a POM property and referencing that property elsewhere within the project enables the version to be changed quickly.
-
-![Add Property dialog](./images/jmeter-deploy18.png)
-
-Click the **Dependencies** tab at the base of the POM editor, and then click **Add** next to the **Dependencies**
-list.
-
-![Dependencies tab](./images/jmeter-deploy19.png)
-
-In the **Select Dependency** dialog box, in the **Group Id** box type *org.elasticsearch*, in the **Artifact Id**
-box type *elasticsearch*, in the **Version** box type *\${es.version}*, and then click **OK**. Information
-about the Java Elasticsearch client library is held in the online Maven Central repository, and this
-configuration will automatically download the library and its dependencies when the project is built.
-
-![Select Dependency dialog](./images/jmeter-deploy20.png)
-
-On the **File** menu, click **Save All**. This action will save and build the project, downloading the
-dependencies specified by Maven. Verify that the Maven Dependencies folder appears in Package Explorer.
-Expand this folder to view the jar files downloaded to support the Elasticsearch Java client library.
-
-![Package Explorer](./images/jmeter-deploy21.png)
-
-## Importing an existing JUnit test project into Eclipse
-This procedure assumes that you have downloaded a Maven project that was previously created by using
-Eclipse.
-
-Start the Eclipse IDE. On the **File** menu, click **Import**.
-
-![Import menu item](./images/jmeter-deploy22.png)
-
-In the **Select** window, expand the **Maven** folder, click **Existing Maven Projects**, and then click **Next**.
-
-![Import menu](./images/jmeter-deploy23.png)
-
-In the **Maven Projects** window, specify the folder holding the project (the folder containing the
-pom.xml file), click **Select All**, and then click **Finish**.
-
-![Import Maven Projects dialog](./images/jmeter-deploy24.png)
-
-In the **Package Explorer** window, expand the node corresponding to your project. Verify that the project
-contains a folder named **src**. This folder contains the source code for the JUnit test. The project can
-be compiled and deployed following the instructions below.
-
-![Package explorer](./images/jmeter-deploy25.png)
-
-## Deploying a JUnit test to JMeter
-This procedure assumes you have created a project named LoadTest containing a JUnit Test class named
-`BulkLoadTest.java` that accepts configuration parameters passed in as a single string to a
-constructor (this is the mechanism that JMeter expects).
-
-In the Eclipse IDE, in **Package Explorer**, right-click the project node, and then click **Export**.
-
-![Export menu item](./images/jmeter-deploy26.png)
-
-In the **Export Wizard**, on the **Select** page, expand the **Java** node, click **JAR file**, and then click
-**Next**.
-
-![Export dialog](./images/jmeter-deploy27.png)
-
-On the **JAR File Specification** page, in the **Select the resources to export** box, expand the project,
-deselect **.project**, and deselect **pom.xml**. In the **JAR file** box, provide a file name and location
-for the JAR (it should be given the .jar file extension), and then click **Finish**.
-
-![JAR Export dialog](./images/jmeter-deploy28.png)
-
-Using Windows Explorer, copy the JAR file you have just created to the JMeter master VM and save it
-in the apache-jmeter-2.13\\lib\\junit folder underneath the folder where you have installed JMeter
-(see the procedure "Creating the JMeter master virtual machine" in [Creating a performance testing environment for Elasticsearch on Azure](./performance-testing-environment.md) for more information.)
-
-Return to Eclipse, expand the **Package Explorer** window and make a note of all the JAR files and their
-locations listed in the Maven Dependencies folder for the project. Note that the files displayed
-in the following image might vary, depending on which version of Elasticsearch you are using:
-
-![Package Explorer](./images/jmeter-deploy29.png)
-
-Using Windows Explorer, copy each JAR file referenced in the Maven Dependencies folder to the
-apache-jmeter-2.13\\lib\\junit folder on the JMeter master VM.
-
-If the lib\\junit folder already contains older versions of these JAR files then remove them.
-If you leave them in place then the JUnit test might not work as references could be resolved to the
-wrong JARs.
-
-On the JMeter master VM, stop JMeter if it is currently running. Start JMeter. In JMeter, right-click
-**Test Plan**, click **Add**, click **Threads (Users)**, and then click **Thread Group**.
-
-![Thread Group menu item](./images/jmeter-deploy30.png)
-
-Under the **Test Plan** node, right-click **Thread Group**, click **Add**, click **Sampler**, and then click
-**JUnit Request**.
-
-![JUnit Request menu item](./images/jmeter-deploy31.png)
-
-On the **JUnit Request** page, select **Search for JUnit4 annotations (instead of JUnit 3)**. In the
-**Classname** drop-down list, select your JUnit load test class (it will be listed in the form
-*<package>.<class>*), in the **Test Method** drop-down list select the JUnit test method
-(this is the method that actually performs the work associated with the test and should have been marked
-with the *@Test* annotation in the Eclipse project), and enter any values to be passed to the constructor
-in the **Constructor String Label** box. The details shown in the following image are just examples;
-your **Classname**, **Test Method**, and **Constructor String Label** will probably differ from those shown.
-
-![JUnit Request UI](./images/jmeter-deploy32.png)
-
-If your class does not appear in the **Classname** drop-down list, it probably means that the JAR was
-not exported properly or has not been placed in the lib\\junit folder, or some of the dependent JARs
-are missing from the lib\\junit folder. If this occurs, export the project from Eclipse again and
-ensure that you have selected the **src** resource, copy the JAR to the lib\\junit folder, and then
-verify that you have copied all of the dependent JARs listed by Maven to the lib folder.
-
-Close JMeter. There is no need to save the test plan. Copy the JAR file containing the JUnit test class
-to the /home/<username>/apache-jmeter-2.13/lib/junit folder on each of the JMeter subordinate
-VMs (*<username>* is the name of the administrative user you specified when you created the VM,
-see the procedure "Creating the JMeter subordinate virtual machines" in [Creating a performance testing environment for Elasticsearch on Azure](./performance-testing-environment.md) for more information.)
-
-Copy the dependent JAR files required by the JUnit test class to the
-/home/<username>/apache-jmeter-2.13/lib/junit folder on each of the JMeter subordinate VMs.
-Make sure to remove any older versions of JAR files from this folder first.
-
-You can use the `pscp` utility to copy files from a Windows computer to Linux.
-
-[Creating a Performance Testing Environment for Elasticsearch on Azure]: ./performance-testing-environment.md
diff --git a/docs/elasticsearch/jmeter-test-plan.md b/docs/elasticsearch/jmeter-test-plan.md
deleted file mode 100644
index 5e1f8e747ef..00000000000
--- a/docs/elasticsearch/jmeter-test-plan.md
+++ /dev/null
@@ -1,151 +0,0 @@
----
-title: Implement a JMeter test plan for Elasticsearch
-description: How to run performance tests for Elasticsearch with JMeter.
-author: dragon119
-ms.date: 09/22/2016
-ms.custom: elasticsearch
-
-pnp.series.title: Elasticsearch on Azure
-pnp.series.prev: performance-testing-environment
-pnp.series.next: deploy-jmeter-junit-sampler
----
-# Implement a JMeter test plan
-[!INCLUDE [header](../_includes/header.md)]
-
-The performance tests conducted against Elasticsearch were implemented using JMeter test plans
-together with Java code incorporated as a JUnit test for performing tasks such as uploading data into
-the cluster. The test plans and JUnit code are described in [Tuning data ingestion performance for Elasticsearch on Azure][Tuning data ingestion performance for Elasticsearch on Azure], and
-[Tuning data aggregation and query performance for Elasticsearch on Azure][Tuning data aggregation and query performance for Elasticsearch on Azure].
-
-The purpose of this document is to summarize the key experience gained from constructing and running
-these test plans. The [JMeter Best Practices](http://jmeter.apache.org/usermanual/best-practices.html)
-page on the Apache JMeter website contains more generalized advice on using JMeter effectively.
-
-## Implementing a JMeter test plan
-The following list summarizes the items you should consider when creating a JMeter test plan:
-
-* Create a separate thread group for each test you wish to perform. A test can contain multiple steps,
- including logic controllers, timers, pre- and post-processors, samplers, and listeners.
-* Avoid creating too many threads in a thread group. An excessive number of threads will cause
- JMeter to fail with "Out of memory" exceptions. It is better to add more JMeter subordinate servers
- each running a smaller number of threads than attempt to run a large number of threads on a single
- JMeter server.
-
-![Thread group UI](./images/jmeter-testing1.png)
-
-* To assess the performance of the cluster, incorporate the
- [Perfmon Metrics Collector](http://jmeter-plugins.org/wiki/PerfMon/) plugin into the test plan. This is a JMeter listener that is available as one of the standard JMeter plugins. Save the raw performance data to a set of files in comma-separated values (CSV) format and process them when the test is complete. This is more efficient and imposes less strain on JMeter than attempting to process the data as it is captured.
-
-![PerfMon Metrics Collector UI](./images/jmeter-testing2.png)
-
-You can use a tool such as Excel to import the data and generate a range of graphs for analytical purposes.
-
-Consider capturing the following information:
-
-* CPU utilization for every node in the Elasticsearch cluster.
-* The number of bytes read per second from disk for every node.
-* If possible, the percentage of CPU time spent waiting for I/O to be performed on each node.
- This is not always possible for Windows VMs, but for Linux you can create a custom metric (an EXEC metric)
- that runs the following shell command to invoke *vmstat* on a node:
-
-```Shell
-sh:-c:vmstat 1 5 | awk 'BEGIN { line=0;total=0;}{line=line+1;if(line>1){total=total+$16;}}END{print total/4}'
-```
-
-Field 16 in the output from *vmstat* contains the CPU time spent waiting for I/O. For more information
-about how this statement works, see the [vmstat command](http://linuxcommand.org/man_pages/vmstat8.html).
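For clarity, here is the same calculation the *awk* one-liner performs, sketched in Java: skip the first *vmstat* sample (which reports averages since boot) and average field 16 over the remaining samples. The input lines are illustrative, not real *vmstat* output:

```java
// Mirrors the awk script above: ignore the first vmstat data line and
// average the "wa" column (field 16, zero-based index 15) of the rest.
public class IowaitAverage {
    public static double averageIowait(String[] dataLines) {
        double total = 0;
        int samples = 0;
        for (int i = 1; i < dataLines.length; i++) { // skip the first sample
            String[] fields = dataLines[i].trim().split("\\s+");
            total += Double.parseDouble(fields[15]);
            samples++;
        }
        return samples == 0 ? 0.0 : total / samples;
    }

    public static void main(String[] args) {
        String[] lines = {
            "1 0 0 100 200 300 0 0 10 20 30 40 5 2 80 13 0", // skipped
            "1 0 0 100 200 300 0 0 10 20 30 40 5 2 85 10 0",
            "1 0 0 100 200 300 0 0 10 20 30 40 5 2 75 20 0"
        };
        System.out.println(IowaitAverage.averageIowait(lines)); // 15.0
    }
}
```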
-
-* The number of bytes sent and received across the network to each node.
-* Use separate Aggregate Report listeners to record the performance and frequency of successful and
- failed operations. Capture success and failure data to different files.
-
-![Aggregate Report UI](./images/jmeter-testing3.png)
-
-* Keep each JMeter test case as simple as possible to enable you to directly correlate performance
- with specific test actions. For test cases that require complex logic, consider encapsulating
- this logic in a JUnit test and use the JUnit Request sampler in JMeter to run the test.
-* Use the HTTP Request sampler to perform HTTP operations, such as GET, POST, PUT, or DELETE.
- For example, you can run Elasticsearch searches by using a POST query and providing the query
- details in the *Body Data* box:
-
-![HTTP Request UI](./images/jmeter-testing4.png)
-
-* For ease of repeatability and reuse, parameterize JMeter test plans. You can then use
-  scripting to automate the running of test plans.
-
-## Implementing a JUnit test
-You can incorporate complex code into a JMeter test plan by creating one or more JUnit tests.
-You can write a JUnit test by using a Java integrated development environment (IDE) such as Eclipse. [Deploying a JMeter JUnit sampler for testing Elasticsearch performance][Deploying a JMeter JUnit sampler for testing Elasticsearch performance]
-provides information on how to set up an appropriate development environment.
-
-The following list summarizes some best practices you should follow when writing the code for a
-JUnit test:
-
-* Use the test class constructor to pass initialization parameters into the test. JMeter can use a
- constructor that takes a single string argument. In the constructor, parse this argument into its
- individual elements, as shown by the following code example:
-
-```Java
-private String hostName = "";
-private String indexName = "";
-private String typeName = "";
-private int port = 0;
-private String clusterName = "";
-private int itemsPerBatch = 0;
-
-/* JUnit test class constructor */
-public ElasticsearchLoadTest2(String params) {
- /* params is a string containing a set of comma separated values for:
- hostName
- indexName
- typeName
- port
- clustername
- itemsPerBatch
- */
-
- /* Parse the parameter string into an array of string items */
-    String delims = "[ ]*,[ ]*"; // comma surrounded by zero or more spaces
-    String[] items = params.split(delims);
-
- /* Note: Parameter validation code omitted */
-
- /* Use the parameters to populate variables used by the test */
- hostName = items[0];
- indexName = items[1];
- typeName = items[2];
- port = Integer.parseInt(items[3]);
- clusterName = items[4];
- itemsPerBatch = Integer.parseInt(items[5]);
-
- if(itemsPerBatch == 0)
- itemsPerBatch = 1000;
-}
-```
-
-* Avoid I/O operations or other time-consuming operations in the constructor or test class setup, because they execute each time the JUnit test runs. (The same JUnit test can run many thousands of times for each performance test executed from JMeter.)
-* Consider using one-time setup for expensive test case initialization.
-* If the test requires a large number of input parameters, store test configuration information
- in a separate configuration file and pass the location of this file into the constructor.
-* Avoid hard coding file paths in the load test code. These can cause failures due to differences
- between operating systems such as Windows and Linux.
-* Use assertions to indicate failures in JUnit test methods so that you can track them with
-  JMeter and use them as business metrics. If possible, pass back information concerning the cause
-  of the failure, as shown in the following code example:
-
-```Java
-@Test
-public void bulkInsertTest() throws IOException {
- ...
- BulkResponse bulkResponse = bulkRequest.execute().actionGet();
- assertFalse(
- bulkResponse.buildFailureMessage(), bulkResponse.hasFailures());
- ...
-}
-```
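The one-time setup suggested above can be sketched with a lazily initialized static resource; in a JUnit 4 test the same effect is achieved with a `@BeforeClass` method. The resource here is simulated (a hypothetical stand-in for an Elasticsearch client, not the actual test code used in this guidance):

```java
// One-time-setup pattern: the expensive initialization runs only once,
// no matter how many times JMeter invokes the test method.
public class OneTimeSetup {
    private static int initCount = 0;
    private static String client; // stand-in for an expensive resource

    static synchronized String getClient() {
        if (client == null) {
            initCount++;          // expensive work happens here, exactly once
            client = "connected";
        }
        return client;
    }

    static int getInitCount() {
        return initCount;
    }

    public static void main(String[] args) {
        OneTimeSetup.getClient();
        OneTimeSetup.getClient(); // second call reuses the resource
        System.out.println(OneTimeSetup.getInitCount()); // 1
    }
}
```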
-
-
-[Running Elasticsearch on Azure]: index.md
-[Tuning Data Ingestion Performance for Elasticsearch on Azure]: data-ingestion-performance.md
-[Deploying a JMeter JUnit Sampler for Testing Elasticsearch Performance]: jmeter-junit-sampler.md
-[Tuning Data Aggregation and Query Performance for Elasticsearch on Azure]: data-aggregation-and-query-performance.md
diff --git a/docs/elasticsearch/performance-testing-environment.md b/docs/elasticsearch/performance-testing-environment.md
deleted file mode 100644
index 4f3e18a7690..00000000000
--- a/docs/elasticsearch/performance-testing-environment.md
+++ /dev/null
@@ -1,491 +0,0 @@
----
-title: Create a performance testing environment for Elasticsearch
-description: >-
- How to set up an environment for testing the performance of an Elasticsearch
- cluster.
-author: dragon119
-ms.date: 09/22/2016
-ms.custom: elasticsearch
-
-pnp.series.title: Elasticsearch on Azure
-pnp.series.prev: resilience-and-recovery
-pnp.series.next: implement-jmeter-test-plan
----
-# Create a performance testing environment
-[!INCLUDE [header](../_includes/header.md)]
-
-This document describes how to set up an environment for testing the performance of an Elasticsearch
-cluster. This configuration was used to test the performance of data ingestion and query workloads,
-as described in [Tuning data ingestion performance for Elasticsearch on Azure][Tuning data ingestion performance for Elasticsearch on Azure].
-
-The performance testing process used [Apache JMeter](http://jmeter.apache.org/), with the
-[standard set](http://jmeter-plugins.org/wiki/StandardSet/) of plugins installed in a master/subordinate
-configuration using a set of dedicated VMs (not part of the Elasticsearch cluster) specifically configured
-for the purpose.
-
-The [PerfMon Server Agent](http://jmeter-plugins.org/wiki/PerfMonAgent/) was installed on each
-Elasticsearch node. The following sections provide instructions for recreating the test environment to
-enable you to conduct your own performance testing with JMeter. These instructions assume that you have
-already created an Elasticsearch cluster with nodes connected using an Azure virtual network.
-
-Note that the test environment also runs as a set of Azure VMs managed by using a single Azure
-resource group.
-
-[Marvel](https://www.elastic.co/products/marvel) was also installed and configured to enable the
-internal aspects of the Elasticsearch cluster to be monitored and analyzed more easily.
-If the JMeter statistics showed a peak or trough in performance, information available through Marvel
-can be invaluable to help determine the cause of the fluctuations.
-
-The following image shows the structure of the entire system.
-
-![Elasticsearch architecture](./images/performance-structure.png)
-
-Note the following points:
-
-* The JMeter master VM runs Windows Server to provide the GUI environment for the JMeter console.
- The JMeter Master VM provides the GUI (the *jmeter* application) to enable a tester to create tests,
- run tests, and visualize the results. This VM coordinates with the JMeter server VMs that actually
- send the requests that constitute the tests.
-* The JMeter subordinate VMs run Ubuntu Server (Linux); there is no GUI requirement for these VMs.
- The JMeter server VMs run the JMeter server software (the *jmeter-server* application) to send
- requests to the Elasticsearch cluster.
-* Dedicated client nodes were not used, although dedicated master nodes were.
-* The number of data nodes in the cluster can vary, depending on the scenario being tested.
-* All nodes in the Elasticsearch cluster run Marvel to observe performance at runtime, and the
- JMeter Server Agent to collect monitoring data for later analysis.
-* When testing Elasticsearch 2.0.0 and later, one of the data nodes also runs Kibana. This is
- required by the version of Marvel that runs on Elasticsearch 2.0.0 and later.
-
-## Creating an Azure resource group for the virtual machines
-The JMeter master needs to be able to connect directly to each of the nodes in the Elasticsearch cluster
-to gather performance data. If the JMeter VNet is distinct from the Elasticsearch cluster VNet, then this
-entails configuring each Elasticsearch node with a public IP address. If this is a problem with your
-Elasticsearch configuration, then consider implementing the JMeter VMs in the same VNet as the
-Elasticsearch cluster by using the same resource group, in which case you can omit this first procedure.
-
-First, [create a resource group](/azure/azure-resource-manager/resource-group-template-deploy-portal/).
-This document assumes that your resource group is named *JMeterPerformanceTest*. If you wish to run the
-JMeter VMs in the same VNet as the Elasticsearch cluster, use the same resource group as that cluster
-instead of creating a new one.
-
-## Creating the JMeter master virtual machine
-Next [create a Windows VM](/azure/virtual-machines/virtual-machines-windows-hero-tutorial/?toc=%2fazure%2fvirtual-machines%2fwindows%2ftoc.json) using the
-*Windows Server 2008 R2 SP1* image. We recommend selecting a VM size with sufficient cores and memory
-to run the performance tests. Ideally this will be a machine with at least 2 cores and 3.5GB of RAM
-(A2 Standard or bigger).
-
-
-
-We recommend that you disable the diagnostics. When creating the VM in the portal, this is done on the
-*Settings* blade in the *Monitoring* section under *Diagnostics*. Leave the other settings at their
-default values.
-
-Verify that the VM and all the associated resources have been created successfully by
-[examining the resource group](/azure/azure-resource-manager/resource-group-portal/#manage-resource-groups) in the portal.
-The resources listed should consist of a VM, a network security group, and a public IP address, all with
-the same name, and a network interface and storage account with names based on that of the VM.
-
-## Creating the JMeter subordinate virtual machines
-Now [create a Linux VM](/azure/virtual-machines/virtual-machines-linux-quick-create-portal/?toc=%2fazure%2fvirtual-machines%2flinux%2ftoc.json) using the
-*Ubuntu Server 14.04 LTS* image. As with the JMeter master VM, select a VM size with sufficient cores
-and memory to run the performance tests. Ideally this will be a machine with at least 2 cores, and at
-least 3.5GB of RAM (Standard A2 or bigger).
-
-Again, we recommend that you disable the diagnostics.
-
-You can create as many subordinate VMs as you wish.
-
-## Installing JMeter server on the JMeter subordinate VMs
-The JMeter subordinate VMs are running Linux and by default you cannot connect to them by opening a
-remote desktop connection (RDP). Instead, you can
-[use PuTTY to open a command line window](/azure/virtual-machines/virtual-machines-linux-mac-create-ssh-keys/?toc=%2fazure%2fvirtual-machines%2flinux%2ftoc.json) on each VM.
-
-Once you've connected to one of the subordinate VMs, we'll use Bash to set up JMeter.
-
-First, install the Java Runtime Environment required to run JMeter.
-
-```bash
-sudo add-apt-repository ppa:webupd8team/java
-sudo apt-get update
-sudo apt-get install oracle-java8-installer
-```
-
-Now, download the JMeter software packaged as a zip file.
-
-```bash
-wget http://apache.mirror.anlx.net/jmeter/binaries/apache-jmeter-2.13.zip
-```
-
-Install the unzip command, then use it to expand the JMeter software. The software is copied to a folder
-named **apache-jmeter-2.13**.
-
-```bash
-sudo apt-get install unzip
-unzip apache-jmeter-2.13.zip
-```
-
-Change to the *bin* directory holding the JMeter executables, and make the *jmeter-server* and
-*jmeter* programs executable.
-
-```bash
-cd apache-jmeter-2.13/bin
-chmod u+x jmeter-server
-chmod u+x jmeter
-```
-
-Now, we need to edit the file `jmeter.properties` located in the current folder (use the text
-editor you're most familiar with, such as *vi* or *vim*). Locate the following lines:
-
-```yaml
-...
-#client.rmi.localport=0
-...
-#server.rmi.localport=4000
-...
-```
-
-Uncomment (remove the leading `#` characters) and modify these lines as shown below, then save the
-file and close the editor:
-
-```yaml
-...
-client.rmi.localport=4441
-...
-server.rmi.localport=4440
-```
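
If you are scripting the subordinate setup rather than editing by hand, the same change can be applied non-interactively. This is a sketch that assumes the stock layout of `jmeter.properties` and GNU `sed`:

```shell
# Rewrites the two RMI port settings in place, whether or not they are
# still commented out. Run from the apache-jmeter-2.13/bin directory.
sed -i \
  -e 's/^#\{0,2\}client\.rmi\.localport=.*/client.rmi.localport=4441/' \
  -e 's/^#\{0,2\}server\.rmi\.localport=.*/server.rmi.localport=4440/' \
  jmeter.properties
grep -E '^(client|server)\.rmi\.localport' jmeter.properties
```

Running this before starting *jmeter-server* avoids any interactive editing step on each subordinate VM.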
-
-Now, run the following command to open port 4441 to incoming TCP traffic (this is the port you have just configured *jmeter-server* to listen on):
-
-```bash
-sudo iptables -A INPUT -m state --state NEW -m tcp -p tcp --dport 4441 -j ACCEPT
-```
-
-Download the zip file containing the standard collection of plugins for JMeter (these plugins provide
-performance monitoring counters) and then unzip the file to the **apache-jmeter-2.13** folder. Unzipping the file in this location places
-the plugins in the correct folder.
-
-If you are prompted to replace the LICENSE file, type A (for all):
-
-```bash
-wget http://jmeter-plugins.org/downloads/file/JMeterPlugins-Standard-1.3.0.zip
-unzip JMeterPlugins-Standard-1.3.0.zip
-```
-
-Use `nohup` to launch the JMeter server in the background. It should respond by displaying a
-process ID and a message indicating that it has created a remote object and is ready to start
-receiving commands. Run the following command in the ~/apache-jmeter-2.13/bin directory.
-
-```bash
-nohup jmeter-server &
-```
-
-> [!NOTE]
-> If the VM is shut down, the JMeter server program is terminated. You will need to
-> connect to the VM and restart it manually. Alternatively, you can configure the system to run
-> the *jmeter-server* command automatically on startup by adding the following commands to the
-> `/etc/rc.local` file (before the *exit 0* command):
->
->
-
-```bash
-sudo -u <username> bash << eoc
-cd /home/<username>/apache-jmeter-2.13/bin
-nohup ./jmeter-server &
-eoc
-```
-
-Replace `<username>` with your login name.
-
-You may find it useful to keep the terminal window open so that you can monitor the progress of the
-JMeter server while testing is in progress.
-
-You will need to repeat these steps for each JMeter subordinate VM.
-
-## Installing the JMeter Server Agent on the Elasticsearch nodes
-This procedure assumes that you have login access to the Elasticsearch nodes. If you have created the
-cluster using the Resource Manager template, you can connect to each node through the jump box VM, as illustrated
-in the Elasticsearch topology section of [Running Elasticsearch on Azure](./index.md). You can connect to the jump box using PuTTY as well.
-
-From there, you can use the *ssh* command to log in to each of the nodes in the Elasticsearch cluster.
-
-Log in to one of the Elasticsearch nodes as an administrator. At the Bash command prompt, enter the
-following commands to create a folder for holding the JMeter Server Agent and move to that folder:
-
-```bash
-mkdir server-agent
-cd server-agent
-```
-
-Run the following commands to install the *unzip* command (if it is not already installed),
-download the JMeter Server Agent software, and unzip it:
-
-```bash
-sudo apt-get install unzip
-wget http://jmeter-plugins.org/downloads/file/ServerAgent-2.2.1.zip
-unzip ServerAgent-2.2.1.zip
-```
-
-Run the following command to configure the firewall and enable TCP traffic to pass through
-port 4444 (this is the port used by the JMeter Server Agent):
-
-```bash
-sudo iptables -A INPUT -m state --state NEW -m tcp -p tcp --dport 4444 -j ACCEPT
-```
-
-Run the following command to start the JMeter Server Agent in the background:
-
-```bash
-nohup ./startAgent.sh &
-```
-
-The JMeter Server Agent should respond with messages indicating that it has started and is
-listening on port 4444. Press Enter to obtain a command prompt, and then run the following command.
-
-```bash
-telnet <node> 4444
-```
-
-Replace `<node>` with the name of your node. (You can find the name of your node by running the `hostname` command.) This command opens a telnet connection to port 4444 on your local machine. You can use this
-connection to verify that the JMeter Server Agent is running correctly.
-
-If the JMeter Server Agent is not running, you will receive the response:
-
-`telnet: Unable to connect to remote host: Connection refused`
-
-If the JMeter Server Agent is running and port 4444 has been configured correctly, you should see
-the following response:
-
-![JMeter Server Agent](./images/performance-telnet-server.png)
-
-> [!NOTE]
-> The telnet session does not provide any sort of prompt once it has connected.
->
->
-
-In the telnet session, type the following command:
-
-```
-test
-```
-
-If the JMeter Server Agent is configured and listening correctly, it should indicate that it
-received the command and respond with the message *Yep*.
-
-> [!NOTE]
-> You can type in other commands to obtain performance monitoring data. For example,
-> the command `metric-single:cpu:idle` will give you the current proportion of the time that the CPU
-> is idle (this is a snapshot). For a complete list of commands, visit the
-> [PerfMon Server Agent](http://jmeter-plugins.org/wiki/PerfMonAgent/) page.
->
->
-
-In the telnet session, type the following command to quit the session and return to the Bash
-command prompt:
-
-```
-exit
-```
-
-> [!NOTE]
-> As with the JMeter subordinate VMs, if you log out, or if this machine is shut down
-> and restarted, then the JMeter Server Agent will need to be restarted manually by using the
-> `startAgent.sh` command. If you want the JMeter Server Agent to start automatically, add the
-> following command to the end of the `/etc/rc.local` file, before the *exit 0* command.
-> Replace `<username>` with your login name:
->
->
-
-```bash
-sudo -u <username> bash << eoc
-cd /home/<username>/server-agent
-nohup ./startAgent.sh &
-eoc
-```
-
-You can now either repeat this entire process for every other node in the Elasticsearch cluster, or
-you can use the `scp` command to copy the server-agent folder and contents to every
-other node and use the `ssh` command to start the JMeter Server Agent as shown below.
-Replace `<username>` with your username, and `<node>` with the name of the node where you
-wish to copy and run the software (you may be asked to provide your password as you run each command):
-
-```bash
-scp -r ~/server-agent <username>@<node>:~
-ssh <username>@<node> sudo iptables -A INPUT -m state --state NEW -m tcp -p tcp --dport 4444 -j ACCEPT
-ssh -n -f <username>@<node> 'nohup ~/server-agent/startAgent.sh &'
-```
-
-## Installing and configuring JMeter on the JMeter master VM
-In the Azure portal, click **Resource groups**. In the **Resource groups** blade, click the resource group containing the JMeter master and subordinate VMs. In the **Resource group** blade, click the **JMeter master VM**. In the virtual machine blade, on the toolbar, click **Connect**. Open the RDP file when prompted by the web
-browser. Windows creates a remote desktop connection to your VM. Enter the username and password for the VM when prompted.
-
-In the VM, using Internet Explorer, go to the [Download Java for Windows](http://www.java.com/en/download/ie_manual.jsp)
-page. Follow the instructions to download and run the Java installer.
-
-In the web browser, go to the [Download Apache JMeter](http://jmeter.apache.org/download_jmeter.cgi)
-page and download the zip containing the most recent binary. Save the zip in a convenient location on your VM.
-
-Go to the [Custom JMeter Plugins](http://jmeter-plugins.org/) site and download the Standard Set of plugins.
-Save the zip in the same folder as the JMeter download from the previous step.
-
-In Windows Explorer, go to the folder containing the apache-jmeter-*xxx* zip file, where *xxx* is the
-current version of JMeter. Extract the files into the current folder.
-
-Extract the files in the JMeterPlugins-Standard-*yyy*.zip file, where *yyy* is the current version of the plugins, into the apache-jmeter-*xxx* folder. This will add the plugins to the correct folder for
-JMeter. You can safely merge the lib folders, and overwrite the license and readme files if prompted.
-
-Go to the apache-jmeter-*xxx*/bin folder and edit the jmeter.properties file using Notepad. In the
-`jmeter.properties` file, find the section labelled *Remote hosts and RMI configuration*. In this
-section of the file, find the following line:
-
-```yaml
-remote_hosts=127.0.0.1
-```
-
-Change this line and replace the IP address 127.0.0.1 with a comma separated list of IP addresses or
-host names for each of the JMeter subordinate servers. For example:
-
-```yaml
-remote_hosts=JMeterSub1,JMeterSub2
-```
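
If you maintain the list of subordinates in a script, the comma-separated value can be generated rather than typed by hand. A minimal sketch using the example host names above:

```shell
# Build the comma-separated remote_hosts value from a list of subordinate
# host names (the names here match the example above).
hosts=""
for h in JMeterSub1 JMeterSub2; do
  hosts="${hosts:+$hosts,}$h"
done
echo "remote_hosts=$hosts"
```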
-
-Find the following line, then remove the `#` character at the start of this line, and modify the value
-of the `client.rmi.localport` setting from:
-
-```yaml
-#client.rmi.localport=0
-```
-
-to:
-
-```yaml
-client.rmi.localport=4440
-```
-
-Save the file and close Notepad.
-
-In the Windows toolbar, click **Start**, click **Administrative Tools**, and then click
-**Windows Firewall with Advanced Security**. In the Windows Firewall with Advanced Security window,
-in the left pane, right-click **Inbound Rules**, and then click **New Rule**.
-
-In the **New Inbound Rule Wizard**, on the **Rule Type** page, select **Port**, and then click **Next**. On
-the Protocols and Ports page, select **TCP**, select **Specific local ports**, in the text box type
-`4440-4444`, and then click **Next**. On the Action page, select **Allow the connection**, and then click **Next**. On the Profile page, leave all options checked and then click **Next**. On the Name page, in the **Name** text box type *JMeter*, and then click **Finish**. Close the Windows Firewall with Advanced Security window.
-
-In Windows Explorer, in the apache-jmeter-*xxx*/bin folder, double-click the *jmeter* Windows batch
-file to start the GUI. The user interface should appear:
-
-![Apache JMeter UI](./images/performance-image17.png)
-
-In the menu bar, click **Run**, click **Remote Start**, and verify that the two JMeter subordinate
-machines are listed:
-
-![Remote Start menu item](./images/performance-image18.png)
-
-You are now ready to begin performance testing.
-
-## Installing and configuring Marvel
-The Elasticsearch Quickstart Template for Azure will install and configure the appropriate version of
-Marvel automatically if you set the MARVEL and KIBANA parameters to true ("yes") when building the cluster:
-
-![Elasticsearch quickstart parameters](./images/performance-image19.png)
-
-If you are adding Marvel to an existing cluster you need to perform the installation manually, and the process is different depending on whether you are using Elasticsearch version 1.7.x or 2.x, as described in the following procedures.
-
-### Installing Marvel with Elasticsearch 1.7.3 or earlier
-If you are using Elasticsearch 1.7.3 or earlier, perform the following steps *on every node* in the
-cluster:
-
-* Log in to the node and move to the Elasticsearch home directory. On Linux, the typical home directory
- is `/usr/share/elasticsearch`.
-* Run the following command to download and install the Marvel plugin for Elasticsearch:
-
-```bash
-sudo bin/plugin -i elasticsearch/marvel/latest
-```
-
-* Stop and restart Elasticsearch on the node:
-
-```bash
-sudo service elasticsearch restart
-```
-
-* To verify that Marvel was installed correctly, open a web browser and go to the
-  URL `http://<server>:9200/_plugin/marvel`. Replace `<server>` with the name or IP address of
-  any Elasticsearch server in the cluster. Verify that a page similar to that shown below appears:
-
-![Marvel UI](./images/performance-image20.png)
-
-### Installing Marvel with Elasticsearch 2.0.0 or later
-If you are using Elasticsearch 2.0.0 or later, perform the following tasks *on every node* in the cluster:
-
-Log in to the node and move to the Elasticsearch home directory (typically `/usr/share/elasticsearch`). Run
-the following commands to download and install the Marvel plugin for Elasticsearch:
-
-```bash
-sudo bin/plugin install license
-sudo bin/plugin install marvel-agent
-```
-
-Stop and restart Elasticsearch on the node:
-
-```bash
-sudo service elasticsearch restart
-```
-
-In the following procedure, replace `<kibana-version>` with 4.2.2 if you are using Elasticsearch 2.0.0
-or Elasticsearch 2.0.1, or with 4.3.1 if you are using Elasticsearch 2.1.0 or later. Replace
-`<marvel-version>` with 2.0.0 if you are using Elasticsearch 2.0.0 or Elasticsearch 2.0.1, or
-with 2.1.0 if you are using Elasticsearch 2.1.0 or later. Perform the following tasks *on one node*
-in the cluster:
-
-Log in to the node and download the appropriate build of Kibana for your version of Elasticsearch
-from the [Elasticsearch download web site](https://www.elastic.co/downloads/past-releases), then extract
-the package:
-
-```bash
-wget https://download.elastic.co/kibana/kibana/kibana-<kibana-version>-linux-x64.tar.gz
-tar xvzf kibana-<kibana-version>-linux-x64.tar.gz
-```
-
-Open port 5601 to accept incoming requests:
-
-```bash
-sudo iptables -A INPUT -m state --state NEW -m tcp -p tcp --dport 5601 -j ACCEPT
-```
-
-Move to the Kibana config folder (`kibana-<kibana-version>-linux-x64/config`), edit the `kibana.yml`
-file, and add the following line. Replace `<server>` with the name or IP address of an Elasticsearch
-server in the cluster:
-
-```yaml
-elasticsearch.url: "http://<server>:9200"
-```
-
-Move to the Kibana bin folder (`kibana-<kibana-version>-linux-x64/bin`), and run the following
-command to integrate the Marvel plugin into Kibana:
-
-```bash
-sudo ./kibana plugin --install elasticsearch/marvel/<marvel-version>
-```
-
-Start Kibana:
-
-```bash
-sudo nohup ./kibana &
-```
-
-To verify the Marvel installation, open a web browser and go to the URL
-`http://<kibana-server>:5601/app/marvel`. Replace `<kibana-server>` with the name or IP address
-of the server running Kibana.
-
-Verify that a page similar to that shown below appears (the name of your cluster will likely
-vary from that shown in the image).
-
-![Marvel](./images/performance-image21.png)
-
-Click the link that corresponds to your cluster (elasticsearch210 in the image above). A page
-similar to that shown below should appear:
-
-![Elasticsearch cluster](./images/performance-image22.png)
-
-[Tuning Data Ingestion Performance for Elasticsearch on Azure]: ./data-ingestion-performance.md
diff --git a/docs/elasticsearch/resilience-and-recovery.md b/docs/elasticsearch/resilience-and-recovery.md
deleted file mode 100644
index e26e430e225..00000000000
--- a/docs/elasticsearch/resilience-and-recovery.md
+++ /dev/null
@@ -1,315 +0,0 @@
----
-title: Configure resilience and recovery on Elasticsearch on Azure
-description: Considerations related to resiliency and recovery for Elasticsearch.
-author: dragon119
-ms.date: 09/22/2016
-ms.custom: elasticsearch
-
-pnp.series.title: Elasticsearch on Azure
-pnp.series.prev: data-aggregation-and-query-performance
-pnp.series.next: performance-testing-environment
----
-# Configure resilience and recovery
-[!INCLUDE [header](../_includes/header.md)]
-
-A key feature of Elasticsearch is the support that it provides for resiliency in the event of node failures and/or network partition events. Replication is the most obvious way in which you can improve the resiliency of any cluster, enabling Elasticsearch to ensure that more than one copy of any data item is available on different nodes in case one node should become inaccessible. If a node becomes temporarily unavailable, other nodes containing replicas of data from the missing node can serve the missing data until the problem is resolved. In the event of a longer term issue, the missing node can be replaced with a new one, and Elasticsearch can restore the data to the new node from the replicas.
-
-Here we summarize the resiliency and recovery options available with Elasticsearch when hosted in Azure, and describe some important aspects of an Elasticsearch cluster that you should consider to minimize the chances of data loss and extended data recovery times.
-
-This article also illustrates some sample tests that were performed to show the effects of different types of failures on an Elasticsearch cluster, and how the system responds as it recovers.
-
-An Elasticsearch cluster uses replicas to maintain availability and improve read performance. Replicas should be stored on different VMs from the primary shards that they replicate. The intention is that if the VM hosting a data node fails or becomes unavailable, the system can continue functioning using the VMs holding the replicas.
-
-## Using dedicated master nodes
-One node in an Elasticsearch cluster is elected as the master node. The purpose of this node is to perform cluster management operations such as:
-
-* Detecting failed nodes and switching over to replicas.
-* Relocating shards to balance node workload.
-* Recovering shards when a node is brought back online.
-
-You should consider using dedicated master nodes in critical clusters, and ensure that there are 3 dedicated nodes whose only role is to be master. This configuration reduces the amount of resource intensive work that these nodes have to perform (they do not store data or handle queries) and helps to improve cluster stability. Only one of these nodes will be elected, but the others will contain a copy of the system state and can take over should the elected master fail.
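
As a sketch, a dedicated master node for Elasticsearch 1.x/2.x can be configured in elasticsearch.yml as follows; the *minimum_master_nodes* value assumes the three dedicated master-eligible nodes recommended above:

```yaml
# Dedicated master: eligible to be elected master, holds no data.
node.master: true
node.data: false

# With three master-eligible nodes, require a majority to avoid split brain.
discovery.zen.minimum_master_nodes: 2
```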
-
-## Controlling high availability with Azure – update domains and fault domains
-Different VMs can share the same physical hardware. In an Azure datacenter, a single rack can host a number of VMs, and all of these VMs share a common power source and network switch. A single rack-level failure can therefore impact a number of VMs. Azure uses the concept of fault domains to spread this risk. A fault domain roughly corresponds to a group of VMs that share the same rack. To ensure that a rack-level failure does not take down a node and the nodes holding all of its replicas simultaneously, you should ensure that the VMs are distributed across fault domains.
-
-Similarly, VMs can be taken down by the [Azure Fabric Controller](https://azure.microsoft.com/documentation/videos/fabric-controller-internals-building-and-updating-high-availability-apps/) to perform planned maintenance and operating system upgrades. Azure allocates VMs to update domains. When a planned maintenance event occurs, only VMs in a single update domain are affected at any one time. VMs in other update domains are left running until the VMs in the update domain being updated are brought back online. Therefore, you also need to ensure that VMs hosting nodes and their replicas belong to different update domains wherever possible.
-
-> [!NOTE]
-> For more information about fault domains and update domains, see [Manage the availability of virtual machines](/azure/virtual-machines/virtual-machines-linux-manage-availability/?toc=%2fazure%2fvirtual-machines%2flinux%2ftoc.json).
->
->
-
-You cannot explicitly allocate a VM to a specific update domain and fault domain. This allocation is controlled by Azure when VMs are created. However, you can specify that VMs should be created as part of an availability set. VMs in the same availability set will be spread across update domains and fault domains. If you create VMs manually, Azure creates each availability set with two fault domains and five update domains. VMs are allocated to these fault domains and update domains, cycling round as further VMs are provisioned, as follows:
-
-| VM | Fault domain | Update domain |
-| --- | --- | --- |
-| 1 |0 |0 |
-| 2 |1 |1 |
-| 3 |0 |2 |
-| 4 |1 |3 |
-| 5 |0 |4 |
-| 6 |1 |0 |
-| 7 |0 |1 |
-
-> [!IMPORTANT]
-> If you create VMs using the Azure Resource Manager, each availability set can be allocated up to 3 fault domains and 20 update domains. This is a compelling reason for using the Resource Manager.
->
->
-
-In general, place all VMs that serve the same purpose in the same availability set, but create different availability sets for VMs that perform different functions. With Elasticsearch this means that you should consider creating at least separate availability sets for:
-
-* VMs hosting data nodes.
-* VMs hosting client nodes (if you are using them).
-* VMs hosting master nodes.
-
-Additionally, you should ensure that each node in a cluster is aware of the update domain and fault domain it belongs to. This information can help to ensure that Elasticsearch does not create shards and their replicas in the same fault and update domains, minimizing the possibility of a shard and its replicas being taken down at the same time. You can configure an Elasticsearch node to mirror the hardware distribution of the cluster by configuring [shard allocation awareness](https://www.elastic.co/guide/en/elasticsearch/reference/current/allocation-awareness.html#allocation-awareness). For example, you could define a pair of custom node attributes called *faultDomain* and *updateDomain* in the elasticsearch.yml file, as follows:
-
-```yaml
-node.faultDomain: ${FAULTDOMAIN}
-node.updateDomain: ${UPDATEDOMAIN}
-```
-
-In this case, the attributes are set using the values held in the *${FAULTDOMAIN}* and *${UPDATEDOMAIN}* environment variables when Elasticsearch is started. You also need to add the following entries to the elasticsearch.yml file to indicate that *faultDomain* and *updateDomain* are allocation awareness attributes, and specify the sets of acceptable values for these attributes:
-
-```yaml
-cluster.routing.allocation.awareness.force.updateDomain.values: 0,1,2,3,4
-cluster.routing.allocation.awareness.force.faultDomain.values: 0,1
-cluster.routing.allocation.awareness.attributes: updateDomain, faultDomain
-```
-
-You can use shard allocation awareness in conjunction with [shard allocation filtering](https://www.elastic.co/guide/en/elasticsearch/reference/2.0/shard-allocation-filtering.html#shard-allocation-filtering) to specify explicitly which nodes can host shards for any given index.
-
-If you need to scale beyond the number of fault domains and update domains in an availability set, you can create VMs in additional availability sets. However, you need to understand that nodes in different availability sets can be taken down for maintenance simultaneously. Try to ensure that each shard and at least one of its replicas are contained within the same availability set.
-
-> [!NOTE]
-> There is currently a limit of 100 VMs per availability set. For more information, see [Azure subscription and service limits, quotas, and constraints](/azure/azure-subscription-service-limits/).
->
->
-
-### Backup and restore
-Using replicas does not provide complete protection from catastrophic failure (such as accidentally deleting the entire cluster). You should ensure that you back up the data in a cluster regularly, and that you have a tried and tested strategy for restoring the system from these backups.
-
-Use the Elasticsearch [snapshot and restore APIs](https://www.elastic.co/guide/en/elasticsearch/reference/current/modules-snapshots.html) to back up and restore indexes. Snapshots can be saved to a shared filesystem. Alternatively, plugins are available that can write snapshots to the Hadoop distributed file system (HDFS) (the [HDFS plugin](https://github.com/elasticsearch/elasticsearch-hadoop/tree/master/repository-hdfs)) or to Azure storage (the [Azure plugin](https://github.com/elasticsearch/elasticsearch-cloud-azure#azure-repository)).
-
-Consider the following points when selecting the snapshot storage mechanism:
-
-* You can use [Azure File storage](https://azure.microsoft.com/services/storage/files/) to implement a shared filesystem that is accessible from all nodes.
-* Only use the HDFS plugin if you are running Elasticsearch in conjunction with Hadoop.
-* The HDFS plugin requires you to disable the Java Security Manager running inside the Elasticsearch instance of the Java virtual machine (JVM).
-* The HDFS plugin supports any HDFS-compatible file system provided that the correct Hadoop configuration is used with Elasticsearch.
-
-## Handling intermittent connectivity between nodes
-Intermittent network glitches, VM reboots after routine maintenance at the datacenter, and other similar events can cause nodes to become temporarily inaccessible. In these situations, where the event is likely to be short lived, rebalancing the shards twice in quick succession (once when the failure is detected, and again when the node becomes visible to the master) can impose a significant overhead that impacts performance. You can prevent temporary node inaccessibility from causing the master to rebalance the cluster by setting the *delayed\_timeout* property of an index, or for all indexes. The example below sets the delay to 5 minutes:
-
-```http
-PUT /_all/_settings
-{
- "settings": {
- "index.unassigned.node_left.delayed_timeout": "5m"
- }
-}
-```
-
-For more information, see [Delaying allocation when a node leaves](https://www.elastic.co/guide/en/elasticsearch/reference/current/delayed-allocation.html).
-
-In a network that is prone to interruptions, you can also modify the parameters that configure a master to detect when another node is no longer accessible. These parameters are part of the [zen discovery](https://www.elastic.co/guide/en/elasticsearch/reference/current/modules-discovery-zen.html#modules-discovery-zen) module provided with Elasticsearch, and you can set them in the elasticsearch.yml file. For example, the *discovery.zen.fd.ping\_retries* parameter specifies how many times a master node will attempt to ping another node in the cluster before deciding that it has failed. This parameter defaults to 3, but you can modify it as follows:
-
-```yaml
-discovery.zen.fd.ping_retries: 6
-```
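-
-Related fault detection parameters can be tuned in the same file. The values below are illustrative; by default, the master pings other nodes every second and allows 30 seconds for a response:
-
-```yaml
-discovery.zen.fd.ping_interval: 5s
-discovery.zen.fd.ping_timeout: 60s
-discovery.zen.fd.ping_retries: 6
-```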
-
-## Controlling recovery
-When connectivity to a node is restored after a failure, any shards on that node will need to be recovered to bring them up to date. By default, Elasticsearch recovers shards in the following order:
-
-* By reverse index creation date. Newer indexes are recovered before older indexes.
-* By reverse index name. Indexes that have names that are alphanumerically greater than others will be restored first.
-
-If some indexes are more critical than others but do not match these criteria, you can override the precedence of indexes by setting the *index.priority* property. Indexes with a higher value for this property will be recovered before indexes that have a lower value:
-
-```http
-PUT low_priority_index
-{
- "settings": {
- "index.priority": 1
- }
-}
-
-PUT high_priority_index
-{
- "settings": {
- "index.priority": 10
- }
-}
-```
-
-For more information, see [Index Recovery Prioritization](https://www.elastic.co/guide/en/elasticsearch/reference/2.0/recovery-prioritization.html#recovery-prioritization).
-
-You can monitor the recovery process for one or more indexes using the *\_recovery* API:
-
-```http
-GET /high_priority_index/_recovery?pretty=true
-```
-
-For more information, see [Indices Recovery](https://www.elastic.co/guide/en/elasticsearch/reference/current/indices-recovery.html#indices-recovery).
-
-> [!NOTE]
-> A cluster with shards that require recovery will have a status of *yellow* to indicate that not all shards are currently available. When all the shards are available, the cluster status should revert to *green*. A cluster with a status of *red* indicates that one or more shards are physically missing; it may be necessary to restore data from a backup.
->
->
-
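-You can check the current status with the cluster health API:
-
-```http
-GET /_cluster/health?pretty=true
-```
-
-The response includes the overall *status* field together with counts of active, initializing, and unassigned shards.
-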
-## Preventing split brain
-A split brain can occur if the connections between nodes fail. If a master node becomes unreachable to part of the cluster, an election will take place in the network segment that remains contactable, and another node will become the master. In an ill-configured cluster, it is possible for each part of the cluster to have a different master, resulting in data inconsistencies or corruption. This phenomenon is known as a *split brain*.
-
-You can reduce the chances of a split brain by configuring the *minimum\_master\_nodes* property of the discovery module, in the elasticsearch.yml file. This property specifies how many nodes must be available to enable the election of a master. The following example sets the value of this property to 2:
-
-```yaml
-discovery.zen.minimum_master_nodes: 2
-```
-
-This value should be set to a quorum of the nodes that are able to fulfill the master role: (number of master-eligible nodes / 2) + 1, using integer division. For example, if your cluster has 3 master nodes, *minimum\_master\_nodes* should be set to 2. If you have 5 master nodes, *minimum\_master\_nodes* should be set to 3. Ideally, you should have an odd number of master nodes.
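-
-*minimum\_master\_nodes* is also a dynamic setting, so you can adjust it on a running cluster (for example, after adding master-eligible nodes) without editing configuration files and restarting. The following is a sketch using a persistent cluster setting; verify the setting name against your Elasticsearch version:
-
-```http
-PUT /_cluster/settings
-{
-    "persistent": {
-        "discovery.zen.minimum_master_nodes": 3
-    }
-}
-```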
-
-> [!NOTE]
-> It is possible for a split brain to occur if multiple master nodes in the same cluster are started simultaneously. While this occurrence is rare, you can prevent it by starting nodes serially with a short delay (5 seconds) between each one.
->
->
-
-## Handling rolling updates
-If you are performing a software upgrade to nodes yourself (such as migrating to a newer release or performing a patch), you may need to perform work on individual nodes that requires taking them offline while keeping the remainder of the cluster available. In this situation, consider implementing the following process.
-
-1. Ensure that shard reallocation is delayed sufficiently to prevent the elected master from rebalancing shards from a missing node across the remainder of the cluster. By default, shard reallocation is delayed for 1 minute, but you can increase the duration if a node is likely to be unavailable for a longer period. The following example increases the delay to 5 minutes:
-
- ```http
- PUT /_all/_settings
- {
- "settings": {
- "index.unassigned.node_left.delayed_timeout": "5m"
- }
- }
- ```
-
- > [!IMPORTANT]
-   > You can also disable shard reallocation completely by setting the *cluster.routing.allocation.enable* property of the cluster to *none*. However, you should avoid using this approach if new indexes are likely to be created while the node is offline, as this can cause index allocation to fail, resulting in a cluster with red status.
- >
- >
-2. Stop Elasticsearch on the node to be maintained. If Elasticsearch is running as a service, you may be able to halt the process in a controlled manner by using an operating system command. The following example shows how to halt the Elasticsearch service on a single node running on Ubuntu:
-
- ```bash
- service elasticsearch stop
- ```
-
- Alternatively, you can use the Shutdown API directly on the node:
-
- ```http
- POST /_cluster/nodes/_local/_shutdown
- ```
-3. Perform the necessary maintenance on the node.
-4. Restart the node and wait for it to join the cluster.
-5. Re-enable shard allocation:
-
- ```http
- PUT /_cluster/settings
- {
- "transient": {
- "cluster.routing.allocation.enable": "all"
- }
- }
- ```
-
-> [!NOTE]
-> If you need to maintain more than one node, repeat steps 2–4 on each node before re-enabling shard allocation.
->
->
-
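-If you opted to disable shard reallocation completely (as described in the important note in step 1), the corresponding call is shown below. Note that *transient* settings do not survive a full cluster restart; use *persistent* if the setting must outlive a restart:
-
-```http
-PUT /_cluster/settings
-{
-    "transient": {
-        "cluster.routing.allocation.enable": "none"
-    }
-}
-```
-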
-If you can, stop indexing new data during this process. This will help to minimize recovery time when nodes are brought back online and rejoin the cluster.
-
-Beware of automated updates to items such as the JVM (ideally, disable automatic updates for these items), especially when running Elasticsearch on Windows. The Java update agent can download the most recent version of Java automatically, but may require Elasticsearch to be restarted for the update to take effect. Depending on how the Java update agent is configured, this can result in an uncoordinated temporary loss of nodes. It can also result in different instances of Elasticsearch in the same cluster running different versions of the JVM, which may cause compatibility issues.
-
-## Testing and analyzing Elasticsearch resilience and recovery
-This section describes a series of tests that were performed to evaluate the resilience and recovery of an Elasticsearch cluster containing three data nodes and three master nodes.
-
-The following scenarios were tested:
-
-* Node failure and restart with no data loss. A data node is stopped and restarted after 5 minutes. Elasticsearch was configured not to reallocate missing shards in this interval, so no additional I/O is incurred in moving shards around. When the node restarts, the recovery process brings the shards on that node back up to date.
-* Node failure with catastrophic data loss. A data node is stopped and the data that it holds is erased to simulate catastrophic disk failure. The node is then restarted (after 5 minutes), effectively acting as a replacement for the original node. The recovery process requires rebuilding the missing data for this node, and may involve relocating shards held on other nodes.
-* Node failure and restart with no data loss, but with shard reallocation. A data node is stopped and the shards that it holds are reallocated to other nodes. The node is then restarted and more reallocation occurs to rebalance the cluster.
-* Rolling updates. Each node in the cluster is stopped and restarted after a short interval to simulate machines being rebooted after a software update. Only one node is stopped at any one time. Shards are not reallocated while a node is down.
-
-Each scenario was subject to the same workload, including a mixture of data ingestion tasks, aggregations, and filter queries, while nodes were taken offline and recovered. The bulk insert operations in the workload each stored 1000 documents and were performed against one index, while the aggregations and filter queries used a separate index containing several million documents. This enabled the performance of queries to be assessed separately from the bulk inserts. Each index contained five shards and one replica.
-
-The following sections summarize the results of these tests, noting any degradation in performance while a node is offline or being recovered, and any errors that were reported. The results are presented graphically, highlighting the points at which one or more nodes are missing and estimating the time taken for the system to fully recover and achieve a similar level of performance that was present prior to the nodes being taken offline.
-
-> [!NOTE]
-> The test harnesses used to perform these tests are available online. You can adapt and use these harnesses to verify the resilience and recoverability of your own cluster configurations. For more information, see [Running the automated Elasticsearch resiliency tests][Running the automated Elasticsearch resiliency tests].
->
->
-
-## Node failure and restart with no data loss: results
-
-
-The results of this test are shown in the file [ElasticsearchRecoveryScenario1.pdf](https://github.com/mspnp/elasticsearch/blob/master/figures/Elasticsearch/ElasticSearchRecoveryScenario1.pdf). The graphs show the performance profile of the workload and the physical resources for each node in the cluster. The initial part of the graphs shows the system running normally for approximately 20 minutes, at which point node 0 is shut down for 5 minutes before being restarted. The statistics for a further 20 minutes are illustrated; the system takes approximately 10 minutes to recover and stabilize, as shown by the transaction rates and response times for the different workloads.
-
-Note the following points:
-
-* During the test, no errors were reported. No data was lost, and all operations completed successfully.
-* The transaction rates for all three types of operation (bulk insert, aggregate query, and filter query) dropped and the average response times increased while node 0 was offline.
-* During the recovery period, the transaction rates and response times for the aggregate query and filter query operations were gradually restored. The performance for bulk insert recovered for a short while before diminishing. However, this is likely due to the volume of data causing the index used by the bulk insert to grow, and the transaction rates for this operation can be seen to slow down even before node 0 is taken offline.
-* The CPU utilization graph for node 0 shows reduced activity during the recovery phase. This is due to the increased disk and network activity caused by the recovery mechanism; the node has to catch up with any data it missed while it was offline and update the shards that it contains.
-* The shards for the indexes are not distributed exactly equally across all nodes. There are two indexes containing 5 shards and 1 replica each, making a total of 20 shards. One node will therefore hold 6 shards while the other two hold 7 each. This is evident in the CPU utilization graphs during the initial 20-minute period, where node 0 is less busy than the other two. After recovery is complete, some switching seems to occur as node 2 appears to become the more lightly loaded node.
-
-## Node failure with catastrophic data loss: results
-
-
-The results of this test are depicted in the file [ElasticsearchRecoveryScenario2.pdf](https://github.com/mspnp/elasticsearch/blob/master/figures/Elasticsearch/ElasticSearchRecoveryScenario2.pdf). As with the first test, the initial part of the graphs shows the system running normally for approximately 20 minutes, at which point node 0 is shut down for 5 minutes. During this interval, the Elasticsearch data on this node is removed, simulating catastrophic data loss, before being restarted. Full recovery appears to take 12-15 minutes before the levels of performance seen before the test are restored.
-
-Note the following points:
-
-* During the test, no errors were reported. No data was lost, and all operations completed successfully.
-* The transaction rates for all three types of operation (bulk insert, aggregate query, and filter query) dropped and the average response times increased while node 0 was offline. At this point, the performance profile of the test is similar to the first scenario. This is not surprising as, to this point, the scenarios are the same.
-* During the recovery period, the transaction rates and response times were restored, although during this time there was a lot more volatility in the figures. This is most probably due to the additional work that the nodes in the cluster are performing, providing the data to restore the missing shards. This additional work is evident in the CPU utilization, disk activity, and network activity graphs.
-* The CPU utilization graphs for nodes 0 and 1 show reduced activity during the recovery phase. This is due to the increased disk and network activity caused by the recovery process. In the first scenario, only the node being recovered exhibited this behavior, but in this scenario it seems likely that most of the missing data for node 0 is being restored from node 1.
-* The I/O activity for node 0 is actually reduced compared to the first scenario. This could be due to the I/O efficiencies of simply copying the data for an entire shard rather than the series of smaller I/O requests required to bring an existing shard up to date.
-* The network activity for all three nodes indicates bursts of activity as data is transmitted and received between nodes. In scenario 1, only node 0 exhibited this much network activity, but that activity seemed to be sustained for a longer period. Again, this difference could be due to the efficiencies of transmitting the entire data for a shard as a single request rather than the series of smaller requests required when recovering an existing shard.
-
-## Node failure and restart with shard reallocation: results
-
-
-The file [ElasticsearchRecoveryScenario3.pdf](https://github.com/mspnp/elasticsearch/blob/master/figures/Elasticsearch/ElasticSearchRecoveryScenario3.pdf) illustrates the results of this test. As with the first test, the initial part of the graphs shows the system running normally for approximately 20 minutes, at which point node 0 is shut down for 5 minutes. At this point, the Elasticsearch cluster attempts to recreate the missing shards and rebalance the shards across the remaining nodes. After 5 minutes node 0 is brought back online, and once again the cluster has to rebalance the shards. Performance is restored after 12-15 minutes.
-
-Note the following points:
-
-* During the test, no errors were reported. No data was lost, and all operations completed successfully.
-* The transaction rates for all three types of operation (bulk insert, aggregate query, and filter query) dropped and the average response times increased significantly while node 0 was offline compared to the previous two tests. This is due to the increased cluster activity recreating the missing shards and rebalancing the cluster as evidenced by the raised figures for disk and network activity for nodes 1 and 2 in this period.
-* During the period after node 0 is brought back online, the transaction rates and response times remain volatile.
-* The CPU utilization and disk activity graphs for node 0 show very little initial activity during the recovery phase. This is because, at this point, node 0 is not serving any data. After a period of approximately 5 minutes, the node bursts into action, as shown by the sudden increase in network, disk, and CPU activity. This is most likely caused by the cluster redistributing shards across nodes. Node 0 then shows normal activity.
-
-## Rolling updates: results
-
-
-The results of this test, in the file [ElasticsearchRecoveryScenario4.pdf](https://github.com/mspnp/elasticsearch/blob/master/figures/Elasticsearch/ElasticSearchRecoveryScenario4.pdf), show how each node is taken offline and then brought back up again in succession. Each node is shut down for 5 minutes before being restarted at which point the next node in sequence is stopped.
-
-Note the following points:
-
-* While each node is cycled, the performance in terms of throughput and response times remains reasonably even.
-* Disk activity increases for each node for a short time as it is brought back online. This is most probably due to the recovery process rolling forward any changes that have occurred while the node was down.
-* When a node is taken offline, spikes in network activity occur in the remaining nodes. Spikes also occur when a node is restarted.
-* After the final node is recycled, the system enters a period of significant volatility. This is most likely caused by the recovery process having to synchronize changes across every node and ensure that all replicas and their corresponding shards are consistent. At one point, this effort causes successive bulk insert operations to time out and fail. The errors reported in each case were:
-
-```
-Failure -- BulkDataInsertTest17(org.apache.jmeter.protocol.java.sampler.JUnitSampler$AnnotatedTestCase): java.lang.AssertionError: failure in bulk execution:
-[1]: index [systwo], type [logs], id [AVEg0JwjRKxX_sVoNrte], message [UnavailableShardsException[[systwo][2] Primary shard is not active or isn't assigned to a known node. Timeout: [1m], request: org.elasticsearch.action.bulk.BulkShardRequest@787cc3cd]]
-```
-
-Subsequent experimentation showed that introducing a delay of a few minutes between cycling each node eliminated this error, so it was most likely caused by contention between the recovery process attempting to restore several nodes simultaneously and the bulk insert operations trying to store thousands of new documents.
-
-## Summary
-The tests performed indicated that:
-
-* Elasticsearch was highly resilient to the most common modes of failure likely to occur in a cluster.
-* Elasticsearch can recover quickly if a well-designed cluster is subject to catastrophic data loss on a node. This can happen if you configure Elasticsearch to save data to ephemeral storage and the node is subsequently reprovisioned after a restart. These results show that even in this case, the risks of using ephemeral storage are most likely outweighed by the performance benefits that this class of storage provides.
-* In the first three scenarios, no errors occurred in concurrent bulk insert, aggregation, and filter query workloads while a node was taken offline and recovered.
-* Only the last scenario indicated potential data loss, and this loss only affected new data being added. It is good practice in applications performing data ingestion to mitigate this likelihood by retrying insert operations that have failed, as the type of error reported is highly likely to be transient.
-* The results of the last test also show that if you are performing planned maintenance of the nodes in a cluster, performance will benefit if you allow several minutes between cycling one node and the next. In an unplanned situation (such as the datacenter recycling nodes after performing an operating system update), you have less control over how and when nodes are taken down and restarted. The contention that arises when Elasticsearch attempts to recover the state of the cluster after sequential node outages can result in timeouts and errors.
-
-[Manage the Availability of Virtual Machines]: /azure/articles/virtual-machines/virtual-machines-linux-manage-availability/
-[Running the Automated Elasticsearch Resiliency Tests]: automated-resilience-tests.md
diff --git a/docs/elasticsearch/toc.yml b/docs/elasticsearch/toc.yml
deleted file mode 100644
index 8e05c0e82f8..00000000000
--- a/docs/elasticsearch/toc.yml
+++ /dev/null
@@ -1,23 +0,0 @@
-- name: Azure Architecture Center
- href: /azure/architecture
-
-- name: Elasticsearch on Azure
- items:
- - name: Get started
- href: index.md
- - name: Tune data ingestion performance
- href: data-ingestion-performance.md
- - name: Tune data aggregation and query performance
- href: data-aggregation-and-query-performance.md
- - name: Configure resilience and recovery
- href: resilience-and-recovery.md
- - name: Create a performance testing environment
- href: performance-testing-environment.md
- - name: Implement a JMeter test plan
- href: jmeter-test-plan.md
- - name: Deploy a JMeter JUnit sampler for testing
- href: jmeter-junit-sampler.md
- - name: Run the resiliency tests
- href: automated-resilience-tests.md
- - name: Run the performance tests
- href: automated-performance-tests.md
\ No newline at end of file
diff --git a/docs/toc.yml b/docs/toc.yml
index 05919e1f83a..8941328804b 100644
--- a/docs/toc.yml
+++ b/docs/toc.yml
@@ -144,8 +144,6 @@
href: service-fabric/migrate-from-cloud-services.md
- name: Refactor an Azure Service Fabric Application migrated from Azure Cloud Services
href: service-fabric/refactor-migrated-app.md
- - name: Run Elasticsearch on Azure
- href: elasticsearch/index.md
- name: Extend Azure Resource Manager Template Functionality
href: building-blocks/extending-templates/index.md