:8080
+```
+
+```html
+<!DOCTYPE html>
+<html>
+<head>
+<title>Welcome to nginx!</title>
+</head>
+<body>
+<h1>Welcome to nginx!</h1>
+<p>If you see this page, the nginx web server is successfully installed and
+working. Further configuration is required.</p>
+
+<p>For online documentation and support please refer to
+<a href="http://nginx.org/">nginx.org</a>.<br/>
+Commercial support is available at
+<a href="http://nginx.com/">nginx.com</a>.</p>
+
+<p><em>Thank you for using nginx.</em></p>
+</body>
+</html>
+```
+
+## Next steps
+
+The F5 BIG-IP AS3 service discovery integration with Consul queries Consul's
+catalog on a regular, configurable basis to get updates about changes for a
+given service, and adjusts the node pools dynamically without operator
+intervention.
+
+In this tutorial you configured an F5 BIG-IP instance to natively integrate with
+Consul for service discovery. You were able to monitor dynamic node registration
+for a web server pool member and test it with a virtual server.
+
+As a follow-up, you can add or remove web server nodes registered with Consul
+and validate that the network map on the F5 BIG-IP updates automatically.
diff --git a/website/content/docs/discover/load-balancer/index.mdx b/website/content/docs/discover/load-balancer/index.mdx
new file mode 100644
index 00000000000..d2825da5672
--- /dev/null
+++ b/website/content/docs/discover/load-balancer/index.mdx
@@ -0,0 +1,10 @@
+---
+layout: docs
+page_title: Application load balancing
+description: >-
+ Learn how to use standard Consul DNS addresses to load balance requests between services to healthy instances.
+---
+
+# Application load balancing
+
+Editor's Note: This empty page represents a known content gap between our existing documentation and the refreshed documentation.
diff --git a/website/content/docs/discover/load-balancer/nginx.mdx b/website/content/docs/discover/load-balancer/nginx.mdx
new file mode 100644
index 00000000000..cfcbc0a4b80
--- /dev/null
+++ b/website/content/docs/discover/load-balancer/nginx.mdx
@@ -0,0 +1,416 @@
+---
+layout: docs
+page_title: Load Balancing with NGINX and Consul Template
+description: >-
+ Use Consul template to update NGINX load balancer configurations based on changes in Consul service discovery.
+---
+
+# Load Balancing with NGINX and Consul Template
+
+This tutorial describes how to use Consul and Consul template to automatically
+update an NGINX configuration file with the latest list of backend servers using
+Consul's service discovery.
+
+While following this tutorial you will:
+
+- Register an example service with Consul
+- Configure Consul template
+- Create an NGINX load balancer configuration template
+- Run Consul template
+- Scale your servers
+- Test Consul's health checks
+
+Once you have completed this tutorial, you will end up with an architecture like
+the one diagrammed below. NGINX will get a list of healthy servers from Consul
+service discovery via Consul template and will balance internet traffic to those
+servers according to its own configuration.
+
+
+
+## Prerequisites
+
+To complete this tutorial you will need:
+
+- A Consul cluster. We recommend three Consul server nodes
+
+- A minimum of two application servers registered to Consul service discovery
+ with a Consul client agent running on the node (We assume a standard web
+ server listening on HTTP port 80 in the following examples)
+
+- A node running NGINX
+
+- A Consul client agent on the NGINX node
+
+- [Consul-template](https://github.com/hashicorp/consul-template#installation)
+ on the NGINX node to keep the NGINX configuration file updated
+
+
+
+ The content of this tutorial also applies to Consul clusters hosted on the HashiCorp Cloud Platform (HCP).
+
+
+
+## Register your web servers to Consul
+
+If you haven't already registered your web servers in the Consul Service
+Registry, create a service definition for your web service in Consul's
+configuration directory `/etc/consul.d/`.
+
+Create a service registration file for the `web` service with the following content.
+
+
+
+
+
+```hcl
+service {
+ name = "web"
+ port = 80
+ check {
+ args = ["curl", "localhost"]
+ interval = "3s"
+ }
+}
+```
+
+
+
+
+
+```json
+{
+ "service": {
+    "name": "web",
+    "port": 80,
+ "check": {
+ "args": ["curl", "localhost"],
+ "interval": "3s"
+ }
+ }
+}
+```
+
+
+
+
+
+-> Since this service definition contains a basic "curl" health check for a web
+server, `enable_local_script_checks` must be set to `true` in the configuration
+of the Consul agent where the web server is running.
+
+Reload the local Consul agent to read the new service definition.
+
+```shell-session
+$ consul reload
+```
+
+After registering the service, it will appear in Consul's Service Registry.
+
+
+
+After repeating the registration step for all your web server instances, all
+instances will appear in the instances view of the "web" service.
+
+## Configure Consul template
+
+A Consul template configuration file specifies which input template to use,
+which output file to generate, and which command to run to reload the target
+application (NGINX in this tutorial) after rendering its new configuration.
+
+Create a configuration file called `consul-template-config.hcl` with the
+following content.
+
+
+
+```hcl
+consul {
+ address = "localhost:8500"
+
+ retry {
+ enabled = true
+ attempts = 12
+ backoff = "250ms"
+ }
+}
+template {
+ source = "/etc/nginx/conf.d/load-balancer.conf.ctmpl"
+ destination = "/etc/nginx/conf.d/load-balancer.conf"
+ perms = 0600
+ command = "service nginx reload"
+}
+```
+
+
+
+The `consul` stanza tells Consul template where to find the Consul API. It
+points to localhost because a Consul client agent runs on the same node as the
+NGINX instance.
+
+The `template` stanza tells Consul template:
+
+- Where the `source` (input) template file will be located, in this case
+ `/etc/nginx/conf.d/load-balancer.conf.ctmpl`
+
+- Where the destination (output) file should be located, in this case
+ `/etc/nginx/conf.d/load-balancer.conf`. (This is a default path that NGINX uses
+ to read its configuration. You will either use it or `/usr/local/nginx/conf/`
+ depending on your NGINX distribution.)
+
+- Which permissions (`perms`) the destination file needs
+
+- Which command to run after rendering the destination file. In this case,
+ `service nginx reload` will trigger NGINX to reload its configuration
+
+For all available configuration options for Consul template, please see the
+[GitHub repo](https://github.com/hashicorp/consul-template).
+
+## Create an input template
+
+Now you will create the basic NGINX Load Balancer configuration template which
+Consul template will use to render the final `load-balancer.conf` for your NGINX
+load balancer instance.
+
+Create a template file called `load-balancer.conf.ctmpl` in the location you
+specified as a `source` (in this example, `/etc/nginx/conf.d/`) with the
+following content:
+
+
+
+```go
+upstream backend {
+{{- range service "web" }}
+ server {{ .Address }}:{{ .Port }};
+{{- end }}
+}
+
+server {
+ listen 80;
+
+ location / {
+ proxy_pass http://backend;
+ }
+}
+```
+
+
+
+Instead of putting your backend server IP addresses directly in the load
+balancer configuration file, you use Consul template's templating language to
+define variables in this file. Consul template automatically fetches the final
+values from Consul's Service Registry and renders the final load balancer
+configuration file.
+
+Specifically, the following snippet from the template file tells Consul
+template to query Consul's Service Registry for all healthy nodes of the "web"
+service in the current datacenter, and put the IP addresses and service ports
+of those endpoints in the generated output configuration file.
+
+```go
+{{- range service "web" }}
+ server {{ .Address }}:{{ .Port }};
+{{- end }}
+```
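+
For illustration only, the rendering step this loop performs can be sketched in Python. This is hypothetical code, not part of the tutorial; the `Address` and `Port` fields mirror what Consul's catalog reports for each healthy instance.

```python
# Hypothetical sketch of what Consul template renders for the upstream
# block: one "server <address>:<port>;" line per healthy "web" instance.

def render_upstream(instances):
    lines = ["upstream backend {"]
    for inst in instances:
        lines.append(f"    server {inst['Address']}:{inst['Port']};")
    lines.append("}")
    return "\n".join(lines)

# Example instances as Consul might report them (made-up addresses)
healthy = [
    {"Address": "192.168.43.101", "Port": 80},
    {"Address": "192.168.43.102", "Port": 80},
]
print(render_upstream(healthy))
```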
+
+For all available options on how to build template files for use with
+Consul template, please see the [GitHub
+repo](https://github.com/hashicorp/consul-template).
+
+### Clean up NGINX default sites config
+
+To make sure your NGINX instance acts as a load balancer and not as a web
+server, delete the following file if it exists.
+
+```text
+/etc/nginx/sites-enabled/default
+```
+
+Then reload the NGINX service.
+
+```shell-session
+$ service nginx reload
+```
+
+Now your NGINX load balancer should not have any configuration and you should
+not see a web page when browsing to your NGINX IP address.
+
+## Run Consul template
+
+Start Consul template with the configuration file you created earlier.
+
+```shell-session
+$ consul-template -config=consul-template-config.hcl
+```
+
+This will start Consul template running in the foreground until you stop the
+process. It will automatically connect to Consul's API, render the NGINX
+configuration for you and reload the NGINX service.
+
+Your NGINX load balancer should now serve traffic and perform simple round-robin
+load balancing among all of your registered and healthy "web" server
+instances.
+
+The resulting load balancer configuration located at
+`/etc/nginx/conf.d/load-balancer.conf` should look like this:
+
+
+
+```nginx
+upstream backend {
+ server 192.168.43.101:80;
+ server 192.168.43.102:80;
+}
+
+server {
+ listen 80;
+
+ location / {
+ proxy_pass http://backend;
+ }
+}
+```
+
+
+
+Notice that Consul template filled the variables from the template file with
+actual IP addresses and ports of your "web" servers.
+
+## Verify your implementation
+
+Now that everything is set up and running, test out your implementation by
+watching what happens when you scale or stop your services. In both cases,
+Consul template should keep NGINX's configuration up to date.
+
+### Scale your backend services
+
+Consul template uses a long-lived HTTP query (blocking query) against Consul's
+API and will get an immediate notification about updates to the requested
+service "web".
+
+As soon as you scale your "web" service and the new instances register
+themselves in the Consul Service Registry, Consul template detects the change
+and regenerates the configuration file.
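+
The long-polling pattern described above can be sketched as follows. The `/v1/health/service/<name>` endpoint and its `passing`, `index`, and `wait` parameters are real Consul API features, but this helper is a hypothetical Python illustration, not part of Consul template.

```python
# Hypothetical sketch of how a blocking query URL is formed. Consul holds
# the request open (up to `wait`) until the service's X-Consul-Index
# advances past `index`, i.e. until something about "web" changes.

def blocking_query_url(base, service, index, wait="5m"):
    # `passing` restricts results to instances whose health checks pass
    return f"{base}/v1/health/service/{service}?passing&index={index}&wait={wait}"

print(blocking_query_url("http://localhost:8500", "web", 42))
# -> http://localhost:8500/v1/health/service/web?passing&index=42&wait=5m
```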
+
+After scaling your backend server from two to three instances, the resulting
+load balancer configuration for your NGINX instance located at
+`/etc/nginx/conf.d/load-balancer.conf` should look like this:
+
+
+
+```nginx
+upstream backend {
+ server 192.168.43.101:80;
+ server 192.168.43.102:80;
+ server 192.168.43.103:80;
+}
+
+server {
+ listen 80;
+
+ location / {
+ proxy_pass http://backend;
+ }
+}
+```
+
+
+
+### Cause an error in a web service instance
+
+Not only will Consul template update your backend configuration automatically
+depending on available service endpoints, but it will also only use healthy
+endpoints when rendering the final configuration.
+
+You configured Consul to perform a basic curl-based health check in your service
+definition, so Consul will notice if a "web" server instance is in an unhealthy
+state.
+
+To simulate an error and see how Consul health checks are working, stop one
+instance of the web process.
+
+```shell-session
+$ service nginx stop
+```
+
+You will see the state of this service instance as "Unhealthy" in the Consul UI
+because no service on this node is responding to requests on HTTP port 80.
+
+
+
+Because of its blocking query against Consul's API, Consul template will be
+notified immediately that a change in the health of one of the service endpoints
+occurred and will render a new load balancer configuration file excluding the
+unhealthy service instance:
+
+
+
+```nginx
+upstream backend {
+ server 192.168.43.101:80;
+ server 192.168.43.103:80;
+}
+
+server {
+ listen 80;
+
+ location / {
+ proxy_pass http://backend;
+ }
+}
+```
+
+
+
+Your NGINX instance will now only balance traffic between the remaining healthy
+service endpoints.
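+
The filtering behavior can be sketched like this. The entry shape (`Service` and `Checks` keys with a `Status` per check) mirrors Consul's health API, but the function and sample data are hypothetical illustrations.

```python
# Hypothetical sketch of the health filtering applied when rendering:
# an instance is kept only if every one of its checks is "passing".

def healthy_only(entries):
    return [e["Service"] for e in entries
            if all(c["Status"] == "passing" for c in e["Checks"])]

# Made-up health API entries: the second node's web check is critical
entries = [
    {"Service": {"Address": "192.168.43.101", "Port": 80},
     "Checks": [{"Status": "passing"}]},
    {"Service": {"Address": "192.168.43.102", "Port": 80},
     "Checks": [{"Status": "passing"}, {"Status": "critical"}]},
]
print(healthy_only(entries))
# -> [{'Address': '192.168.43.101', 'Port': 80}]
```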
+
+As soon as you restart your stopped "web" server instance and the Consul
+health check marks the service endpoint as "Healthy" again, the process
+automatically starts over and the instance is included in the load balancer
+backend configuration to serve traffic:
+
+
+
+```nginx
+upstream backend {
+ server 192.168.43.101:80;
+ server 192.168.43.102:80;
+ server 192.168.43.103:80;
+}
+
+server {
+ listen 80;
+
+ location / {
+ proxy_pass http://backend;
+ }
+}
+```
+
+
+
+-> Consul health checks can be much more sophisticated. They can check CPU or
+RAM utilization or other service metrics, which you are not able to monitor from
+a central load balancing instance. You can learn more about Consul's health
+check feature [here](/consul/tutorials/developer-discovery/service-registration-health-checks).
+
+## Next steps
+
+In this tutorial you discovered how Consul template can generate the configuration
+for your NGINX load balancer based on available and healthy service endpoints
+registered in Consul's Service Registry. You learned how to scale up and down
+services without manually reconfiguring your NGINX load balancer every time a
+new service endpoint was started or deleted.
+
+You learned how Consul template uses blocking queries against Consul's HTTP API
+to get immediate notifications about service changes and re-renders the required
+configuration files automatically for you.
+
+This tutorial described how to use Consul template to configure NGINX, but a
+similar process applies to other load balancers as well. To use another
+load balancer, you will need to replace the NGINX input template, output file,
+and reload command, as well as any NGINX CLI commands.
+
+Learn more about Consul [service registration and
+discovery](/consul/tutorials/get-started-vms/virtual-machine-gs-service-discovery) and
+[Consul
+template](/consul/tutorials/developer-configuration/consul-template).
\ No newline at end of file
diff --git a/website/content/docs/services/discovery/dns-dynamic-lookups.mdx b/website/content/docs/discover/service/dynamic.mdx
similarity index 50%
rename from website/content/docs/services/discovery/dns-dynamic-lookups.mdx
rename to website/content/docs/discover/service/dynamic.mdx
index b56d3ce7074..fc7a0e59ea6 100644
--- a/website/content/docs/services/discovery/dns-dynamic-lookups.mdx
+++ b/website/content/docs/discover/service/dynamic.mdx
@@ -1,17 +1,17 @@
---
layout: docs
-page_title: Enable dynamic DNS queries
-description: ->
- Learn how to dynamically query the Consul DNS using prepared queries, which enable robust service and node lookups.
+page_title: Perform dynamic DNS service lookups with prepared queries
+description: >-
+ Learn how to dynamically query the Consul DNS using prepared queries, which enable robust service and node lookups.
---
-# Enable dynamic DNS queries
+# Perform dynamic service lookups with prepared queries
-This topic describes how to dynamically query the Consul catalog using prepared queries. Prepared queries are configurations that enable you to register a complex service query and execute it on demand. For information about how to perform standard node and service lookups, refer to [Perform Static DNS Queries](/consul/docs/services/discovery/dns-static-lookups).
+This topic describes how to dynamically query the Consul catalog using prepared queries. Prepared queries are configurations that let you register a complex service query and execute it on demand. For information about how to perform standard node and service lookups, refer to [Perform static DNS queries](/consul/docs/discover/dns/static).
## Introduction
-Prepared queries provide a rich set of lookup features, such as filtering by multiple tags and automatically failing over to look for services in remote datacenters if no healthy nodes are available in the local datacenter. You can also create prepared query templates that match names using a prefix match, allowing a single template to apply to potentially many services. Refer to [Query Consul Nodes and Services Overview](/consul/docs/services/discovery/dns-overview) for additional information about DNS query behaviors.
+Prepared queries provide a rich set of lookup features, such as filtering by multiple tags and automatically failing over to look for services in remote datacenters if no healthy nodes are available in the local datacenter. You can also create prepared query templates that match names using a prefix match, allowing a single template to apply to potentially many services. Refer to [Consul DNS overview](/consul/docs/discover/dns) for additional information about DNS query behaviors.
## Requirements
@@ -21,9 +21,9 @@ Consul 0.6.4 or later is required to create prepared query templates.
If ACLs are enabled, the querying service must present a token linked to permissions that enable read access for query, service, and node resources. Refer to the following documentation for information about creating policies to enable read access to the necessary resources:
-- [Prepared query rules](/consul/docs/security/acl/acl-rules#prepared-query-rules)
-- [Service rules](/consul/docs/security/acl/acl-rules#service-rules)
-- [Node rules](/consul/docs/security/acl/acl-rules#node-rules)
+- [Prepared query rules](/consul/docs/secure-consul/acl/rules#prepared-query-rules)
+- [Service rules](/consul/docs/secure-consul/acl/rules#service-rules)
+- [Node rules](/consul/docs/secure-consul/acl/rules#node-rules)
## Create prepared queries
@@ -31,53 +31,53 @@ Refer to the [prepared query reference](/consul/api-docs/query#create-prepared-q
1. Specify the prepared query options in JSON format. The following prepared query targets all instances of the `redis` service in `dc1` and `dc2`:
-
-
- ```json
- {
- "Name": "my-query",
- "Session": "adf4238a-882b-9ddc-4a9d-5b6758e4159e",
- "Token": "",
- "Service": {
- "Service": "redis",
- "Failover": {
- "NearestN": 3,
- "Datacenters": ["dc1", "dc2"]
- },
- "Near": "node1",
- "OnlyPassing": false,
- "Tags": ["primary", "!experimental"],
- "NodeMeta": {
- "instance_type": "m3.large"
- },
- "ServiceMeta": {
- "environment": "production"
- }
+
+
+ ```json
+ {
+ "Name": "my-query",
+ "Session": "adf4238a-882b-9ddc-4a9d-5b6758e4159e",
+ "Token": "",
+ "Service": {
+ "Service": "redis",
+ "Failover": {
+ "NearestN": 3,
+ "Datacenters": ["dc1", "dc2"]
},
- "DNS": {
- "TTL": "10s"
+ "Near": "node1",
+ "OnlyPassing": false,
+ "Tags": ["primary", "!experimental"],
+ "NodeMeta": {
+ "instance_type": "m3.large"
+ },
+ "ServiceMeta": {
+ "environment": "production"
}
- }
- ```
+ },
+ "DNS": {
+ "TTL": "10s"
+ }
+ }
+ ```
-
+
- Refer to the [prepared query configuration reference](/consul/api-docs/query#create-prepared-query) for information about all available options.
+ Refer to the [prepared query configuration reference](/consul/api-docs/query#create-prepared-query) for information about all available options.
1. Send the query in a POST request to the [`/query` API endpoint](/consul/api-docs/query). If the request is successful, Consul prints an ID for the prepared query.
- In the following example, the prepared query configuration is stored in the `payload.json` file:
+ In the following example, the prepared query configuration is stored in the `payload.json` file:
- ```shell-session
- $ curl --request POST --data @payload.json http://127.0.0.1:8500/v1/query
- {"ID":"014af5ff-29e6-e972-dcf8-6ee602137127"}%
- ```
+ ```shell-session
+ $ curl --request POST --data @payload.json http://127.0.0.1:8500/v1/query
+ {"ID":"014af5ff-29e6-e972-dcf8-6ee602137127"}%
+ ```
1. To run the query, send a GET request to the endpoint and specify the ID returned from the POST call.
- ```shell-session
- $ curl http://127.0.0.1:8500/v1/query/14af5ff-29e6-e972-dcf8-6ee602137127/execute\?near\=_agent
- ```
+ ```shell-session
+    $ curl http://127.0.0.1:8500/v1/query/014af5ff-29e6-e972-dcf8-6ee602137127/execute\?near\=_agent
+ ```
## Execute prepared queries
@@ -91,7 +91,7 @@ Use the following format to execute a prepared query using the standard lookup f
<query name or id>.query[.<datacenter>].<domain>
```
-Refer [Standard lookups](/consul/docs/services/discovery/dns-static-lookups#standard-lookups) for additional information about the standard lookup format in Consul.
+Refer to [Standard lookups](/consul/docs/discover/dns/static-lookups#standard-lookups) for additional information about the standard lookup format in Consul.
### RFC 2782 SRV lookup
@@ -101,7 +101,7 @@ Use the following format to execute a prepared query using the RFC 2782 lookup f
_<query name or id>._tcp.query[.<datacenter>].<domain>
```
-For additional information about following the RFC 2782 SRV lookup format in Consul, refer to [RFC 2782 Lookup](/consul/docs/services/discovery/dns-static-lookups#rfc-2782-lookup). For general information about the RFC 2782 specification, refer to [A DNS RR for specifying the location of services \(DNS SRV\)](https://tools.ietf.org/html/rfc2782).
+For additional information about following the RFC 2782 SRV lookup format in Consul, refer to [RFC 2782 Lookup](/consul/docs/discover/dns/static-lookup#rfc-2782-lookup). For general information about the RFC 2782 specification, refer to [A DNS RR for specifying the location of services \(DNS SRV\)](https://tools.ietf.org/html/rfc2782).
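+
Taken together, the two lookup formats above can be made concrete with a small helper. This is hypothetical Python shown only to illustrate the name construction; it is not part of Consul.

```python
# Hypothetical helper building the DNS names a prepared query answers to.
# Standard form:  <query>.query[.<datacenter>].<domain>
# RFC 2782 form:  _<query>._tcp.query[.<datacenter>].<domain>

def prepared_query_names(query, datacenter=None, domain="consul"):
    dc = f"{datacenter}." if datacenter else ""
    return (f"{query}.query.{dc}{domain}",
            f"_{query}._tcp.query.{dc}{domain}")

print(prepared_query_names("my-query"))
# -> ('my-query.query.consul', '_my-query._tcp.query.consul')
```
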
### Lookup options
@@ -109,6 +109,6 @@ The `datacenter` subdomain is optional. By default, the lookup queries the datac
The `query name` or `id` subdomain is the name or ID of an existing prepared query.
-## Results
+## Query results
To allow for simple load balancing, Consul returns the set of nodes in random order for each query. Prepared queries support A and SRV records. SRV records provide the port on which a service is registered. Consul only serves SRV records if the client specifically requests them.
diff --git a/website/content/docs/services/discovery/dns-static-lookups.mdx b/website/content/docs/discover/service/static.mdx
similarity index 79%
rename from website/content/docs/services/discovery/dns-static-lookups.mdx
rename to website/content/docs/discover/service/static.mdx
index 74807c756ad..61bb197330d 100644
--- a/website/content/docs/services/discovery/dns-static-lookups.mdx
+++ b/website/content/docs/discover/service/static.mdx
@@ -1,25 +1,28 @@
---
layout: docs
page_title: Perform static DNS queries
-description: ->
- Learn how to use standard Consul DNS lookup formats to enable service discovery for services and nodes.
+description: >-
+ Learn how to use standard Consul DNS lookup formats to enable service discovery for services and nodes.
---
# Perform static DNS queries
-This topic describes how to query the Consul DNS to look up nodes and services registered with Consul. Refer to [Enable Dynamic DNS Queries](/consul/docs/services/discovery/dns-dynamic-lookups) for information about using prepared queries.
+
+This topic describes how to query the Consul DNS to look up nodes and services registered with Consul. Refer to [Perform dynamic DNS queries](/consul/docs/discover/dns/dynamic-lookup) for information about using prepared queries.
## Introduction
-Node lookups and service lookups are the fundamental types of queries you can perform using the Consul DNS. Node lookups interrogate the catalog for named Consul agents. Service lookups interrogate the catalog for services registered with Consul. Refer to [DNS Usage Overview](/consul/docs/services/discovery/dns-overview) for additional background information.
+
+Node lookups and service lookups are the fundamental types of queries you can perform using the Consul DNS. Node lookups query the catalog for named Consul agents. Service lookups query the catalog for services registered with Consul. Refer to [DNS Usage Overview](/consul/docs/discover/dns) for additional background information.
## Requirements
+
All versions of Consul support DNS lookup features.
### ACLs
-If ACLs are enabled, you must present a token linked with the necessary policies. We recommend using a separate token in production deployments for querying the DNS. By default, Consul agents resolve DNS requests using the preconfigured tokens in order of precedence:
-The agent's [`default` token](/consul/docs/agent/config/config-files#acl_tokens_default)
-The built-in [`anonymous` token](/consul/docs/security/acl/tokens#built-in-tokens).
+If ACLs are enabled, you must present a token linked with the necessary policies. We recommend using a separate token in production deployments for querying the DNS. By default, Consul agents resolve DNS requests using the preconfigured tokens in order of precedence:
+1. The agent's [`default` token](/consul/docs/reference/agent#acl_tokens_default)
+1. The built-in [`anonymous` token](/consul/docs/secure-consul/acl/token#built-in-tokens).
The following table describes the available DNS lookups and required policies when ACLs are enabled:
@@ -28,9 +31,8 @@ The following table describes the available DNS lookups and required policies wh
| `*.node.consul` | Node | Allows Consul to resolve DNS requests for the target node. Example: `<node>.node.consul` | `node:read` |
| `*.service.consul`<br/>`*.connect.consul`<br/>`*.ingress.consul`<br/>`*.virtual.consul` | Service: standard | Allows Consul to resolve DNS requests for target service instances running on ACL-authorized nodes. Example: `<service>.service.consul` | `service:read`<br/>`node:read` |
-> **Tutorials**: For hands-on guidance on how to configure an appropriate token for DNS, refer to the tutorial for [Production Environments](/consul/tutorials/security/access-control-setup-production#token-for-dns) and [Development Environments](/consul/tutorials/day-0/access-control-setup#enable-acls-on-consul-clients).
-
## Node lookups
+
Specify the name of the node, datacenter, and domain using the following FQDN syntax:
```text
@@ -39,7 +41,7 @@ Specify the name of the node, datacenter, and domain using the following FQDN sy
The `datacenter` subdomain is optional. By default, the lookup queries the datacenter of the agent.
-By default, the domain is `consul`. Refer to [Configure DNS Behaviors](/consul/docs/services/discovery/dns-configuration) for information about using alternate domains.
+By default, the domain is `consul`. Refer to [Configure Consul DNS behavior](/consul/docs/discover/dns/configure) for information about using alternate domains.
### Node lookup results
@@ -51,7 +53,6 @@ The following example lookup queries the `foo` node in the `default` datacenter:
```shell-session
$ dig @127.0.0.1 -p 8600 foo.node.consul ANY
-
; <<>> DiG 9.8.3-P1 <<>> @127.0.0.1 -p 8600 foo.node.consul ANY
; (1 server found)
;; global options: +cmd
@@ -68,54 +69,54 @@ foo.node.consul. 0 IN A 10.1.10.12
foo.node.consul. 0 IN TXT "meta_key=meta_value"
foo.node.consul. 0 IN TXT "value only"
-
;; AUTHORITY SECTION:
consul. 0 IN SOA ns.consul. postmaster.consul. 1392836399 3600 600 86400 0
```
### Node lookups for Consul Enterprise
-Consul Enterprise includes the admin partition concept, which is an abstraction that lets you define isolated administrative network areas. Refer to [Admin Partitions](/consul/docs/enterprise/admin-partitions) for additional information.
+Consul Enterprise includes the admin partition concept, which is an abstraction that lets you define isolated administrative network areas. Refer to [Admin partitions](/consul/docs/enterprise/admin-partitions) for additional information.
Consul nodes reside in admin partitions within a datacenter. By default, node lookups query the same partition and datacenter of the Consul agent that received the DNS query.
Use the following query format to specify a partition for a node lookup:
-```
+```text
<node>.node[.<partition>.ap][.<datacenter>.dc].<domain>
```
Consul server agents are in the `default` partition. If you send a DNS query to Consul server agents, you must explicitly specify the partition of the target node if it is not `default`.
## Service lookups
+
You can query the network for service providers using either the [standard lookup](#standard-lookup) method or [strict RFC 2782 lookup](#rfc-2782-lookup) method.
-By default, all SRV records are weighted equally in service lookup responses, but you can configure the weights using the [`Weights`](/consul/docs/services/configuration/services-configuration-reference#weights) attribute of the service definition. Refer to [Define Services](/consul/docs/services/usage/define-services) for additional information.
+By default, all SRV records are weighted equally in service lookup responses, but you can configure the weights using the [`Weights`](/consul/docs/reference/service#weights) attribute of the service definition. Refer to [Define Services](/consul/docs/register/service/vm/define) for additional information.
The DNS protocol limits the size of requests, even when performing DNS TCP queries, which may affect your experience querying for services. For services with more than 500 instances, you may not be able to retrieve the complete list of instances for the service. Refer to [RFC 1035, Domain Names - Implementation and Specification](https://datatracker.ietf.org/doc/html/rfc1035#section-2.3.4) for additional information.
Consul randomizes DNS SRV records and ignores weights specified in service configurations when printing responses. If records are truncated, each client using weighted SRV responses may have partial and inconsistent views of instance weights. As a result, the request distribution may be skewed from the intended weights. We recommend calling the [`/catalog/nodes` API endpoint](/consul/api-docs/catalog#list-nodes) to retrieve the complete list of nodes. You can apply query parameters to API calls to sort and filter the results.
### Standard lookups
+
To perform standard service lookups, specify tags, the name of the service, datacenter, cluster peer, and domain using the following syntax to query for service providers:
```text
[<tag>.]<service>.service[.<datacenter>.dc][.<cluster-peer>.peer][.<sameness-group>.sg].<domain>
```
-The `tag` subdomain is optional. It filters responses so that only service providers containing the tag appear.
-
-The `datacenter` subdomain is optional. By default, Consul interrogates the querying agent's datacenter.
+- The `tag` subdomain is optional. It filters responses so that only service providers containing the tag appear.
-The `cluster-peer` name is optional, and specifies the [cluster peer](/consul/docs/connect/cluster-peering) whose [exported services](/consul/docs/connect/config-entries/exported-services) should be the target of the query.
+- The `datacenter` subdomain is optional. By default, Consul interrogates the querying agent's datacenter.
-The `sameness-group` name is optional, and specifies the [sameness group](/consul/docs/connect/cluster-peering/usage/create-sameness-groups) that should be the target of the query. When Consul receives a DNS request for a service that is a member of a sameness group and the sameness groups is configured with `DefaultForFailover` set to `true`, it returns service instances from the first healthy member of the sameness group. If the local partition is a member of a sameness group, local service instances take precedence over the members of its sameness group. Optionally, you can include a namespace or admin partition when performing a lookup on a sameness group.
+- The `cluster-peer` name is optional, and specifies the [cluster peer](/consul/docs/connect/cluster-peering) whose [exported services](/consul/docs/connect/config-entries/exported-services) should be the target of the query.
-Only sameness groups with `DefaultForFailover` set `true` can be queried through DNS. If `DefaultForFailover` is not true, then Consul DNS returns an error response. Refer to [Service lookups for Consul Enterprise](#service-lookups-for-consul-enterprise) for more information.
+- The `sameness-group` name is optional, and specifies the [sameness group](/consul/docs/east-west/cluster-peering/usage/create-sameness-groups) that should be the target of the query. When Consul receives a DNS request for a service that is tied to a sameness group, it returns service instances from the first healthy member of the sameness group. If the local partition is a member of a sameness group, its service instances take precedence over the members of its sameness group. Optionally, you can include a namespace or admin partition when performing a lookup on a sameness group. Refer to [Service lookups for Consul Enterprise](#service-lookups-for-consul-enterprise) for more information.
-By default, the lookups query in the `consul` domain. Refer to [Configure DNS Behaviors](/consul/docs/services/discovery/dns-configuration) for information about using alternate domains.
+By default, the lookups query in the `consul` domain. Refer to [Configure Consul DNS behavior](/consul/docs/discover/dns/configure) for information about using alternate domains.
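Because several of these labels are optional, lookup names are easy to assemble incorrectly. The following Python sketch is illustrative only (it is not part of Consul); the `tag` and `datacenter` values are hypothetical examples that mirror the lookups shown in this section:

```python
def service_fqdn(service, tag=None, datacenter=None, domain="consul"):
    """Assemble a standard Consul service lookup name:
    [<tag>.]<service>.service[.<datacenter>].<domain>
    """
    parts = [tag] if tag else []
    parts += [service, "service"]
    if datacenter:
        parts.append(datacenter)
    parts.append(domain)
    return ".".join(parts)

print(service_fqdn("web"))
# web.service.consul
print(service_fqdn("postgresql", tag="primary", datacenter="dc2"))
# primary.postgresql.service.dc2.consul
```

The second call reproduces the `primary.postgresql.service.dc2.consul` name used in the example output below.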
#### Standard lookup results
+
Standard service queries return A and SRV records. SRV records include the port that the service is registered on. SRV records are only served if the client specifically requests them.
Services that fail their health check or that fail a node system check are omitted from the results. As a load balancing measure, Consul randomizes the set of nodes returned in the response. These mechanisms help you use DNS with application-level retries as the foundation for a self-healing service-oriented architecture.
@@ -167,6 +168,7 @@ primary.postgresql.service.dc2.consul. 0 IN A 10.1.10.12
```
### RFC 2782 lookup
+
Per [RFC 2782](https://tools.ietf.org/html/rfc2782), SRV queries must prepend `service` and `protocol` values with an underscore (`_`) to prevent DNS collisions. Use the following syntax to perform RFC 2782 lookups:
```text
@@ -183,7 +185,6 @@ The following example queries the `rabbitmq` service tagged with `amqp`, which r
```shell-session
$ dig @127.0.0.1 -p 8600 _rabbitmq._amqp.service.consul SRV
-
; <<>> DiG 9.8.3-P1 <<>> @127.0.0.1 -p 8600 _rabbitmq._amqp.service.consul ANY
; (1 server found)
;; global options: +cmd
@@ -202,7 +203,7 @@ _rabbitmq._amqp.service.consul. 0 IN SRV 1 1 5672 rabbitmq.node1.dc1.consul.
rabbitmq.node1.dc1.consul. 0 IN A 10.1.11.20
```
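RFC 2782 names can be assembled the same way. This Python sketch is illustrative only; it reproduces the `_rabbitmq._amqp` name above and the optional `<peer>.peer` labels used later in this section:

```python
def rfc2782_fqdn(service, protocol, peer=None, domain="consul"):
    """Assemble an RFC 2782 SRV lookup name:
    _<service>._<protocol>.service[.<peer>.peer].<domain>
    """
    parts = [f"_{service}", f"_{protocol}", "service"]
    if peer:
        parts += [peer, "peer"]
    parts.append(domain)
    return ".".join(parts)

print(rfc2782_fqdn("rabbitmq", "amqp"))
# _rabbitmq._amqp.service.consul
print(rfc2782_fqdn("redis", "tcp", peer="phx1"))
# _redis._tcp.service.phx1.peer.consul
```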
-You can also perform RFC 2782 lookups that target a specific [cluster peer](/consul/docs/connect/cluster-peering) or datacenter by including `.dc` or `.peer` in the query labels:
+You can also perform RFC 2782 lookups that target a specific [cluster peer](/consul/docs/east-west/cluster-peering) or datacenter by including `.dc` or `.peer` in the query labels:
```text
_<service>._<tag>[.service][.<datacenter>.dc][.<peer>.peer].<domain>
@@ -211,8 +212,7 @@ _._[.service][..dc][..peer].
The following example queries the `redis` service tagged with `tcp` for the cluster peer `phx1`, which returns two instances, one at `10.1.11.83:29081` and one at `10.1.11.86:29142`:
```shell-session
-dig @127.0.0.1 -p 8600 _redis._tcp.service.phx1.peer.consul SRV
-
+$ dig @127.0.0.1 -p 8600 _redis._tcp.service.phx1.peer.consul SRV
; <<>> DiG 9.18.15 <<>> @127.0.0.1 -p 8600 _redis._tcp.service.phx1.peer.consul SRV
;; global options: +cmd
;; Got answer:
@@ -235,7 +235,7 @@ _redis._tcp.service.phx1.peer.consul. 0 IN SRV 1 1 29142 0a010d56.addr.consul.
#### SRV responses for hosts in the .addr subdomain
-If a service registered with Consul is configured with an explicit IP address or addresses in the [`address`](/consul/docs/services/configuration/services-configuration-reference#address) or [`tagged_address`](/consul/docs/services/configuration/services-configuration-reference#tagged_address) parameter, then Consul returns the hostname in the target field of the answer section for the DNS SRV query according to the following format:
+If a service registered with Consul is configured with an explicit IP address or addresses in the [`address`](/consul/docs/reference/service#address) or [`tagged_address`](/consul/docs/reference/service#tagged_address) parameter, then Consul returns the hostname in the target field of the answer section for the DNS SRV query according to the following format:
```text
<hex-encoded-address>.addr.<datacenter>.consul.
@@ -279,7 +279,7 @@ $ dig @127.0.0.1 -p 8600 -t srv _rabbitmq._tcp.service.consul +short
You can convert hex octets to decimals to reveal the IP address. The following example command converts the hostname expressed as `c000020a` into the IPv4 address specified in the service registration.
-```
+```shell-session
$ echo -n "c000020a" | perl -ne 'printf("%vd\n", pack("H*", $_))'
192.0.2.10
```
@@ -313,7 +313,7 @@ services {
-The following example SRV query response contains a single record with a hostname written as a hexadecimal value:
+The following example SRV query response contains a single record with a hostname written as a hexadecimal value.
```shell-session
$ dig @127.0.0.1 -p 8600 -t SRV _rabbitmq._tcp.service.consul +short
@@ -328,6 +328,7 @@ $ echo -n "20010db800010002cafe000000001337" | perl -ne 'printf join(":", unpack
```
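The Perl one-liners above can also be written as a short Python sketch; the standard library `ipaddress` module decodes both the IPv4 and IPv6 hex encodings shown in this section:

```python
import ipaddress

def addr_from_hex(hex_label):
    """Decode the hex label from a Consul .addr hostname into an IP address string."""
    # bytes.fromhex yields 4 bytes for IPv4 or 16 bytes for IPv6;
    # ipaddress.ip_address accepts either packed form.
    return str(ipaddress.ip_address(bytes.fromhex(hex_label)))

print(addr_from_hex("c000020a"))
# 192.0.2.10
print(addr_from_hex("20010db800010002cafe000000001337"))
# 2001:db8:1:2:cafe::1337
```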
### Service lookups for Consul Enterprise
+
You can perform the following types of service lookups to query for services in another namespace, partition, and datacenter:
- `.service`
@@ -379,7 +380,7 @@ This returns the unique virtual IP for any service mesh-capable service. Each se
The peer name is optional. Consul DNS uses it to query for the virtual IP of a service imported from the specified peer.
-Consul adds virtual IPs to the [`tagged_addresses`](/consul/docs/services/configuration/services-configuration-reference#tagged_addresses) field in the service definition under the `consul-virtual` tag.
+Consul adds virtual IPs to the [`tagged_addresses`](/consul/docs/reference/service#tagged_addresses) field in the service definition under the `consul-virtual` tag.
#### Service virtual IP lookups for Consul Enterprise
@@ -392,7 +393,7 @@ To lookup services imported from a partition in another cluster peered to the qu
To lookup services in a cluster peer that have not been imported, refer to [Service lookups for Consul Enterprise](#service-lookups-for-consul-enterprise).
-### Ingress Service Lookups
+### Ingress service lookups
Add the `.ingress` subdomain to your DNS FQDN to find ingress-enabled services:
diff --git a/website/content/docs/discover/vm.mdx b/website/content/docs/discover/vm.mdx
new file mode 100644
index 00000000000..5a66a343509
--- /dev/null
+++ b/website/content/docs/discover/vm.mdx
@@ -0,0 +1,52 @@
+---
+layout: docs
+page_title: Discover services on virtual machines (VMs)
+description: >-
+ This topic provides an overview of the service discovery features and operations enabled on virtual machines by Consul DNS, including application load balancing, static lookups, and prepared queries.
+---
+
+# Discover services on virtual machines (VMs)
+
+This topic provides an overview of Consul service discovery operations on virtual machines. After you register services with Consul, you can address them using Consul DNS to perform application load balancing and static service lookups. You can also create prepared queries for dynamic service lookups and service failover.
+
+## Introduction
+
+When a service registers with Consul, the catalog records the address of each service's node. Consul then updates an instance's catalog entry with the results of each health check it performs. Consul servers replicate catalog information among themselves using the [Raft consensus protocol](/consul/docs/architecture/consensus), enabling highly available service networking through any Consul agent.
+
+Consul's service discovery operations use [Consul DNS addresses](/consul/docs/discover/dns) to route traffic to healthy service instances and return information about service nodes registered to Consul.
+
+## Application load balancing
+
+@include 'text/descriptions/load-balancer.mdx'
+
+## Static lookups
+
+@include 'text/descriptions/static-query.mdx'
+
+## Prepared queries
+
+@include 'text/descriptions/prepared-query.mdx'
+
+## Tutorials
+
+To get started with Consul's service discovery features on VMs, refer to the following tutorials:
+
+- [Register your services to Consul](/consul/tutorials/get-started-vms/virtual-machine-gs-service-discovery) includes service queries with Consul DNS
+- [Ensure only healthy services are discoverable](/consul/tutorials/developer-discovery/service-registration-health-checks)
+- [DNS caching](/consul/tutorials/networking/dns-caching)
+- [Forward DNS for Consul service discovery](/consul/tutorials/networking/dns-forwarding)
+- [Register external services with Consul service discovery](/consul/tutorials/developer-discovery/service-registration-external-services)
+- [Load Balancing with NGINX Plus' Service Discovery Integration](/consul/tutorials/load-balancing/load-balancing-nginx-plus)
+- [Load Balancing with NGINX and Consul Template](/consul/tutorials/load-balancing/load-balancing-nginx)
+- [Load Balancing with HAProxy Service Discovery Integration](/consul/tutorials/load-balancing/load-balancing-haproxy)
+- [Load balancing with F5 and Consul](/consul/tutorials/load-balancing/load-balancing-f5)
+
+## Reference documentation
+
+For reference material related to Consul's service discovery functions, refer to the following pages:
+
+- [Consul DNS reference](/consul/docs/reference/dns)
+
+## Constraints, limitations, and troubleshooting
+
+@include 'text/limitations/discover.mdx'
\ No newline at end of file
diff --git a/website/content/docs/dynamic-app-config/kv/index.mdx b/website/content/docs/dynamic-app-config/kv/index.mdx
deleted file mode 100644
index 677037321d6..00000000000
--- a/website/content/docs/dynamic-app-config/kv/index.mdx
+++ /dev/null
@@ -1,120 +0,0 @@
----
-layout: docs
-page_title: Key/Value (KV) Store Overview
-description: >-
- Consul includes a KV store for indexed objects, configuration parameters, and metadata that you can use to dynamically configure apps. Learn about accessing and using the KV store to extend Consul's functionality through watches, sessions, and Consul Template.
----
-
-# Key/Value (KV) Store Overview
-
-
-The Consul KV API, CLI, and UI are now considered feature complete and no new feature development is planned for future releases.
-
-
-Consul KV is a core feature of Consul and is installed with the Consul agent.
-Once installed with the agent, it will have reasonable defaults. Consul KV allows
-users to store indexed objects, though its main uses are storing configuration
-parameters and metadata. Please note that it is a simple KV store and is not
-intended to be a full featured datastore (such as DynamoDB) but has some
-similarities to one.
-
-The Consul KV datastore is located on the servers, but can be accessed by any
-agent (client or server). The natively integrated [RPC
-functionality](/consul/docs/architecture) allows clients to forward
-requests to servers, including key/value reads and writes. Part of Consul's
-core design allows data to be replicated automatically across all the servers.
-Having a quorum of servers will decrease the risk of data loss if an outage
-occurs.
-
-If you have not used Consul KV, complete this [Getting Started
-tutorial](/consul/tutorials/interactive/get-started-key-value-store?utm_source=docs) on HashiCorp.
-
-## Accessing the KV store
-
-The KV store can be accessed by the [consul kv CLI
-subcommands](/consul/commands/kv), [HTTP API](/consul/api-docs/kv), and Consul UI.
-To restrict access, enable and configure
-[ACLs](/consul/tutorials/security/access-control-setup-production).
-Once the ACL system has been bootstrapped, users and services, will need a
-valid token with KV [privileges](/consul/docs/security/acl/acl-rules#key-value-rules) to
-access the data store, this includes even reads. We recommend creating a
-token with limited privileges, for example, you could create a token with write
-privileges on one key for developers to update the value related to their
-application.
-
-The datastore itself is located on the Consul servers in the [data
-directory](/consul/docs/agent/config/cli-flags#_data_dir). To ensure data is not lost in
-the event of a complete outage, use the [`consul snapshot`](/consul/commands/snapshot/restore) feature to backup the data.
-
-## Using Consul KV
-
-Objects are opaque to Consul, meaning there are no restrictions on the type of
-object stored in a key/value entry. The main restriction on an object is size -
-the maximum is 512 KB. Due to the maximum object size and main use cases, you
-should not need extra storage; the general [sizing
-recommendations](/consul/docs/agent/config/config-files#kv_max_value_size)
-are usually sufficient.
-
-Keys, like objects are not restricted by type and can include any character.
-However, we recommend using URL-safe chars - `[a-zA-Z0-9-._~]` with the
-exception of `/`, which can be used to help organize data. Note, `/` will be
-treated like any other character and is not fixed to the file system. Meaning,
-including `/` in a key does not fix it to a directory structure. This model is
-similar to Amazon S3 buckets. However, `/` is still useful for organizing data
-and when recursively searching within the data store. We also recommend that
-you avoid the use of `*`, `?`, `'`, and `%` because they can cause issues when
-using the API and in shell scripts.
-
-## Using Sentinel to apply policies for Consul KV
-
-
-
-This feature requires
-HashiCorp Cloud Platform (HCP) or self-managed Consul Enterprise.
-
-
-
-You can also use Sentinel as a Policy-as-code framework for defining advanced key-value storage access control policies. Sentinel policies extend the ACL system in Consul beyond static "read", "write",
-and "deny" policies to support full conditional logic and integration with
-external systems. Reference the [Sentinel documentation](https://docs.hashicorp.com/sentinel/concepts) for high-level Sentinel concepts.
-
-To get started with Sentinel in Consul,
-refer to the [Sentinel documentation](https://docs.hashicorp.com/sentinel/consul) or
-[Consul documentation](/consul/docs/agent/sentinel).
-
-
-## Extending Consul KV
-
-### Consul Template
-
-If you plan to use Consul KV as part of your configuration management process
-review the [Consul
-Template](/consul/tutorials/developer-configuration/consul-template?utm_source=docs)
-tutorial on how to update configuration based on value updates in the KV. Consul
-Template is based on Go Templates and allows for a series of scripted actions
-to be initiated on value changes to a Consul key.
-
-### Watches
-
-Consul KV can also be extended with the use of watches.
-[Watches](/consul/docs/dynamic-app-config/watches) are a way to monitor data for updates. When
-an update is detected, an external handler is invoked. To use watches with the
-KV store the [key](/consul/docs/dynamic-app-config/watches#key) watch type should be used.
-
-### Consul Sessions
-
-Consul sessions can be used to build distributed locks with Consul KV. Sessions
-act as a binding layer between nodes, health checks, and key/value data. The KV
-API supports an `acquire` and `release` operation. The `acquire` operation acts
-like a Check-And-Set operation. On success, there is a key update and an
-increment to the `LockIndex` and the session value is updated to reflect the
-session holding the lock. Review the session documentation for more information
-on the [integration](/consul/docs/dynamic-app-config/sessions#k-v-integration).
-
-Review the following tutorials to learn how to use Consul sessions for [application leader election](/consul/docs/dynamic-app-config/sessions/application-leader-election) and
-to [build distributed semaphores](/consul/tutorials/developer-configuration/distributed-semaphore).
-
-### Vault
-
-If you plan to use Consul KV as a backend for Vault, please review [this
-tutorial](/vault/tutorials/day-one-consul/ha-with-consul?utm_source=docs).
diff --git a/website/content/docs/dynamic-app-config/sessions/application-leader-election.mdx b/website/content/docs/dynamic-app-config/sessions/application-leader-election.mdx
deleted file mode 100644
index 5b14bcdc9e1..00000000000
--- a/website/content/docs/dynamic-app-config/sessions/application-leader-election.mdx
+++ /dev/null
@@ -1,396 +0,0 @@
----
-layout: docs
-page_title: Application leader election
-description: >-
- Learn how to perform client-side leader elections using sessions and Consul key/value (KV) store.
----
-
-# Application leader election
-
-This topic describes the process for building client-side leader elections for service instances using Consul's [session mechanism for building distributed locks](/consul/docs/dynamic-app-config/sessions) and the [Consul key/value store](/consul/docs/dynamic-app-config/kv), which is Consul's key/value datastore.
-
-This topic is not related to Consul's leader election. For more information about the Raft leader election used internally by Consul, refer to
-[consensus protocol](/consul/docs/architecture/consensus) documentation.
-
-## Background
-
-Some distributed applications, like HDFS or ActiveMQ, require setting up one instance as a leader to ensure application data is current and stable.
-
-Consul's support for [sessions](/consul/docs/dynamic-app-config/sessions) and [watches](/consul/docs/dynamic-app-config/watches) allows you to build a client-side leader election process where clients use a lock on a key in the KV datastore to ensure mutual exclusion and to gracefully handle failures.
-
-All service instances that are participating should coordinate on a key format. We recommend the following pattern:
-
-```plaintext
-service/<service name>/leader
-```
-
-## Requirements
-
-- A running Consul server
-- A path in the Consul KV datastore to acquire locks and to store information about the leader. The instructions on this page use the following key: `service/leader`.
-- If ACLs are enabled, a token with the following permissions:
- - `session:write` permissions over the service session name
- - `key:write` permissions over the key
- - The `curl` command
-
-Expose the token using the `CONSUL_HTTP_TOKEN` environment variable.
-
-## Client-side leader election procedure
-
-The workflow for building a client-side leader election process has the following steps:
-
-- For each client trying to acquire the lock:
- 1. [Create a session](#create-a-new-session) associated with the client node.
- 1. [Acquire the lock](#acquire-the-lock) on the designated key in the KV store using the `acquire` parameter.
- 1. [Watch the KV key](#watch-the-kv-key-for-locks) to verify if the lock was released. If no lock is present, try to acquire a lock.
-
-- For the client that acquires the lock:
- 1. Periodically, [renew the session](#renew-a-session) to avoid expiration.
- 1. Optionally, [release the lock](#release-a-lock).
-
-- For other services:
- 1. [Watch the KV key](#watch-the-kv-key-for-locks) to verify there is at least one process holding the lock.
- 1. Use the values written under the KV path to identify the leader and update configurations accordingly.
-
-## Create a new session
-
-Create a configuration for the session.
-The minimum viable configuration requires that you specify the session name. The following example demonstrates this configuration.
-
-
-
-```json
-{
- "Name": "session_name"
-}
-```
-
-
-
-Create a session using the [`/session` Consul HTTP API](/consul/api-docs/session) endpoint. In the following example, the node's `hostname` is the session name.
-
-```shell-session
-$ curl --silent \
- --header "X-Consul-Token: $CONSUL_HTTP_TOKEN" \
- --data '{"Name": "'`hostname`'"}' \
- --request PUT \
- http://127.0.0.1:8500/v1/session/create | jq
-```
-
-
-The command returns a JSON object containing the ID of the newly created session.
-
-```json
-{
- "ID": "d21d60ad-c2d2-b32a-7432-ca4048a4a7d6"
-}
-```
-
-### Verify session
-
-Use the `/v1/session/list` endpoint to retrieve existing sessions.
-
-```shell-session
-$ curl --silent \
- --header "X-Consul-Token: $CONSUL_HTTP_TOKEN" \
- --request GET \
- http://127.0.0.1:8500/v1/session/list | jq
-```
-
-The command returns a JSON array containing all available sessions in the system.
-
-
-
-```json
-[
- {
- "ID": "d21d60ad-c2d2-b32a-7432-ca4048a4a7d6",
- "Name": "hashicups-db-0",
- "Node": "hashicups-db-0",
- "LockDelay": 15000000000,
- "Behavior": "release",
- "TTL": "",
- "NodeChecks": [
- "serfHealth"
- ],
- "ServiceChecks": null,
- "CreateIndex": 11956,
- "ModifyIndex": 11956
- }
-]
-```
-
-
-
-You can verify from the output that the session is associated with the `hashicups-db-0` node, which is the client agent where the API request was made.
-
-With the exception of the `Name`, all parameters are set to their default values. The session is created without a `TTL` value, which means that it never expires and requires you to delete it explicitly.
-
-Depending on your needs you can create sessions specifying more parameters such as:
-
-- `TTL` - If provided, the session is invalidated and deleted if it is not renewed before the TTL expires.
-- `ServiceChecks` - Specifies a list of service checks to monitor. The session is invalidated if the checks return a critical state.
-
-By setting these extra parameters, you can create a client-side leader election workflow that automatically releases the lock after a specified amount of time since the last renew, or that automatically releases locks when the service holding them fails.
-
-For a full list of parameters available refer to the [`/session/create` endpoint documentation](/consul/api-docs/session#create-session).
-
-## Acquire the lock
-
-Create the data object to associate to the lock request.
-
-The data of the request should be a JSON object representing the local instance. This value is opaque to Consul, but it should contain whatever information clients require to communicate with your application. For example, it could be a JSON object that contains the node's name and the application's port.
-
-
-
-```json
-{
- "Node": "node-name",
- "Port": "8080"
-}
-```
-
-
-
-
-
-
-Acquire a lock for a given key using the PUT method on a [KV entry](/consul/api-docs/kv) with the
-`?acquire=<session>` query parameter.
-
-```shell-session
-$ curl --silent \
- --header "X-Consul-Token: $CONSUL_HTTP_TOKEN" \
- --data '{"Node": "'`hostname`'"}' \
- --request PUT \
- http://localhost:8500/v1/kv/service/leader?acquire=d21d60ad-c2d2-b32a-7432-ca4048a4a7d6 | jq
-```
-
-This request returns either `true` or `false`. If `true`, the lock was acquired and
-the local service instance is now the leader. If `false`, a different node acquired
-the lock.
-
-
-
-
-
-```shell-session
-$ consul kv put -acquire -session=d21d60ad-c2d2-b32a-7432-ca4048a4a7d6 /service/leader '{"Node": "'`hostname`'"}'
-```
-
-In case of success, the command exits with exit code `0` and outputs the following message.
-
-
-
-```plaintext
-Success! Lock acquired on: service/leader
-```
-
-
-
-If the lock was already acquired by another node, the command exits with exit code `1` and outputs the following message.
-
-
-
-```plaintext
-Error! Did not acquire lock
-```
-
-
-
-
-
-
-This example used the node's `hostname` as the key data. This data can be used by the other services to create configuration files.
-
-Be aware that this locking system has no enforcement mechanism that requires clients to acquire a lock before they perform an operation. Any client can read, write, and delete a key without owning the corresponding lock.
-
-## Watch the KV key for locks
-
-Existing locks need to be monitored by all nodes involved in the client-side leader elections, as well as by the other nodes that need to know the identity of the leader.
-
- - Lock holders need to monitor the lock because the session might get invalidated by an operator.
- - Other services that want to acquire the lock need to monitor it to check if the lock is released so they can try acquire the lock.
- - Other nodes need to monitor the lock to see if the value of the key changed and update their configuration accordingly.
-
-
-
-
-Monitor the lock using the GET method on a [KV entry](/consul/api-docs/kv) with the blocking query enabled.
-
-First, verify the latest index for the current value.
-
-```shell-session
-$ curl --silent \
- --header "X-Consul-Token: $CONSUL_HTTP_TOKEN" \
- --request GET \
- http://127.0.0.1:8500/v1/kv/service/leader?index=1 | jq
-```
-
-The command outputs the key data, including the `ModifyIndex` for the object.
-
-
-
-```json
-[
- {
- "LockIndex": 0,
- "Key": "service/leader",
- "Flags": 0,
- "Value": "eyJOb2RlIjogImhhc2hpY3Vwcy1kYi0wIn0=",
- "Session": "d21d60ad-c2d2-b32a-7432-ca4048a4a7d6",
- "CreateIndex": 12399,
- "ModifyIndex": 13061
- }
-]
-```
-
-
-
-Using the value of the `ModifyIndex`, run a [blocking query](/consul/api-docs/features/blocking) against the lock.
-
-```shell-session
-$ curl --silent \
- --header "X-Consul-Token: $CONSUL_HTTP_TOKEN" \
- --request GET \
- http://127.0.0.1:8500/v1/kv/service/leader?index=13061 | jq
-```
-The command hangs until a change is made on the KV path and after that the path data prints on the console.
-
-
-
-```json
-[
- {
- "LockIndex": 0,
- "Key": "service/leader",
- "Flags": 0,
- "Value": "eyJOb2RlIjogImhhc2hpY3Vwcy1kYi0xIn0=",
- "Session": "d21d60ad-c2d2-b32a-7432-ca4048a4a7d6",
- "CreateIndex": 12399,
- "ModifyIndex": 13329
- }
-]
-```
-
-
-
-For automation purposes, add logic to the blocking query mechanism to trigger a command every time a change is returned.
-A better approach is to use the CLI command `consul watch`.
-
-
-
-
-Monitor the lock using the [`consul watch`](/consul/commands/watch) command.
-
-```shell-session
-$ consul watch -type=key -key=service/leader cat | jq
-```
-
-In this example, the command output prints to the shell. However, it is possible to pass more complex option to the command as well as a script that contains more complex logic to react to the lock data change.
-
-An example output for the command is:
-
-
-
-```json
-{
- "Key": "service/leader",
- "CreateIndex": 12399,
- "ModifyIndex": 13061,
- "LockIndex": 0,
- "Flags": 0,
- "Value": "eyJOb2RlIjogImhhc2hpY3Vwcy1kYi0wIn0=",
- "Session": "d21d60ad-c2d2-b32a-7432-ca4048a4a7d6"
-}
-```
-
-
-
-The `consul watch` command polls the KV path for changes and runs the specified command on the output when a change is made.
-
-
-
-
-From the output, notice that once the lock is acquired, the `Session` parameter contains the ID of the session that holds the lock.
-
-## Renew a session
-
-If a session is created with a `TTL` value set, you need to renew the session before the TTL expires.
-
-Use the [`/v1/session/renew`](/consul/api-docs/session#renew-session) endpoint to renew existing sessions.
-
-```shell-session
-$ curl --silent \
- --header "X-Consul-Token: $CONSUL_HTTP_TOKEN" \
- --request PUT \
- http://127.0.0.1:8500/v1/session/renew/f027470f-2759-6b53-542d-066ae4185e67 | jq
-```
-
-If the command succeeds, the session information in JSON format is printed.
-
-
-
-```json
-[
- {
- "ID": "f027470f-2759-6b53-542d-066ae4185e67",
- "Name": "test",
- "Node": "consul-server-0",
- "LockDelay": 15000000000,
- "Behavior": "release",
- "TTL": "30s",
- "NodeChecks": [
- "serfHealth"
- ],
- "ServiceChecks": null,
- "CreateIndex": 11842,
- "ModifyIndex": 11842
- }
-]
-```
-
-
-
-## Release a lock
-
-A lock associated with a session with no `TTL` value set might never be released, even when the service holding it fails.
-
-In such cases, you need to manually release the lock.
-
-
-
-
-```shell-session
-$ curl --silent \
- --header "X-Consul-Token: $CONSUL_HTTP_TOKEN" \
- --data '{"Node": "'`hostname`'"}' \
- --request PUT \
- http://localhost:8500/v1/kv/service/leader?release=d21d60ad-c2d2-b32a-7432-ca4048a4a7d6 | jq
-```
-
-The command prints `true` on success.
-
-
-
-
-```shell-session
-$ consul kv put -release -session=d21d60ad-c2d2-b32a-7432-ca4048a4a7d6 service/leader '{"Node": "'`hostname`'"}'
-```
-
-On success, the command outputs a success message.
-
-
-
-```plaintext
-Success! Lock released on: service/leader
-```
-
-
-
-
-
-
-After a lock is released, the key data do not show a value for `Session` in the results.
-Other clients can use this as a way to coordinate their lock requests.
-
diff --git a/website/content/docs/dynamic-app-config/sessions/index.mdx b/website/content/docs/dynamic-app-config/sessions/index.mdx
deleted file mode 100644
index 3eb26f558f5..00000000000
--- a/website/content/docs/dynamic-app-config/sessions/index.mdx
+++ /dev/null
@@ -1,144 +0,0 @@
----
-layout: docs
-page_title: Sessions and Distributed Locks Overview
-description: >-
- Consul supports sessions that you can use to build distributed locks with granular locking. Learn about sessions, how they can prevent ""split-brain"" systems by ensuring consistency in deployments, and how they can integrate with the key/value (KV) store.
----
-
-# Sessions and Distributed Locks Overview
-
-Consul provides a session mechanism which can be used to build distributed locks.
-Sessions act as a binding layer between nodes, health checks, and key/value data.
-They are designed to provide granular locking and are heavily inspired by
-[The Chubby Lock Service for Loosely-Coupled Distributed Systems](http://research.google.com/archive/chubby.html).
-
-## Session Design
-
-A session in Consul represents a contract that has very specific semantics.
-When a session is constructed, a node name, a list of health checks, a behavior,
-a TTL, and a `lock-delay` may be provided. The newly constructed session is provided with
-a named ID that can be used to identify it. This ID can be used with the KV
-store to acquire locks: advisory mechanisms for mutual exclusion.
-
-Below is a diagram showing the relationship between these components:
-
-
-
-The contract that Consul provides is that under any of the following
-situations, the session will be _invalidated_:
-
-- Node is deregistered
-- Any of the health checks are deregistered
-- Any of the health checks go to the critical state
-- Session is explicitly destroyed
-- TTL expires, if applicable
-
-When a session is invalidated, it is destroyed and can no longer
-be used. What happens to the associated locks depends on the
-behavior specified at creation time. Consul supports a `release`
-and `delete` behavior. The `release` behavior is the default
-if none is specified.
-
-If the `release` behavior is being used, any of the locks held in
-association with the session are released, and the `ModifyIndex` of
-the key is incremented. Alternatively, if the `delete` behavior is
-used, the key corresponding to any of the held locks is simply deleted.
-This can be used to create ephemeral entries that are automatically
-deleted by Consul.
-
-While this is a simple design, it enables a multitude of usage
-patterns. By default, the
-[gossip based failure detector](/consul/docs/architecture/gossip)
-is used as the associated health check. This failure detector allows
-Consul to detect when a node that is holding a lock has failed and
-to automatically release the lock. This ability provides **liveness** to
-Consul locks; that is, under failure the system can continue to make
-progress. However, because there is no perfect failure detector, it's possible
-to have a false positive (failure detected) which causes the lock to
-be released even though the lock owner is still alive. This means
-we are sacrificing some **safety**.
-
-Conversely, it is possible to create a session with no associated
-health checks. This removes the possibility of a false positive
-and trades liveness for safety. You can be absolutely certain Consul
-will not release the lock even if the existing owner has failed.
-Since Consul APIs allow a session to be force destroyed, this allows
-systems to be built that require an operator to intervene in the
-case of a failure while precluding the possibility of a split-brain.
-
-A third health checking mechanism is session TTLs. When creating
-a session, a TTL can be specified. If the TTL interval expires without
-being renewed, the session has expired and an invalidation is triggered.
-This type of failure detector is also known as a heartbeat failure detector.
-It is less scalable than the gossip based failure detector as it places
-an increased burden on the servers but may be applicable in some cases.
-The contract of a TTL is that it represents a lower bound for invalidation;
-that is, Consul will not expire the session before the TTL is reached, but it
-is allowed to delay the expiration past the TTL. The TTL is renewed on
-session creation, on session renew, and on leader failover. When a TTL
-is being used, clients should be aware of clock skew issues: namely,
-time may not progress at the same rate on the client as on the Consul servers.
-It is best to set conservative TTL values and to renew in advance of the TTL
-to account for network delay and time skew.
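
As a sketch of the TTL mechanism, the session creation endpoint accepts a
`TTL` field, and the session can be refreshed through the renew endpoint. The
agent address, session name, and session ID below are illustrative:

```shell
# Create a session with a 15 second TTL against a local agent.
curl -s -X PUT http://127.0.0.1:8500/v1/session/create \
  -d '{"Name": "db-lock", "TTL": "15s"}'

# Renew well before the TTL elapses to account for network delay
# and clock skew (session ID shown is illustrative).
curl -s -X PUT \
  http://127.0.0.1:8500/v1/session/renew/adf4238a-882b-9ddc-4a9d-5b6758e4159e
```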
-
-The final nuance is that sessions may provide a `lock-delay`. This
-is a time duration, between 0 and 60 seconds. When a session invalidation
-takes place, Consul prevents any of the previously held locks from
-being re-acquired for the `lock-delay` interval; this is a safeguard
-inspired by Google's Chubby. The purpose of this delay is to allow
-the potentially still live leader to detect the invalidation and stop
-processing requests that may lead to inconsistent state. While not a
-bulletproof method, it does avoid the need to introduce sleep states
-into application logic and can help mitigate many issues. While the
-default is to use a 15 second delay, clients are able to disable this
-mechanism by providing a zero delay value.
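
For example, a session could opt out of the delay by setting `LockDelay` to
zero at creation time (the agent address and session name are illustrative):

```shell
# Disable the lock-delay safeguard for this session (the default is 15s).
curl -s -X PUT http://127.0.0.1:8500/v1/session/create \
  -d '{"Name": "batch-job-lock", "LockDelay": "0s"}'
```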
-
-## K/V Integration
-
-Integration between the KV store and sessions is the primary
-place where sessions are used. A session must be created prior to use
-and is then referred to by its ID.
-
-The KV API is extended to support an `acquire` and `release` operation.
-The `acquire` operation acts like a Check-And-Set operation except it
-can only succeed if there is no existing lock holder (the current lock holder
-can re-`acquire`, see below). On success, there is a normal key update, but
-there is also an increment to the `LockIndex`, and the `Session` value is
-updated to reflect the session holding the lock.
-
-If the lock is already held by the given session during an `acquire`, then
-the `LockIndex` is not incremented but the key contents are updated. This
-lets the current lock holder update the key contents without having to give
-up the lock and reacquire it.
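
A minimal sketch of an `acquire` through the KV HTTP API, assuming a session
has already been created (the key, session ID, and agent address are
illustrative):

```shell
SESSION_ID="adf4238a-882b-9ddc-4a9d-5b6758e4159e"  # illustrative session ID

# Attempt to acquire the lock; the request body becomes the key's contents.
# The API returns "true" if the lock was acquired, "false" otherwise.
curl -s -X PUT \
  "http://127.0.0.1:8500/v1/kv/locks/leader?acquire=${SESSION_ID}" \
  -d 'node-a'
```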
-
-Once held, the lock can be released using a corresponding `release` operation,
-providing the same session. Again, this acts like a Check-And-Set operation
-since the request will fail if given an invalid session. A critical note is
-that a lock can be released by any client that supplies the session ID, not
-only the session's creator. This is by design: it allows operators to
-intervene and force-terminate a session if necessary. As mentioned above, a session invalidation will also
-cause all held locks to be released or deleted. When a lock is released, the `LockIndex`
-does not change; however, the `Session` is cleared and the `ModifyIndex` increments.
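
Releasing uses the mirror-image `release` query parameter, and reading the key
back shows the cleared `Session` (the key and session ID are illustrative):

```shell
SESSION_ID="adf4238a-882b-9ddc-4a9d-5b6758e4159e"  # illustrative session ID

# Release the lock held under this session; returns "true" on success.
curl -s -X PUT \
  "http://127.0.0.1:8500/v1/kv/locks/leader?release=${SESSION_ID}"

# Read the key back: LockIndex is unchanged, Session is now empty.
curl -s http://127.0.0.1:8500/v1/kv/locks/leader
```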
-
-These semantics (heavily borrowed from Chubby) allow the tuple of (Key, LockIndex, Session)
-to act as a unique "sequencer". This `sequencer` can be passed around and used
-to verify if the request belongs to the current lock holder. Because the `LockIndex`
-is incremented on each `acquire`, even if the same session re-acquires a lock,
-the `sequencer` will be able to detect a stale request. Similarly, if a session is
-invalidated, the Session corresponding to the given `LockIndex` will be blank.
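
The sequencer check itself is just a tuple comparison. A minimal client-side
sketch of the idea (the function and variable names are illustrative, not part
of Consul's API):

```python
def sequencer_valid(sequencer, current):
    """Check whether a (key, lock_index, session) sequencer still matches
    the current state of the lock as read back from the KV store."""
    key, lock_index, session = sequencer
    cur_key, cur_lock_index, cur_session = current
    # A stale request has an older LockIndex, or a session that no longer
    # holds the lock (an invalidated session reads back as empty).
    return (key == cur_key
            and lock_index == cur_lock_index
            and session == cur_session
            and session != "")

# A re-acquire by the same session increments LockIndex,
# so the old sequencer is detected as stale.
old = ("locks/leader", 3, "adf4238a")
assert not sequencer_valid(old, ("locks/leader", 4, "adf4238a"))
# An invalidated session reads back with a blank Session field.
assert not sequencer_valid(old, ("locks/leader", 3, ""))
# The current holder's sequencer still matches.
assert sequencer_valid(old, ("locks/leader", 3, "adf4238a"))
```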
-
-To be clear, this locking system is purely _advisory_. There is no enforcement
-that clients must acquire a lock to perform any operation. Any client can
-read, write, and delete a key without owning the corresponding lock. It is not
-the goal of Consul to protect against misbehaving clients.
-
-## Leader Election
-
-You can use the primitives provided by sessions and the locking mechanisms of the KV
-store to build client-side leader election algorithms.
-These are covered in more detail in the [Leader Election guide](/consul/docs/dynamic-app-config/sessions/application-leader-election).
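
As one concrete entry point, the `consul lock` helper wraps this
session-and-KV machinery: it acquires a lock under the given KV prefix, runs a
child process while the lock is held, and releases the lock on exit (the
script name below is illustrative):

```shell
# Run a command only while holding the lock at the given KV prefix.
# The child process is terminated if the lock is lost, for example
# due to session invalidation.
consul lock locks/leader ./run-leader-tasks.sh
```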
-
-## Prepared Query Integration
-
-Prepared queries may be attached to a session in order to automatically delete
-the prepared query when the session is invalidated.
diff --git a/website/content/docs/k8s/connect/cluster-peering/usage/establish-peering.mdx b/website/content/docs/east-west/cluster-peering/establish/k8s.mdx
similarity index 92%
rename from website/content/docs/k8s/connect/cluster-peering/usage/establish-peering.mdx
rename to website/content/docs/east-west/cluster-peering/establish/k8s.mdx
index bc82be872a1..2767451ca2c 100644
--- a/website/content/docs/k8s/connect/cluster-peering/usage/establish-peering.mdx
+++ b/website/content/docs/east-west/cluster-peering/establish/k8s.mdx
@@ -18,9 +18,9 @@ The overall process for establishing a cluster peering connection consists of th
Cluster peering between services cannot be established until all four steps are complete.
-Cluster peering between services cannot be established until all four steps are complete. If you want to establish cluster peering connections and create sameness groups at the same time, refer to the guidance in [create sameness groups](/consul/docs/k8s/connect/cluster-peering/usage/create-sameness-groups).
+Cluster peering between services cannot be established until all four steps are complete. If you want to establish cluster peering connections and create sameness groups at the same time, refer to the guidance in [create sameness groups](/consul/docs/east-west/sameness-group).
-For general guidance for establishing cluster peering connections, refer to [Establish cluster peering connections](/consul/docs/connect/cluster-peering/usage/establish-cluster-peering).
+For general guidance for establishing cluster peering connections, refer to [Establish cluster peering connections](/consul/docs/east-west/cluster-peering/establish-connection/vm).
## Prerequisites
@@ -194,7 +194,7 @@ Next, use the peering token to establish a secure connection between the cluster
After you establish a connection between the clusters, you need to create an `exported-services` CRD that defines the services that are available to another admin partition.
-While the CRD can target admin partitions either locally or remotely, clusters peering always exports services to remote admin partitions. Refer to [exported service consumers](/consul/docs/connect/config-entries/exported-services#consumers-1) for more information.
+While the CRD can target admin partitions either locally or remotely, clusters peering always exports services to remote admin partitions. Refer to [exported service consumers](/consul/docs/reference/config-entry/exported-services#consumers-1) for more information.
1. For the service in `cluster-02` that you want to export, add the `"consul.hashicorp.com/connect-inject": "true"` annotation to your service's pods prior to deploying. The annotation allows the workload to join the mesh. It is highlighted in the following example:
@@ -439,4 +439,19 @@ Before you can call services from peered clusters, you must set service intentio
}
```
-