Examples |
- Line of business (human capital management, customer relationship management, enterprise resource planning)
@@ -101,7 +101,7 @@ The following sections compare various data store models in terms of workload pr
## Document databases
-**Workload** |
+ Workload |
- General purpose.
@@ -113,7 +113,7 @@ The following sections compare various data store models in terms of workload pr
- Individual documents are retrieved and written as a single block.
|
-**Data type** |
+ Data type |
- Data can be managed in a de-normalized way.
@@ -125,7 +125,7 @@ The following sections compare various data store models in terms of workload pr
|
-**Examples** |
+ Examples |
- Product catalog
@@ -145,7 +145,7 @@ The following sections compare various data store models in terms of workload pr
## Key/value stores
-**Workload** |
+ Workload |
- Data is identified and accessed using a single ID key, like a dictionary.
@@ -156,7 +156,7 @@ The following sections compare various data store models in terms of workload pr
|
-**Data type** |
+ Data type |
- Data size tends to be large.
@@ -166,7 +166,7 @@ The following sections compare various data store models in terms of workload pr
|
-**Examples** |
+ Examples |
- Data caching
@@ -182,7 +182,7 @@ The following sections compare various data store models in terms of workload pr
## Graph databases
-**Workload** |
+ Workload |
- The relationships between data items are very complex, involving many hops between related data items.
@@ -191,7 +191,7 @@ The following sections compare various data store models in terms of workload pr
|
-**Data type** |
+ Data type |
- Data consists of nodes and relationships.
@@ -201,7 +201,7 @@ The following sections compare various data store models in terms of workload pr
|
-**Examples** |
+ Examples |
- Organization charts
@@ -217,7 +217,7 @@ The following sections compare various data store models in terms of workload pr
## Column-family databases
-**Workload** |
+ Workload |
- Most column-family databases perform write operations extremely quickly.
@@ -228,7 +228,7 @@ The following sections compare various data store models in terms of workload pr
|
-**Data type** |
+ Data type |
- Data is stored in tables consisting of a key column and one or more column families.
@@ -238,7 +238,7 @@ The following sections compare various data store models in terms of workload pr
|
-**Examples** |
+ Examples |
- Recommendations
@@ -258,7 +258,7 @@ The following sections compare various data store models in terms of workload pr
## Search engine databases
-**Workload** |
+ Workload |
- Indexing data from multiple sources and services.
@@ -270,7 +270,7 @@ The following sections compare various data store models in terms of workload pr
|
-**Data type** |
+ Data type |
- Semi-structured or unstructured
@@ -279,7 +279,7 @@ The following sections compare various data store models in terms of workload pr
|
-**Examples** |
+ Examples |
- Product catalogs
@@ -295,7 +295,7 @@ The following sections compare various data store models in terms of workload pr
## Data warehouse
-**Workload** |
+ Workload |
- Data analytics
@@ -303,17 +303,17 @@ The following sections compare various data store models in terms of workload pr
|
-**Data type** |
+ Data type |
- Historical data from multiple sources.
- - Usually denormalized in a "star" or "snowflake" schema, consisting of fact and dimension tables.
+ - Usually denormalized in a "star" or "snowflake" schema, consisting of fact and dimension tables.
- Usually loaded with new data on a scheduled basis.
- - Dimension tables often include multiple historic versions of an entity, referred to as a *slowly changing dimension*.
+ - Dimension tables often include multiple historic versions of an entity, referred to as a slowly changing dimension.
|
-**Examples** |
+ Examples |
An enterprise data warehouse that provides data for analytical models, reports, and dashboards.
|
@@ -323,7 +323,7 @@ The following sections compare various data store models in terms of workload pr
## Time series databases
-**Workload** |
+ Workload |
- An overwhelming proportion of operations (95-99%) are writes.
@@ -331,12 +331,12 @@ The following sections compare various data store models in terms of workload pr
- Updates are rare.
- Deletes occur in bulk, and are made to contiguous blocks or records.
- Read requests can be larger than available memory.
- - It's common for multiple reads to occur simultaneously.
+ - It's common for multiple reads to occur simultaneously.
- Data is read sequentially in either ascending or descending time order.
|
-**Data type** |
+ Data type |
- A time stamp that is used as the primary key and sorting mechanism.
@@ -345,7 +345,7 @@ The following sections compare various data store models in terms of workload pr
|
-**Examples** |
+ Examples |
- Monitoring and event telemetry.
@@ -358,7 +358,7 @@ The following sections compare various data store models in terms of workload pr
## Object storage
-**Workload** |
+ Workload |
- Identified by key.
@@ -368,7 +368,7 @@ The following sections compare various data store models in terms of workload pr
|
-**Data type** |
+ Data type |
- Data size is large.
@@ -377,7 +377,7 @@ The following sections compare various data store models in terms of workload pr
|
-**Examples** |
+ Examples |
- Images, videos, office documents, PDFs
@@ -393,7 +393,7 @@ The following sections compare various data store models in terms of workload pr
## Shared files
-**Workload** |
+ Workload |
- Migration from existing apps that interact with the file system.
@@ -401,7 +401,7 @@ The following sections compare various data store models in terms of workload pr
|
-**Data type** |
+ Data type |
- Files in a hierarchical set of folders.
@@ -409,7 +409,7 @@ The following sections compare various data store models in terms of workload pr
|
-**Examples** |
+ Examples |
- Legacy files
diff --git a/docs/index.md b/docs/index.md
index c1b6a0678b2..5657cd0a723 100644
--- a/docs/index.md
+++ b/docs/index.md
@@ -318,7 +318,7 @@ ms.topic: landing-page
Azure Customer Advisory Team
- The AzureCAT team's blog
+ The AzureCAT team's blog
@@ -338,7 +338,7 @@ ms.topic: landing-page
SQL Server Customer Advisory Team
- The SQLCAT team's blog
+ The SQLCAT team's blog
diff --git a/docs/multitenant-identity/authorize.md b/docs/multitenant-identity/authorize.md
index 563226a275b..24895496897 100644
--- a/docs/multitenant-identity/authorize.md
+++ b/docs/multitenant-identity/authorize.md
@@ -98,7 +98,6 @@ In earlier versions of ASP.NET, you would set the **Roles** property on the attr
```csharp
// old way
[Authorize(Roles = "SurveyCreator")]
-
```
This is still supported in ASP.NET Core, but it has some drawbacks compared with authorization policies:
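
For comparison, here is a minimal sketch of the policy-based approach (illustrative only, not code from the Surveys sample; the policy name `RequireSurveyCreator` is an assumption):

```csharp
using Microsoft.AspNetCore.Authorization;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Extensions.DependencyInjection;

public class AuthorizationPolicySketch
{
    // In Startup.ConfigureServices: define a named policy that wraps the role check.
    public static void ConfigureServices(IServiceCollection services)
    {
        services.AddAuthorization(options =>
        {
            // "RequireSurveyCreator" is an illustrative policy name.
            options.AddPolicy("RequireSurveyCreator", policy =>
                policy.RequireRole("SurveyCreator"));
        });
    }
}

public class SurveyController : Controller
{
    // Reference the policy by name instead of repeating role strings on each action.
    [Authorize(Policy = "RequireSurveyCreator")]
    public IActionResult Create()
    {
        return View();
    }
}
```

With a policy, the requirement can later be broadened (for example, to also allow the `SurveyAdmin` role) in one place, without editing every controller that uses it.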
diff --git a/docs/multitenant-identity/run-the-app.md b/docs/multitenant-identity/run-the-app.md
index b7fe3b32a18..4ed9a3af1b4 100644
--- a/docs/multitenant-identity/run-the-app.md
+++ b/docs/multitenant-identity/run-the-app.md
@@ -48,15 +48,15 @@ To complete the end-to-end scenario, you'll need a second Azure AD directory to
3. Click **App registrations** > **New application registration**.
-4. In the **Create** blade, enter the following information:
+4. In the **Create** blade, enter the following information:
- - **Name**: `Surveys.WebAPI`
+ - **Name**: `Surveys.WebAPI`
- - **Application type**: `Web app / API`
+ - **Application type**: `Web app / API`
- - **Sign-on URL**: `https://localhost:44301/`
+ - **Sign-on URL**: `https://localhost:44301/`
- ![](./images/running-the-app/register-web-api.png)
+ ![](./images/running-the-app/register-web-api.png)
5. Click **Create**.
@@ -74,15 +74,15 @@ To complete the end-to-end scenario, you'll need a second Azure AD directory to
## Register the Surveys web app
-1. Navigate back to the **App registrations** blade, and click **New application registration**.
+1. Navigate back to the **App registrations** blade, and click **New application registration**.
-2. In the **Create** blade, enter the following information:
+2. In the **Create** blade, enter the following information:
- - **Name**: `Surveys`
- - **Application type**: `Web app / API`
- - **Sign-on URL**: `https://localhost:44300/`
+ - **Name**: `Surveys`
+ - **Application type**: `Web app / API`
+ - **Sign-on URL**: `https://localhost:44300/`
- Notice that the sign-on URL has a different port number from the `Surveys.WebAPI` app in the previous step.
+ Notice that the sign-on URL has a different port number from the `Surveys.WebAPI` app in the previous step.
3. Click **Create**.
@@ -146,36 +146,36 @@ To complete the end-to-end scenario, you'll need a second Azure AD directory to
![](./images/running-the-app/manifest.png)
-3. Add the following JSON to the `appRoles` element. Generate new GUIDs for the `id` properties.
-
- ```json
- {
- "allowedMemberTypes": ["User"],
- "description": "Creators can create surveys",
- "displayName": "SurveyCreator",
- "id": "",
- "isEnabled": true,
- "value": "SurveyCreator"
- },
- {
- "allowedMemberTypes": ["User"],
- "description": "Administrators can manage the surveys in their tenant",
- "displayName": "SurveyAdmin",
- "id": "",
- "isEnabled": true,
- "value": "SurveyAdmin"
- }
- ```
-
-5. In the `knownClientApplications` property, add the application ID for the Surveys web application, which you got when you registered the Surveys application earlier. For example:
-
- ```json
- "knownClientApplications": ["be2cea23-aa0e-4e98-8b21-2963d494912e"],
- ```
-
- This setting adds the Surveys app to the list of clients authorized to call the web API.
-
-6. Click **Save**.
+3. Add the following JSON to the `appRoles` element. Generate new GUIDs for the `id` properties.
+
+ ```json
+ {
+ "allowedMemberTypes": ["User"],
+ "description": "Creators can create surveys",
+ "displayName": "SurveyCreator",
+ "id": "",
+ "isEnabled": true,
+ "value": "SurveyCreator"
+ },
+ {
+ "allowedMemberTypes": ["User"],
+ "description": "Administrators can manage the surveys in their tenant",
+ "displayName": "SurveyAdmin",
+ "id": "",
+ "isEnabled": true,
+ "value": "SurveyAdmin"
+ }
+ ```
+
+4. In the `knownClientApplications` property, add the application ID for the Surveys web application, which you got when you registered the Surveys application earlier. For example:
+
+ ```json
+ "knownClientApplications": ["be2cea23-aa0e-4e98-8b21-2963d494912e"],
+ ```
+
+ This setting adds the Surveys app to the list of clients authorized to call the web API.
+
+5. Click **Save**.
Now repeat the same steps for the Surveys app, except do not add an entry for `knownClientApplications`. Use the same role definitions, but generate new GUIDs for the IDs.
diff --git a/docs/patterns/category/availability.md b/docs/patterns/category/availability.md
index 5ff18674de4..32d7250e6a3 100644
--- a/docs/patterns/category/availability.md
+++ b/docs/patterns/category/availability.md
@@ -14,8 +14,10 @@ pnp.series.title: Cloud Design Patterns
Availability defines the proportion of time that the system is functional and working. It will be affected by system errors, infrastructure problems, malicious attacks, and system load. It is usually measured as a percentage of uptime. Cloud applications typically provide users with a service level agreement (SLA), which means that applications must be designed and implemented in a way that maximizes availability.
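For example, an SLA of 99.9 percent uptime allows for roughly 43 minutes of downtime in a 30-day month (30 × 24 × 60 × 0.001 ≈ 43.2 minutes).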
-| Pattern | Summary |
-| ------- | ------- |
+
+| Pattern | Summary |
+|----------------------------------------------------------------|------------------------------------------------------------------------------------------------------------------------------|
| [Health Endpoint Monitoring](../health-endpoint-monitoring.md) | Implement functional checks in an application that external tools can access through exposed endpoints at regular intervals. |
-| [Queue-Based Load Leveling](../queue-based-load-leveling.md) | Use a queue that acts as a buffer between a task and a service that it invokes in order to smooth intermittent heavy loads. |
-| [Throttling](../throttling.md) | Control the consumption of resources used by an instance of an application, an individual tenant, or an entire service. |
\ No newline at end of file
+| [Queue-Based Load Leveling](../queue-based-load-leveling.md) | Use a queue that acts as a buffer between a task and a service that it invokes in order to smooth intermittent heavy loads. |
+| [Throttling](../throttling.md) | Control the consumption of resources used by an instance of an application, an individual tenant, or an entire service. |
+
diff --git a/docs/patterns/category/data-management.md b/docs/patterns/category/data-management.md
index 87a3ed9ae0f..9a30f908e5d 100644
--- a/docs/patterns/category/data-management.md
+++ b/docs/patterns/category/data-management.md
@@ -14,13 +14,15 @@ pnp.series.title: Cloud Design Patterns
Data management is the key element of cloud applications, and influences most of the quality attributes. Data is typically hosted in different locations and across multiple servers for reasons such as performance, scalability or availability, and this can present a range of challenges. For example, data consistency must be maintained, and data will typically need to be synchronized across different locations.
-| Pattern | Summary |
-| ------- | ------- |
-| [Cache-Aside](../cache-aside.md) | Load data on demand into a cache from a data store |
-| [CQRS](../cqrs.md) | Segregate operations that read data from operations that update data by using separate interfaces. |
-| [Event Sourcing](../event-sourcing.md) | Use an append-only store to record the full series of events that describe actions taken on data in a domain. |
-| [Index Table](../index-table.md) | Create indexes over the fields in data stores that are frequently referenced by queries. |
-| [Materialized View](../materialized-view.md) | Generate prepopulated views over the data in one or more data stores when the data isn't ideally formatted for required query operations. |
-| [Sharding](../sharding.md) | Divide a data store into a set of horizontal partitions or shards. |
-| [Static Content Hosting](../static-content-hosting.md) | Deploy static content to a cloud-based storage service that can deliver them directly to the client. |
-| [Valet Key](../valet-key.md) | Use a token or key that provides clients with restricted direct access to a specific resource or service. |
\ No newline at end of file
+
+| Pattern | Summary |
+|--------------------------------------------------------|-------------------------------------------------------------------------------------------------------------------------------------------|
+| [Cache-Aside](../cache-aside.md) | Load data on demand into a cache from a data store |
+| [CQRS](../cqrs.md) | Segregate operations that read data from operations that update data by using separate interfaces. |
+| [Event Sourcing](../event-sourcing.md) | Use an append-only store to record the full series of events that describe actions taken on data in a domain. |
+| [Index Table](../index-table.md) | Create indexes over the fields in data stores that are frequently referenced by queries. |
+| [Materialized View](../materialized-view.md) | Generate prepopulated views over the data in one or more data stores when the data isn't ideally formatted for required query operations. |
+| [Sharding](../sharding.md) | Divide a data store into a set of horizontal partitions or shards. |
+| [Static Content Hosting](../static-content-hosting.md) | Deploy static content to a cloud-based storage service that can deliver them directly to the client. |
+| [Valet Key](../valet-key.md) | Use a token or key that provides clients with restricted direct access to a specific resource or service. |
+
diff --git a/docs/patterns/category/design-implementation.md b/docs/patterns/category/design-implementation.md
index e7d58710681..68dd46aaeca 100644
--- a/docs/patterns/category/design-implementation.md
+++ b/docs/patterns/category/design-implementation.md
@@ -12,19 +12,21 @@ pnp.series.title: Cloud Design Patterns
Good design encompasses factors such as consistency and coherence in component design and deployment, maintainability to simplify administration and development, and reusability to allow components and subsystems to be used in other applications and in other scenarios. Decisions made during the design and implementation phase have a huge impact on the quality and the total cost of ownership of cloud hosted applications and services.
-| Pattern | Summary |
-| ------- | ------- |
-| [Ambassador](../ambassador.md) | Create helper services that send network requests on behalf of a consumer service or application. |
-| [Anti-Corruption Layer](../anti-corruption-layer.md) | Implement a façade or adapter layer between a modern application and a legacy system. |
-| [Backends for Frontends](../backends-for-frontends.md) | Create separate backend services to be consumed by specific frontend applications or interfaces. |
-| [CQRS](../cqrs.md) | Segregate operations that read data from operations that update data by using separate interfaces. |
-| [Compute Resource Consolidation](../compute-resource-consolidation.md) | Consolidate multiple tasks or operations into a single computational unit |
-| [External Configuration Store](../external-configuration-store.md) | Move configuration information out of the application deployment package to a centralized location. |
-| [Gateway Aggregation](../gateway-aggregation.md) | Use a gateway to aggregate multiple individual requests into a single request. |
-| [Gateway Offloading](../gateway-offloading.md) | Offload shared or specialized service functionality to a gateway proxy. |
-| [Gateway Routing](../gateway-routing.md) | Route requests to multiple services using a single endpoint. |
-| [Leader Election](../leader-election.md) | Coordinate the actions performed by a collection of collaborating task instances in a distributed application by electing one instance as the leader that assumes responsibility for managing the other instances. |
-| [Pipes and Filters](../pipes-and-filters.md) | Break down a task that performs complex processing into a series of separate elements that can be reused. |
-| [Sidecar](../sidecar.md) | Deploy components of an application into a separate process or container to provide isolation and encapsulation. |
-| [Static Content Hosting](../static-content-hosting.md) | Deploy static content to a cloud-based storage service that can deliver them directly to the client. |
-| [Strangler](../strangler.md) | Incrementally migrate a legacy system by gradually replacing specific pieces of functionality with new applications and services. |
\ No newline at end of file
+
+| Pattern | Summary |
+|------------------------------------------------------------------------|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
+| [Ambassador](../ambassador.md) | Create helper services that send network requests on behalf of a consumer service or application. |
+| [Anti-Corruption Layer](../anti-corruption-layer.md) | Implement a façade or adapter layer between a modern application and a legacy system. |
+| [Backends for Frontends](../backends-for-frontends.md) | Create separate backend services to be consumed by specific frontend applications or interfaces. |
+| [CQRS](../cqrs.md) | Segregate operations that read data from operations that update data by using separate interfaces. |
+| [Compute Resource Consolidation](../compute-resource-consolidation.md) | Consolidate multiple tasks or operations into a single computational unit |
+| [External Configuration Store](../external-configuration-store.md) | Move configuration information out of the application deployment package to a centralized location. |
+| [Gateway Aggregation](../gateway-aggregation.md) | Use a gateway to aggregate multiple individual requests into a single request. |
+| [Gateway Offloading](../gateway-offloading.md) | Offload shared or specialized service functionality to a gateway proxy. |
+| [Gateway Routing](../gateway-routing.md) | Route requests to multiple services using a single endpoint. |
+| [Leader Election](../leader-election.md) | Coordinate the actions performed by a collection of collaborating task instances in a distributed application by electing one instance as the leader that assumes responsibility for managing the other instances. |
+| [Pipes and Filters](../pipes-and-filters.md) | Break down a task that performs complex processing into a series of separate elements that can be reused. |
+| [Sidecar](../sidecar.md) | Deploy components of an application into a separate process or container to provide isolation and encapsulation. |
+| [Static Content Hosting](../static-content-hosting.md) | Deploy static content to a cloud-based storage service that can deliver them directly to the client. |
+| [Strangler](../strangler.md) | Incrementally migrate a legacy system by gradually replacing specific pieces of functionality with new applications and services. |
+
diff --git a/docs/patterns/category/management-monitoring.md b/docs/patterns/category/management-monitoring.md
index e0aefc02e9e..1a6f3844ef1 100644
--- a/docs/patterns/category/management-monitoring.md
+++ b/docs/patterns/category/management-monitoring.md
@@ -12,14 +12,16 @@ pnp.series.title: Cloud Design Patterns
Cloud applications run in a remote datacenter where you do not have full control of the infrastructure or, in some cases, the operating system. This can make management and monitoring more difficult than an on-premises deployment. Applications must expose runtime information that administrators and operators can use to manage and monitor the system, as well as support changing business requirements and customization without requiring the application to be stopped or redeployed.
-| Pattern | Summary |
-| ------- | ------- |
-| [Ambassador](../ambassador.md) | Create helper services that send network requests on behalf of a consumer service or application. |
-| [Anti-Corruption Layer](../anti-corruption-layer.md) | Implement a façade or adapter layer between a modern application and a legacy system. |
-| [External Configuration Store](../external-configuration-store.md) | Move configuration information out of the application deployment package to a centralized location. |
-| [Gateway Aggregation](../gateway-aggregation.md) | Use a gateway to aggregate multiple individual requests into a single request. |
-| [Gateway Offloading](../gateway-offloading.md) | Offload shared or specialized service functionality to a gateway proxy. |
-| [Gateway Routing](../gateway-routing.md) | Route requests to multiple services using a single endpoint. |
-| [Health Endpoint Monitoring](../health-endpoint-monitoring.md) | Implement functional checks in an application that external tools can access through exposed endpoints at regular intervals. |
-| [Sidecar](../sidecar.md) | Deploy components of an application into a separate process or container to provide isolation and encapsulation. |
-| [Strangler](../strangler.md) | Incrementally migrate a legacy system by gradually replacing specific pieces of functionality with new applications and services. |
+
+| Pattern | Summary |
+|--------------------------------------------------------------------|-----------------------------------------------------------------------------------------------------------------------------------|
+| [Ambassador](../ambassador.md) | Create helper services that send network requests on behalf of a consumer service or application. |
+| [Anti-Corruption Layer](../anti-corruption-layer.md) | Implement a façade or adapter layer between a modern application and a legacy system. |
+| [External Configuration Store](../external-configuration-store.md) | Move configuration information out of the application deployment package to a centralized location. |
+| [Gateway Aggregation](../gateway-aggregation.md) | Use a gateway to aggregate multiple individual requests into a single request. |
+| [Gateway Offloading](../gateway-offloading.md) | Offload shared or specialized service functionality to a gateway proxy. |
+| [Gateway Routing](../gateway-routing.md) | Route requests to multiple services using a single endpoint. |
+| [Health Endpoint Monitoring](../health-endpoint-monitoring.md) | Implement functional checks in an application that external tools can access through exposed endpoints at regular intervals. |
+| [Sidecar](../sidecar.md) | Deploy components of an application into a separate process or container to provide isolation and encapsulation. |
+| [Strangler](../strangler.md) | Incrementally migrate a legacy system by gradually replacing specific pieces of functionality with new applications and services. |
+
diff --git a/docs/patterns/category/messaging.md b/docs/patterns/category/messaging.md
index 0166aec3a87..34e5dd4bcf4 100644
--- a/docs/patterns/category/messaging.md
+++ b/docs/patterns/category/messaging.md
@@ -14,10 +14,12 @@ pnp.series.title: Cloud Design Patterns
The distributed nature of cloud applications requires a messaging infrastructure that connects the components and services, ideally in a loosely coupled manner in order to maximize scalability. Asynchronous messaging is widely used, and provides many benefits, but also brings challenges such as the ordering of messages, poison message management, idempotency, and more.
-| Pattern | Summary |
-| ------- | ------- |
-| [Competing Consumers](../competing-consumers.md) | Enable multiple concurrent consumers to process messages received on the same messaging channel. |
-| [Pipes and Filters](../pipes-and-filters.md) | Break down a task that performs complex processing into a series of separate elements that can be reused. |
-| [Priority Queue](../priority-queue.md) | Prioritize requests sent to services so that requests with a higher priority are received and processed more quickly than those with a lower priority. |
-| [Queue-Based Load Leveling](../queue-based-load-leveling.md) | Use a queue that acts as a buffer between a task and a service that it invokes in order to smooth intermittent heavy loads. |
-| [Scheduler Agent Supervisor](../scheduler-agent-supervisor.md) | Coordinate a set of actions across a distributed set of services and other remote resources. |
\ No newline at end of file
+
+| Pattern | Summary |
+|----------------------------------------------------------------|--------------------------------------------------------------------------------------------------------------------------------------------------------|
+| [Competing Consumers](../competing-consumers.md) | Enable multiple concurrent consumers to process messages received on the same messaging channel. |
+| [Pipes and Filters](../pipes-and-filters.md) | Break down a task that performs complex processing into a series of separate elements that can be reused. |
+| [Priority Queue](../priority-queue.md) | Prioritize requests sent to services so that requests with a higher priority are received and processed more quickly than those with a lower priority. |
+| [Queue-Based Load Leveling](../queue-based-load-leveling.md) | Use a queue that acts as a buffer between a task and a service that it invokes in order to smooth intermittent heavy loads. |
+| [Scheduler Agent Supervisor](../scheduler-agent-supervisor.md) | Coordinate a set of actions across a distributed set of services and other remote resources. |
+
diff --git a/docs/patterns/category/performance-scalability.md b/docs/patterns/category/performance-scalability.md
index b366ba7becf..95501851f6a 100644
--- a/docs/patterns/category/performance-scalability.md
+++ b/docs/patterns/category/performance-scalability.md
@@ -14,15 +14,17 @@ pnp.series.title: Cloud Design Patterns
Performance is an indication of the responsiveness of a system to execute any action within a given time interval, while scalability is the ability of a system either to handle increases in load without impact on performance or for the available resources to be readily increased. Cloud applications typically encounter variable workloads and peaks in activity. Predicting these, especially in a multi-tenant scenario, is almost impossible. Instead, applications should be able to scale out within limits to meet peaks in demand, and scale in when demand decreases. Scalability concerns not just compute instances, but other elements such as data storage, messaging infrastructure, and more.
-| Pattern | Summary |
-| ------- | ------- |
-| [Cache-Aside](../cache-aside.md) | Load data on demand into a cache from a data store |
-| [CQRS](../cqrs.md) | Segregate operations that read data from operations that update data by using separate interfaces. |
-| [Event Sourcing](../event-sourcing.md) | Use an append-only store to record the full series of events that describe actions taken on data in a domain. |
-| [Index Table](../index-table.md) | Create indexes over the fields in data stores that are frequently referenced by queries. |
-| [Materialized View](../materialized-view.md) | Generate prepopulated views over the data in one or more data stores when the data isn't ideally formatted for required query operations. |
-| [Priority Queue](../priority-queue.md) | Prioritize requests sent to services so that requests with a higher priority are received and processed more quickly than those with a lower priority. |
-| [Queue-Based Load Leveling](../queue-based-load-leveling.md) | Use a queue that acts as a buffer between a task and a service that it invokes in order to smooth intermittent heavy loads. |
-| [Sharding](../sharding.md) | Divide a data store into a set of horizontal partitions or shards. |
-| [Static Content Hosting](../static-content-hosting.md) | Deploy static content to a cloud-based storage service that can deliver them directly to the client. |
-| [Throttling](../throttling.md) | Control the consumption of resources used by an instance of an application, an individual tenant, or an entire service. |
\ No newline at end of file
+
+| Pattern | Summary |
+|--------------------------------------------------------------|--------------------------------------------------------------------------------------------------------------------------------------------------------|
+| [Cache-Aside](../cache-aside.md) | Load data on demand into a cache from a data store |
+| [CQRS](../cqrs.md) | Segregate operations that read data from operations that update data by using separate interfaces. |
+| [Event Sourcing](../event-sourcing.md) | Use an append-only store to record the full series of events that describe actions taken on data in a domain. |
+| [Index Table](../index-table.md) | Create indexes over the fields in data stores that are frequently referenced by queries. |
+| [Materialized View](../materialized-view.md) | Generate prepopulated views over the data in one or more data stores when the data isn't ideally formatted for required query operations. |
+| [Priority Queue](../priority-queue.md) | Prioritize requests sent to services so that requests with a higher priority are received and processed more quickly than those with a lower priority. |
+| [Queue-Based Load Leveling](../queue-based-load-leveling.md) | Use a queue that acts as a buffer between a task and a service that it invokes in order to smooth intermittent heavy loads. |
+| [Sharding](../sharding.md) | Divide a data store into a set of horizontal partitions or shards. |
+| [Static Content Hosting](../static-content-hosting.md) | Deploy static content to a cloud-based storage service that can deliver them directly to the client. |
+| [Throttling](../throttling.md) | Control the consumption of resources used by an instance of an application, an individual tenant, or an entire service. |
+
diff --git a/docs/patterns/category/resiliency.md b/docs/patterns/category/resiliency.md
index a91a6cf88c5..15d4309f32b 100644
--- a/docs/patterns/category/resiliency.md
+++ b/docs/patterns/category/resiliency.md
@@ -12,13 +12,15 @@ pnp.series.title: Cloud Design Patterns
Resiliency is the ability of a system to gracefully handle and recover from failures. The nature of cloud hosting, where applications are often multi-tenant, use shared platform services, compete for resources and bandwidth, communicate over the Internet, and run on commodity hardware, means there is an increased likelihood that both transient and more permanent faults will arise. Detecting failures, and recovering quickly and efficiently, is necessary to maintain resiliency.
-| Pattern | Summary |
-| ------- | ------- |
-| [Bulkhead](../bulkhead.md) | Isolate elements of an application into pools so that if one fails, the others will continue to function. |
-| [Circuit Breaker](../circuit-breaker.md) | Handle faults that might take a variable amount of time to fix when connecting to a remote service or resource. |
-| [Compensating Transaction](../compensating-transaction.md) | Undo the work performed by a series of steps, which together define an eventually consistent operation. |
-| [Health Endpoint Monitoring](../health-endpoint-monitoring.md) | Implement functional checks in an application that external tools can access through exposed endpoints at regular intervals. |
-| [Leader Election](../leader-election.md) | Coordinate the actions performed by a collection of collaborating task instances in a distributed application by electing one instance as the leader that assumes responsibility for managing the other instances. |
-| [Queue-Based Load Leveling](../queue-based-load-leveling.md) | Use a queue that acts as a buffer between a task and a service that it invokes in order to smooth intermittent heavy loads. |
-| [Retry](../retry.md) | Enable an application to handle anticipated, temporary failures when it tries to connect to a service or network resource by transparently retrying an operation that's previously failed. |
-| [Scheduler Agent Supervisor](../scheduler-agent-supervisor.md) | Coordinate a set of actions across a distributed set of services and other remote resources. |
\ No newline at end of file
+
+| Pattern | Summary |
+|----------------------------------------------------------------|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
+| [Bulkhead](../bulkhead.md) | Isolate elements of an application into pools so that if one fails, the others will continue to function. |
+| [Circuit Breaker](../circuit-breaker.md) | Handle faults that might take a variable amount of time to fix when connecting to a remote service or resource. |
+| [Compensating Transaction](../compensating-transaction.md) | Undo the work performed by a series of steps, which together define an eventually consistent operation. |
+| [Health Endpoint Monitoring](../health-endpoint-monitoring.md) | Implement functional checks in an application that external tools can access through exposed endpoints at regular intervals. |
+| [Leader Election](../leader-election.md) | Coordinate the actions performed by a collection of collaborating task instances in a distributed application by electing one instance as the leader that assumes responsibility for managing the other instances. |
+| [Queue-Based Load Leveling](../queue-based-load-leveling.md) | Use a queue that acts as a buffer between a task and a service that it invokes in order to smooth intermittent heavy loads. |
+| [Retry](../retry.md) | Enable an application to handle anticipated, temporary failures when it tries to connect to a service or network resource by transparently retrying an operation that's previously failed. |
+| [Scheduler Agent Supervisor](../scheduler-agent-supervisor.md) | Coordinate a set of actions across a distributed set of services and other remote resources. |
+
diff --git a/docs/patterns/category/security.md b/docs/patterns/category/security.md
index 37f334c6d47..61e8521d202 100644
--- a/docs/patterns/category/security.md
+++ b/docs/patterns/category/security.md
@@ -14,8 +14,10 @@ pnp.series.title: Cloud Design Patterns
Security is the capability of a system to prevent malicious or accidental actions outside of the designed usage, and to prevent disclosure or loss of information. Cloud applications are exposed on the Internet outside trusted on-premises boundaries, are often open to the public, and may serve untrusted users. Applications must be designed and deployed in a way that protects them from malicious attacks, restricts access to only approved users, and protects sensitive data.
-| Pattern | Summary |
-| ------- | ------- |
-| [Federated Identity](../federated-identity.md) | Delegate authentication to an external identity provider. |
-| [Gatekeeper](../gatekeeper.md) | Protect applications and services by using a dedicated host instance that acts as a broker between clients and the application or service, validates and sanitizes requests, and passes requests and data between them. |
-| [Valet Key](../valet-key.md) | Use a token or key that provides clients with restricted direct access to a specific resource or service. |
\ No newline at end of file
+
+| Pattern | Summary |
+|------------------------------------------------|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
+| [Federated Identity](../federated-identity.md) | Delegate authentication to an external identity provider. |
+| [Gatekeeper](../gatekeeper.md) | Protect applications and services by using a dedicated host instance that acts as a broker between clients and the application or service, validates and sanitizes requests, and passes requests and data between them. |
+| [Valet Key](../valet-key.md) | Use a token or key that provides clients with restricted direct access to a specific resource or service. |
+
diff --git a/docs/patterns/health-endpoint-monitoring.md b/docs/patterns/health-endpoint-monitoring.md
index 564e795114e..6a5fa2d7d55 100644
--- a/docs/patterns/health-endpoint-monitoring.md
+++ b/docs/patterns/health-endpoint-monitoring.md
@@ -76,9 +76,9 @@ How to configure security for the monitoring endpoints to protect them from publ
- Secure the endpoint by requiring authentication. You can do this by using an authentication security key in the request header or by passing credentials with the request, provided that the monitoring service or tool supports authentication.
- - Use an obscure or hidden endpoint. For example, expose the endpoint on a different IP address to that used by the default application URL, configure the endpoint on a nonstandard HTTP port, and/or use a complex path to the test page. You can usually specify additional endpoint addresses and ports in the application configuration, and add entries for these endpoints to the DNS server if required to avoid having to specify the IP address directly.
+ - Use an obscure or hidden endpoint. For example, expose the endpoint on a different IP address to that used by the default application URL, configure the endpoint on a nonstandard HTTP port, and/or use a complex path to the test page. You can usually specify additional endpoint addresses and ports in the application configuration, and add entries for these endpoints to the DNS server if required to avoid having to specify the IP address directly.
- - Expose a method on an endpoint that accepts a parameter such as a key value or an operation mode value. Depending on the value supplied for this parameter, when a request is received the code can perform a specific test or set of tests, or return a 404 (Not Found) error if the parameter value isn't recognized. The recognized parameter values could be set in the application configuration.
+ - Expose a method on an endpoint that accepts a parameter such as a key value or an operation mode value. Depending on the value supplied for this parameter, when a request is received the code can perform a specific test or set of tests, or return a 404 (Not Found) error if the parameter value isn't recognized. The recognized parameter values could be set in the application configuration. A minimal sketch of this approach appears after the note below.
> DoS attacks are likely to have less impact on a separate endpoint that performs basic functional tests without compromising the operation of the application. Ideally, avoid using a test that might expose sensitive information. If you must return information that might be useful to an attacker, consider how you'll protect the endpoint and the data from unauthorized access. In this case just relying on obscurity isn't enough. You should also consider using an HTTPS connection and encrypting any sensitive data, although this will increase the load on the server.
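
As a rough illustration of the parameter-based option above, here is a minimal ASP.NET Core sketch (not the sample that accompanies this pattern; the route, configuration key, and controller name are illustrative):

```csharp
using System;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Extensions.Configuration;

public class HealthCheckController : Controller
{
    private readonly IConfiguration _configuration;

    public HealthCheckController(IConfiguration configuration)
    {
        _configuration = configuration;
    }

    // GET /healthcheck/{key}
    [HttpGet("healthcheck/{key}")]
    public IActionResult Get(string key)
    {
        // The expected key is read from application configuration rather than hard-coded.
        var expectedKey = _configuration["HealthCheck:Key"];
        if (!string.Equals(key, expectedKey, StringComparison.Ordinal))
        {
            // Unrecognized key: return 404 so the endpoint looks like any other missing path.
            return NotFound();
        }

        // Perform a basic functional test here, such as a lightweight query
        // against the data store, and report the outcome.
        return Ok("Healthy");
    }
}
```

Returning 404 for an unrecognized key keeps the endpoint indistinguishable from a nonexistent path, which complements the authentication and obscurity options described above.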
diff --git a/docs/patterns/index.liquid.md b/docs/patterns/index.liquid.md
index 97cdd2f6e1b..06fa84404d1 100644
--- a/docs/patterns/index.liquid.md
+++ b/docs/patterns/index.liquid.md
@@ -16,7 +16,7 @@ Each pattern describes the problem that the pattern addresses, considerations fo
{%- for category in categories %}
-
- {% include 'pattern-category-card' %}
+ {% include 'pattern-category-card' %}
{%- endfor %}
@@ -24,7 +24,9 @@ Each pattern describes the problem that the pattern addresses, considerations fo
## Catalog of patterns
| Pattern | Summary |
-| ------- | ------- |
+|---------|---------|
+| | |
+
{%- for pattern in patterns %}
| [{{ pattern.title }}](./{{ pattern.file }}) | {{ pattern.description }} |
{%- endfor %}
\ No newline at end of file
diff --git a/docs/patterns/index.md b/docs/patterns/index.md
index 99c25bdb46c..5bb6654ed91 100644
--- a/docs/patterns/index.md
+++ b/docs/patterns/index.md
@@ -72,37 +72,38 @@ Each pattern describes the problem that the pattern addresses, considerations fo
## Catalog of patterns
-| Pattern | Summary |
-| ------- | ------- |
-| [Ambassador](./ambassador.md) | Create helper services that send network requests on behalf of a consumer service or application. |
-| [Anti-Corruption Layer](./anti-corruption-layer.md) | Implement a façade or adapter layer between a modern application and a legacy system. |
-| [Backends for Frontends](./backends-for-frontends.md) | Create separate backend services to be consumed by specific frontend applications or interfaces. |
-| [Bulkhead](./bulkhead.md) | Isolate elements of an application into pools so that if one fails, the others will continue to function. |
-| [Cache-Aside](./cache-aside.md) | Load data on demand into a cache from a data store |
-| [Circuit Breaker](./circuit-breaker.md) | Handle faults that might take a variable amount of time to fix when connecting to a remote service or resource. |
-| [CQRS](./cqrs.md) | Segregate operations that read data from operations that update data by using separate interfaces. |
-| [Compensating Transaction](./compensating-transaction.md) | Undo the work performed by a series of steps, which together define an eventually consistent operation. |
-| [Competing Consumers](./competing-consumers.md) | Enable multiple concurrent consumers to process messages received on the same messaging channel. |
-| [Compute Resource Consolidation](./compute-resource-consolidation.md) | Consolidate multiple tasks or operations into a single computational unit |
-| [Event Sourcing](./event-sourcing.md) | Use an append-only store to record the full series of events that describe actions taken on data in a domain. |
-| [External Configuration Store](./external-configuration-store.md) | Move configuration information out of the application deployment package to a centralized location. |
-| [Federated Identity](./federated-identity.md) | Delegate authentication to an external identity provider. |
-| [Gatekeeper](./gatekeeper.md) | Protect applications and services by using a dedicated host instance that acts as a broker between clients and the application or service, validates and sanitizes requests, and passes requests and data between them. |
-| [Gateway Aggregation](./gateway-aggregation.md) | Use a gateway to aggregate multiple individual requests into a single request. |
-| [Gateway Offloading](./gateway-offloading.md) | Offload shared or specialized service functionality to a gateway proxy. |
-| [Gateway Routing](./gateway-routing.md) | Route requests to multiple services using a single endpoint. |
-| [Health Endpoint Monitoring](./health-endpoint-monitoring.md) | Implement functional checks in an application that external tools can access through exposed endpoints at regular intervals. |
-| [Index Table](./index-table.md) | Create indexes over the fields in data stores that are frequently referenced by queries. |
-| [Leader Election](./leader-election.md) | Coordinate the actions performed by a collection of collaborating task instances in a distributed application by electing one instance as the leader that assumes responsibility for managing the other instances. |
-| [Materialized View](./materialized-view.md) | Generate prepopulated views over the data in one or more data stores when the data isn't ideally formatted for required query operations. |
-| [Pipes and Filters](./pipes-and-filters.md) | Break down a task that performs complex processing into a series of separate elements that can be reused. |
-| [Priority Queue](./priority-queue.md) | Prioritize requests sent to services so that requests with a higher priority are received and processed more quickly than those with a lower priority. |
-| [Queue-Based Load Leveling](./queue-based-load-leveling.md) | Use a queue that acts as a buffer between a task and a service that it invokes in order to smooth intermittent heavy loads. |
-| [Retry](./retry.md) | Enable an application to handle anticipated, temporary failures when it tries to connect to a service or network resource by transparently retrying an operation that's previously failed. |
-| [Scheduler Agent Supervisor](./scheduler-agent-supervisor.md) | Coordinate a set of actions across a distributed set of services and other remote resources. |
-| [Sharding](./sharding.md) | Divide a data store into a set of horizontal partitions or shards. |
-| [Sidecar](./sidecar.md) | Deploy components of an application into a separate process or container to provide isolation and encapsulation. |
-| [Static Content Hosting](./static-content-hosting.md) | Deploy static content to a cloud-based storage service that can deliver them directly to the client. |
-| [Strangler](./strangler.md) | Incrementally migrate a legacy system by gradually replacing specific pieces of functionality with new applications and services. |
-| [Throttling](./throttling.md) | Control the consumption of resources used by an instance of an application, an individual tenant, or an entire service. |
-| [Valet Key](./valet-key.md) | Use a token or key that provides clients with restricted direct access to a specific resource or service. |
\ No newline at end of file
+| Pattern | Summary |
+|-----------------------------------------------------------------------|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
+| [Ambassador](./ambassador.md) | Create helper services that send network requests on behalf of a consumer service or application. |
+| [Anti-Corruption Layer](./anti-corruption-layer.md) | Implement a façade or adapter layer between a modern application and a legacy system. |
+| [Backends for Frontends](./backends-for-frontends.md) | Create separate backend services to be consumed by specific frontend applications or interfaces. |
+| [Bulkhead](./bulkhead.md) | Isolate elements of an application into pools so that if one fails, the others will continue to function. |
+| [Cache-Aside](./cache-aside.md) | Load data on demand into a cache from a data store |
+| [Circuit Breaker](./circuit-breaker.md) | Handle faults that might take a variable amount of time to fix when connecting to a remote service or resource. |
+| [CQRS](./cqrs.md) | Segregate operations that read data from operations that update data by using separate interfaces. |
+| [Compensating Transaction](./compensating-transaction.md) | Undo the work performed by a series of steps, which together define an eventually consistent operation. |
+| [Competing Consumers](./competing-consumers.md) | Enable multiple concurrent consumers to process messages received on the same messaging channel. |
+| [Compute Resource Consolidation](./compute-resource-consolidation.md) | Consolidate multiple tasks or operations into a single computational unit |
+| [Event Sourcing](./event-sourcing.md) | Use an append-only store to record the full series of events that describe actions taken on data in a domain. |
+| [External Configuration Store](./external-configuration-store.md) | Move configuration information out of the application deployment package to a centralized location. |
+| [Federated Identity](./federated-identity.md) | Delegate authentication to an external identity provider. |
+| [Gatekeeper](./gatekeeper.md) | Protect applications and services by using a dedicated host instance that acts as a broker between clients and the application or service, validates and sanitizes requests, and passes requests and data between them. |
+| [Gateway Aggregation](./gateway-aggregation.md) | Use a gateway to aggregate multiple individual requests into a single request. |
+| [Gateway Offloading](./gateway-offloading.md) | Offload shared or specialized service functionality to a gateway proxy. |
+| [Gateway Routing](./gateway-routing.md) | Route requests to multiple services using a single endpoint. |
+| [Health Endpoint Monitoring](./health-endpoint-monitoring.md) | Implement functional checks in an application that external tools can access through exposed endpoints at regular intervals. |
+| [Index Table](./index-table.md) | Create indexes over the fields in data stores that are frequently referenced by queries. |
+| [Leader Election](./leader-election.md) | Coordinate the actions performed by a collection of collaborating task instances in a distributed application by electing one instance as the leader that assumes responsibility for managing the other instances. |
+| [Materialized View](./materialized-view.md) | Generate prepopulated views over the data in one or more data stores when the data isn't ideally formatted for required query operations. |
+| [Pipes and Filters](./pipes-and-filters.md) | Break down a task that performs complex processing into a series of separate elements that can be reused. |
+| [Priority Queue](./priority-queue.md) | Prioritize requests sent to services so that requests with a higher priority are received and processed more quickly than those with a lower priority. |
+| [Queue-Based Load Leveling](./queue-based-load-leveling.md) | Use a queue that acts as a buffer between a task and a service that it invokes in order to smooth intermittent heavy loads. |
+| [Retry](./retry.md) | Enable an application to handle anticipated, temporary failures when it tries to connect to a service or network resource by transparently retrying an operation that's previously failed. |
+| [Scheduler Agent Supervisor](./scheduler-agent-supervisor.md) | Coordinate a set of actions across a distributed set of services and other remote resources. |
+| [Sharding](./sharding.md) | Divide a data store into a set of horizontal partitions or shards. |
+| [Sidecar](./sidecar.md) | Deploy components of an application into a separate process or container to provide isolation and encapsulation. |
+| [Static Content Hosting](./static-content-hosting.md) | Deploy static content to a cloud-based storage service that can deliver them directly to the client. |
+| [Strangler](./strangler.md) | Incrementally migrate a legacy system by gradually replacing specific pieces of functionality with new applications and services. |
+| [Throttling](./throttling.md) | Control the consumption of resources used by an instance of an application, an individual tenant, or an entire service. |
+| [Valet Key](./valet-key.md) | Use a token or key that provides clients with restricted direct access to a specific resource or service. |
+
diff --git a/docs/patterns/leader-election.md b/docs/patterns/leader-election.md
index 63e309e198d..bb257977ec4 100644
--- a/docs/patterns/leader-election.md
+++ b/docs/patterns/leader-election.md
@@ -66,9 +66,9 @@ This pattern might not be useful if:
The DistributedMutex project in the LeaderElection solution (a sample that demonstrates this pattern is available on [GitHub](https://github.com/mspnp/cloud-design-patterns/tree/master/leader-election)) shows how to use a lease on an Azure Storage blob to provide a mechanism for implementing a shared, distributed mutex. This mutex can be used to elect a leader among a group of role instances in an Azure cloud service. The first role instance to acquire the lease is elected the leader, and remains the leader until it releases the lease or isn't able to renew the lease. Other role instances can continue to monitor the blob lease in case the leader is no longer available.
> A blob lease is an exclusive write lock over a blob. A single blob can be the subject of only one lease at any point in time. A role instance can request a lease over a specified blob, and it'll be granted the lease if no other role instance holds a lease over the same blob. Otherwise the request will throw an exception.
-
+>
> To avoid a faulted role instance retaining the lease indefinitely, specify a lifetime for the lease. When this expires, the lease becomes available. However, while a role instance holds the lease it can request that the lease is renewed, and it'll be granted the lease for a further period of time. The role instance can continually repeat this process if it wants to retain the lease.
-For more information on how to lease a blob, see [Lease Blob (REST API)](https://msdn.microsoft.com/library/azure/ee691972.aspx).
+> For more information on how to lease a blob, see [Lease Blob (REST API)](https://msdn.microsoft.com/library/azure/ee691972.aspx).
The `BlobDistributedMutex` class in the C# example below contains the `RunTaskWhenMutexAquired` method that enables a role instance to attempt to acquire a lease over a specified blob. The details of the blob (the name, container, and storage account) are passed to the constructor in a `BlobSettings` object when the `BlobDistributedMutex` object is created (this object is a simple struct that is included in the sample code). The constructor also accepts a `Task` that references the code that the role instance should run if it successfully acquires the lease over the blob and is elected the leader. Note that the code that handles the low-level details of acquiring the lease is implemented in a separate helper class named `BlobLeaseManager`.
diff --git a/docs/patterns/pipes-and-filters.md b/docs/patterns/pipes-and-filters.md
index 066664a33c9..d204cc2304c 100644
--- a/docs/patterns/pipes-and-filters.md
+++ b/docs/patterns/pipes-and-filters.md
@@ -269,7 +269,7 @@ public class FinalReceiverRoleEntry : RoleEntryPoint
}
```
-##Related patterns and guidance
+## Related patterns and guidance
The following patterns and guidance might also be relevant when implementing this pattern:
- A sample that demonstrates this pattern is available on [GitHub](https://github.com/mspnp/cloud-design-patterns/tree/master/pipes-and-filters).
diff --git a/docs/reference-architectures/app-service-web-app/basic-web-app.md b/docs/reference-architectures/app-service-web-app/basic-web-app.md
index 74d0814138f..fa105973f8c 100644
--- a/docs/reference-architectures/app-service-web-app/basic-web-app.md
+++ b/docs/reference-architectures/app-service-web-app/basic-web-app.md
@@ -154,7 +154,7 @@ Tips for troubleshooting your application:
* Use the [troubleshoot blade][troubleshoot-blade] in the Azure portal to find solutions to common problems.
* Enable [log streaming][web-app-log-stream] to see logging information in near-real time.
-* The [Kudu dashboard][kudu] has several tools for monitoring and debugging your application. For more information, see [Azure Websites online tools you should know about][kudu] (blog post). You can reach the Kudu dashboard from the Azure portal. Open the blade for your app and click **Tools**, then click **Kudu**.
+* The [Kudu dashboard][kudu] has several tools for monitoring and debugging your application. For more information, see [Azure Websites online tools you should know about][kudu] (blog post). You can reach the Kudu dashboard from the Azure portal. Open the blade for your app and click Tools, then click Kudu.
* If you use Visual Studio, see the article [Troubleshoot a web app in Azure App Service using Visual Studio][troubleshoot-web-app] for debugging and troubleshooting tips.
## Security considerations
diff --git a/docs/reference-architectures/app-service-web-app/scalable-web-app.md b/docs/reference-architectures/app-service-web-app/scalable-web-app.md
index 74ff639b852..81499b21e4e 100644
--- a/docs/reference-architectures/app-service-web-app/scalable-web-app.md
+++ b/docs/reference-architectures/app-service-web-app/scalable-web-app.md
@@ -26,7 +26,7 @@ This architecture builds on the one shown in [Basic web application][basic-web-a
* **WebJob**. Use [Azure WebJobs][webjobs] to run long-running tasks in the background. WebJobs can run on a schedule, continuously, or in response to a trigger, such as putting a message on a queue. A WebJob runs as a background process in the context of an App Service app.
* **Queue**. In the architecture shown here, the application queues background tasks by putting a message onto an [Azure Queue storage][queue-storage] queue. The message triggers a function in the WebJob (see the sketch after this list). Alternatively, you can use Service Bus queues. For a comparison, see [Azure Queues and Service Bus queues - compared and contrasted][queues-compared].
* **Cache**. Store semi-static data in [Azure Redis Cache][azure-redis].
-* **CDN**. Use [Azure Content Delivery Network][azure-cdn] (CDN) to cache publicly available content for lower latency and faster delivery of content.
+* CDN. Use [Azure Content Delivery Network][azure-cdn] (CDN) to cache publicly available content for lower latency and faster delivery of content.
* **Data storage**. Use [Azure SQL Database][sql-db] for relational data. For non-relational data, consider a NoSQL store, such as [Cosmos DB][cosmosdb].
* **Azure Search**. Use [Azure Search][azure-search] to add search functionality such as search suggestions, fuzzy search, and language-specific search. Azure Search is typically used in conjunction with another data store, especially if the primary data store requires strict consistency. In this approach, store authoritative data in the other data store and the search index in Azure Search. Azure Search can also be used to consolidate a single search index from multiple data stores.
* **Email/SMS**. Use a third-party service such as SendGrid or Twilio to send email or SMS messages instead of building this functionality directly into the application.
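To make the WebJob and queue interaction above concrete, the hypothetical sketch below shows the web app handing work off by enqueuing a message, and a continuously running WebJob picking that message up through a `QueueTrigger` binding. It assumes the classic WebJobs SDK (`Microsoft.Azure.WebJobs` 2.x) and storage client packages, with an `AzureWebJobsStorage` connection string configured; the queue name and payload are placeholders rather than names used by this architecture.

```csharp
// Hypothetical sketch of the queue-triggered background work described above.
// Assumes the Microsoft.Azure.WebJobs (2.x) and WindowsAzure.Storage packages;
// the "background-tasks" queue name and the payload are illustrative.
using System.IO;
using Microsoft.Azure.WebJobs;
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Queue;

public class Functions
{
    // Invoked by the WebJobs runtime whenever a message lands on the queue.
    public static void ProcessQueueMessage([QueueTrigger("background-tasks")] string message, TextWriter log)
    {
        log.WriteLine("Processing background task: " + message);
        // Long-running work goes here, off the web app's request path.
    }
}

public class Program
{
    public static void Main()
    {
        // Continuous WebJob host: polls the queue and dispatches messages to Functions.
        var host = new JobHost();
        host.RunAndBlock();
    }
}

public static class BackgroundTaskQueue
{
    // Called from the web app to queue work for the WebJob.
    public static void Enqueue(string storageConnectionString, string payload)
    {
        CloudQueue queue = CloudStorageAccount.Parse(storageConnectionString)
            .CreateCloudQueueClient()
            .GetQueueReference("background-tasks");

        queue.CreateIfNotExists();
        queue.AddMessage(new CloudQueueMessage(payload));
    }
}
```

Because the queue sits between the web tier and the WebJob, the request path stays fast and the two sides can scale and fail independently.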
diff --git a/docs/reference-architectures/dmz/nva-ha.md b/docs/reference-architectures/dmz/nva-ha.md
index b968767d900..bb06d46ac4a 100644
--- a/docs/reference-architectures/dmz/nva-ha.md
+++ b/docs/reference-architectures/dmz/nva-ha.md
@@ -12,7 +12,7 @@ cardTitle: Deploy highly available network virtual appliances
This article shows how to deploy a set of network virtual appliances (NVAs) for high availability in Azure. An NVA is typically used to control the flow of network traffic from a perimeter network, also known as a DMZ, to other networks or subnets. To learn about implementing a DMZ in Azure, see [Microsoft cloud services and network security][cloud-security]. The article includes example architectures for ingress only, egress only, and both ingress and egress.
-**Prerequisites:** This article assumes a basic understanding of Azure networking, [Azure load balancers][lb-overview], and [user-defined routes][udr-overview] (UDRs).
+Prerequisites: This article assumes a basic understanding of Azure networking, [Azure load balancers][lb-overview], and [user-defined routes][udr-overview] (UDRs).
## Architecture Diagrams
diff --git a/docs/reference-architectures/dmz/secure-vnet-hybrid.md b/docs/reference-architectures/dmz/secure-vnet-hybrid.md
index fa39cce439d..b5d38e583e7 100644
--- a/docs/reference-architectures/dmz/secure-vnet-hybrid.md
+++ b/docs/reference-architectures/dmz/secure-vnet-hybrid.md
@@ -176,7 +176,7 @@ A deployment for a reference architecture that implements these recommendations
* For more information about managing network security with Azure, see [Microsoft cloud services and network security][cloud-services-network-security].
* For detailed information about protecting resources in Azure, see [Getting started with Microsoft Azure security][getting-started-with-azure-security].
* For additional details on addressing security concerns across an Azure gateway connection, see [Implementing a hybrid network architecture with Azure and on-premises VPN][guidance-vpn-gateway-security] and [Implementing a hybrid network architecture with Azure ExpressRoute][guidance-expressroute-security].
->
+ >
diff --git a/docs/reference-architectures/hybrid-networking/hub-spoke.md b/docs/reference-architectures/hybrid-networking/hub-spoke.md
index dee836bc7f4..bb2d64c5ec6 100644
--- a/docs/reference-architectures/hybrid-networking/hub-spoke.md
+++ b/docs/reference-architectures/hybrid-networking/hub-spoke.md
@@ -120,9 +120,9 @@ Before you can deploy the reference architecture to your own subscription, you m
4. From a command prompt, bash prompt, or PowerShell prompt, log in to your Azure account by using the command below, and follow the prompts.
- ```bash
- az login
- ```
+ ```bash
+ az login
+ ```
### Deploy the simulated on-premises datacenter using azbb
@@ -132,20 +132,20 @@ To deploy the simulated on-premises datacenter as an Azure VNet, follow these st
2. Open the `onprem.json` file and enter a username and password between the quotes in lines 36 and 37, as shown below, then save the file.
- ```bash
- "adminUsername": "XXX",
- "adminPassword": "YYY",
- ```
+ ```bash
+ "adminUsername": "XXX",
+ "adminPassword": "YYY",
+ ```
3. On line 38, for `osType`, type `Windows` or `Linux` to install either Windows Server 2016 Datacenter, or Ubuntu 16.04 as the operating system for the jumpbox.
4. Run `azbb` to deploy the simulated onprem environment as shown below.
- ```bash
- azbb -s -g onprem-vnet-rg - l -p onoprem.json --deploy
- ```
- > [!NOTE]
- > If you decide to use a different resource group name (other than `onprem-vnet-rg`), make sure to search for all parameter files that use that name and edit them to use your own resource group name.
+ ```bash
+   azbb -s -g onprem-vnet-rg -l -p onprem.json --deploy
+ ```
+ > [!NOTE]
+ > If you decide to use a different resource group name (other than `onprem-vnet-rg`), make sure to search for all parameter files that use that name and edit them to use your own resource group name.
5. Wait for the deployment to finish. This deployment creates a virtual network, a virtual machine, and a VPN gateway. The VPN gateway creation can take more than 40 minutes to complete.
@@ -155,26 +155,26 @@ To deploy the hub VNet, and connect to the simulated on-premises VNet created ab
1. Open the `hub-vnet.json` file and enter a username and password between the quotes in lines 39 and 40, as shown below.
- ```bash
- "adminUsername": "XXX",
- "adminPassword": "YYY",
- ```
+ ```bash
+ "adminUsername": "XXX",
+ "adminPassword": "YYY",
+ ```
2. On line 41, for `osType`, type `Windows` or `Linux` to install either Windows Server 2016 Datacenter, or Ubuntu 16.04 as the operating system for the jumpbox.
3. Enter a shared key between the quotes in line 72, as shown below, then save the file.
- ```bash
- "sharedKey": "",
- ```
+ ```bash
+ "sharedKey": "",
+ ```
4. Run `azbb` to deploy the hub VNet as shown below.
- ```bash
- azbb -s -g hub-vnet-rg - l -p hub-vnet.json --deploy
- ```
- > [!NOTE]
- > If you decide to use a different resource group name (other than `hub-vnet-rg`), make sure to search for all parameter files that use that name and edit them to use your own resource group name.
+ ```bash
+ azbb -s -g hub-vnet-rg - l -p hub-vnet.json --deploy
+ ```
+ > [!NOTE]
+ > If you decide to use a different resource group name (other than `hub-vnet-rg`), make sure to search for all parameter files that use that name and edit them to use your own resource group name.
5. Wait for the deployment to finish. This deployment creates a virtual network, a virtual machine, a VPN gateway, and a connection to the gateway created in the previous section. The VPN gateway creation can take more than 40 minutes to complete.
@@ -184,15 +184,15 @@ To test conectivity from the simulated on-premises environment to the hub VNet u
1. From the Azure portal, navigate to the `onprem-jb-rg` resource group, then click on the `jb-vm1` virtual machine resource.
-2. On the top left hand corner of your VM blade in the portal, click `Connect`, and follow the prompts to use remote desktop to connect to the VM. Make sure to use the username and password you specified in lines 36 and 37 in the `onprem.json` file.
+2. On the top left hand corner of your VM blade in the portal, click `Connect`, and follow the prompts to use remote desktop to connect to the VM. Make sure to use the username and password you specified in lines 36 and 37 in the `onprem.json` file.
3. Open a PowerShell console in the VM, and use the `Test-NetConnection` cmdlet to verify that you can connect to the hub jumpbox VM as shown below.
- ```powershell
- Test-NetConnection 10.0.0.68 -CommonTCPPort RDP
- ```
- > [!NOTE]
- > By default, Windows Server VMs do not allow ICMP responses in Azure. If you want to use `ping` to test connectivity, you need to enable ICMP traffic in the Windows Advanced Firewall for each VM.
+ ```powershell
+ Test-NetConnection 10.0.0.68 -CommonTCPPort RDP
+ ```
+ > [!NOTE]
+ > By default, Windows Server VMs do not allow ICMP responses in Azure. If you want to use `ping` to test connectivity, you need to enable ICMP traffic in the Windows Advanced Firewall for each VM.
To test connectivity from the simulated on-premises environment to the hub VNet using Linux VMs, perform the following steps:
@@ -202,17 +202,17 @@ To test conectivity from the simulated on-premises environment to the hub VNet u
3. From a Linux prompt, run `ssh` to connect to the simulated on-premises environment jumpbox with the information you copied in step 2 above, as shown below.
- ```bash
- ssh @
- ```
+ ```bash
+ ssh @
+ ```
4. Use the password you specified in line 37 in the `onprem.json` file to connect to the VM.
5. Use the `ping` command to test connectivity to the hub jumpbox, as shown below.
- ```bash
- ping 10.0.0.68
- ```
+ ```bash
+ ping 10.0.0.68
+ ```
### Azure spoke VNets
@@ -220,31 +220,31 @@ To deploy the spoke VNets, perform the following steps.
1. Open the `spoke1.json` file and enter a username and password between the quotes in lines 47 and 48, as shown below, then save the file.
- ```bash
- "adminUsername": "XXX",
- "adminPassword": "YYY",
- ```
+ ```bash
+ "adminUsername": "XXX",
+ "adminPassword": "YYY",
+ ```
2. On line 49, for `osType`, type `Windows` or `Linux` to install either Windows Server 2016 Datacenter, or Ubuntu 16.04 as the operating system for the jumpbox.
3. Run `azbb` to deploy the first spoke VNet environment as shown below.
- ```bash
- azbb -s -g spoke1-vnet-rg - l -p spoke1.json --deploy
- ```
+ ```bash
+ azbb -s -g spoke1-vnet-rg - l -p spoke1.json --deploy
+ ```
- > [!NOTE]
- > If you decide to use a different resource group name (other than `spoke1-vnet-rg`), make sure to search for all parameter files that use that name and edit them to use your own resource group name.
+ > [!NOTE]
+ > If you decide to use a different resource group name (other than `spoke1-vnet-rg`), make sure to search for all parameter files that use that name and edit them to use your own resource group name.
-3. Repeat step 1 above for file `spoke2.json`.
+4. Repeat step 1 above for file `spoke2.json`.
-4. Run `azbb` to deploy the second spoke VNet environment as shown below.
+5. Run `azbb` to deploy the second spoke VNet environment as shown below.
- ```bash
- azbb -s -g spoke2-vnet-rg - l -p spoke2.json --deploy
- ```
- > [!NOTE]
- > If you decide to use a different resource group name (other than `spoke2-vnet-rg`), make sure to search for all parameter files that use that name and edit them to use your own resource group name.
+ ```bash
+ azbb -s -g spoke2-vnet-rg - l -p spoke2.json --deploy
+ ```
+ > [!NOTE]
+ > If you decide to use a different resource group name (other than `spoke2-vnet-rg`), make sure to search for all parameter files that use that name and edit them to use your own resource group name.
### Azure hub VNet peering to spoke VNets
@@ -254,12 +254,12 @@ To create a peering connection from the hub VNet to the spoke VNets, perform the
2. Run `azbb` to deploy the hub VNet peering as shown below.
- ```bash
- azbb -s -g hub-vnet-rg - l -p hub-vnet-peering.json --deploy
- ```
+ ```bash
+ azbb -s -g hub-vnet-rg - l -p hub-vnet-peering.json --deploy
+ ```
- > [!NOTE]
- > If you decide to use a different resource group name (other than `hub-vnet-rg`), make sure to search for all parameter files that use that name and edit them to use your own resource group name.
+ > [!NOTE]
+ > If you decide to use a different resource group name (other than `hub-vnet-rg`), make sure to search for all parameter files that use that name and edit them to use your own resource group name.
### Test connectivity
@@ -267,14 +267,14 @@ To test conectivity from the simulated on-premises environment to the spoke VNet
1. From the Azure portal, navigate to the `onprem-jb-rg` resource group, then click on the `jb-vm1` virtual machine resource.
-2. On the top left hand corner of your VM blade in the portal, click `Connect`, and follow the prompts to use remote desktop to connect to the VM. Make sure to use the username and password you specified in lines 36 and 37 in the `onprem.json` file.
+2. On the top left hand corner of your VM blade in the portal, click `Connect`, and follow the prompts to use remote desktop to connect to the VM. Make sure to use the username and password you specified in lines 36 and 37 in the `onprem.json` file.
3. Open a PowerShell console in the VM, and use the `Test-NetConnection` cmdlet to verify that you can connect to the hub jumpbox VM as shown below.
- ```powershell
- Test-NetConnection 10.1.0.68 -CommonTCPPort RDP
- Test-NetConnection 10.2.0.68 -CommonTCPPort RDP
- ```
+ ```powershell
+ Test-NetConnection 10.1.0.68 -CommonTCPPort RDP
+ Test-NetConnection 10.2.0.68 -CommonTCPPort RDP
+ ```
To test connectivity from the simulated on-premises environment to the spoke VNets using Linux VMs, perform the following steps:
@@ -284,18 +284,18 @@ To test conectivity from the simulated on-premises environment to the spoke VNet
3. From a Linux prompt, run `ssh` to connect to the simulated on-premises environment jumpbox with the information you copied in step 2 above, as shown below.
- ```bash
- ssh @
- ```
+ ```bash
+ ssh @
+ ```
-5. Use the password you specified in line 37 in the `onprem.json` file to the connect to the VM.
+4. Use the password you specified in line 37 in the `onprem.json` file to connect to the VM.
-6. Use the `ping` command to test connectivity to the jumpbox VMs in each spoke, as shown below.
+5. Use the `ping` command to test connectivity to the jumpbox VMs in each spoke, as shown below.
- ```bash
- ping 10.1.0.68
- ping 10.2.0.68
- ```
+ ```bash
+ ping 10.1.0.68
+ ping 10.2.0.68
+ ```
### Add connectivity between spokes
@@ -303,17 +303,17 @@ If you want to allow spokes to connect to each other, you need to use a newtwork
1. Open the `hub-nva.json` file and enter a username and password between the quotes in lines 13 and 14, as shown below, then save the file.
- ```bash
- "adminUsername": "XXX",
- "adminPassword": "YYY",
- ```
+ ```bash
+ "adminUsername": "XXX",
+ "adminPassword": "YYY",
+ ```
2. Run `azbb` to deploy the NVA VM and user defined routes.
- ```bash
- azbb -s -g hub-nva-rg - l -p hub-nva.json --deploy
- ```
- > [!NOTE]
- > If you decide to use a different resource group name (other than `hub-nva-rg`), make sure to search for all parameter files that use that name and edit them to use your own resource group name.
+ ```bash
+ azbb -s -g hub-nva-rg - l -p hub-nva.json --deploy
+ ```
+ > [!NOTE]
+ > If you decide to use a different resource group name (other than `hub-nva-rg`), make sure to search for all parameter files that use that name and edit them to use your own resource group name.
diff --git a/docs/reference-architectures/hybrid-networking/shared-services.md b/docs/reference-architectures/hybrid-networking/shared-services.md
index 908ec05ad4b..1ded26e727f 100644
--- a/docs/reference-architectures/hybrid-networking/shared-services.md
+++ b/docs/reference-architectures/hybrid-networking/shared-services.md
@@ -103,9 +103,9 @@ Before you can deploy the reference architecture to your own subscription, you m
4. From a command prompt, bash prompt, or PowerShell prompt, log in to your Azure account by using the command below, and follow the prompts.
- ```bash
- az login
- ```
+ ```bash
+ az login
+ ```
### Deploy the simulated on-premises datacenter using azbb
@@ -115,18 +115,18 @@ To deploy the simulated on-premises datacenter as an Azure VNet, follow these st
2. Open the `onprem.json` file and enter a username and password between the quotes in lines 45 and 46, as shown below, then save the file.
- ```bash
- "adminUsername": "XXX",
- "adminPassword": "YYY",
- ```
+ ```bash
+ "adminUsername": "XXX",
+ "adminPassword": "YYY",
+ ```
3. Run `azbb` to deploy the simulated onprem environment as shown below.
- ```bash
- azbb -s -g onprem-vnet-rg - l -p onoprem.json --deploy
- ```
- > [!NOTE]
- > If you decide to use a different resource group name (other than `onprem-vnet-rg`), make sure to search for all parameter files that use that name and edit them to use your own resource group name.
+ ```bash
+   azbb -s -g onprem-vnet-rg -l -p onprem.json --deploy
+ ```
+ > [!NOTE]
+ > If you decide to use a different resource group name (other than `onprem-vnet-rg`), make sure to search for all parameter files that use that name and edit them to use your own resource group name.
4. Wait for the deployment to finish. This deployment creates a virtual network, a virtual machine running Windows, and a VPN gateway. The VPN gateway creation can take more than 40 minutes to complete.
@@ -136,26 +136,26 @@ To deploy the hub VNet, and connect to the simulated on-premises VNet created ab
1. Open the `hub-vnet.json` file and enter a username and password between the quotes in lines 50 and 51, as shown below.
- ```bash
- "adminUsername": "XXX",
- "adminPassword": "YYY",
- ```
+ ```bash
+ "adminUsername": "XXX",
+ "adminPassword": "YYY",
+ ```
2. On line 52, for `osType`, type `Windows` or `Linux` to install either Windows Server 2016 Datacenter, or Ubuntu 16.04 as the operating system for the jumpbox.
3. Enter a shared key between the quotes in line 83, as shown below, then save the file.
- ```bash
- "sharedKey": "",
- ```
+ ```bash
+ "sharedKey": "",
+ ```
4. Run `azbb` to deploy the hub VNet as shown below.
- ```bash
- azbb -s -g hub-vnet-rg - l -p hub-vnet.json --deploy
- ```
- > [!NOTE]
- > If you decide to use a different resource group name (other than `hub-vnet-rg`), make sure to search for all parameter files that use that name and edit them to use your own resource group name.
+ ```bash
+ azbb -s -g hub-vnet-rg - l -p hub-vnet.json --deploy
+ ```
+ > [!NOTE]
+ > If you decide to use a different resource group name (other than `hub-vnet-rg`), make sure to search for all parameter files that use that name and edit them to use your own resource group name.
5. Wait for the deployment to finish. This deployment creates a virtual network, a virtual machine, a VPN gateway, and a connection to the gateway created in the previous section. The VPN gateway creation can take more than 40 minutes to complete.
@@ -165,22 +165,22 @@ To deploy the ADDS domain controllers in Azure, perform the following steps.
1. Open the `hub-adds.json` file and enter a username and password between the quotes in lines 14 and 15, as shown below, then save the file.
- ```bash
- "adminUsername": "XXX",
- "adminPassword": "YYY",
- ```
+ ```bash
+ "adminUsername": "XXX",
+ "adminPassword": "YYY",
+ ```
2. Run `azbb` to deploy the ADDS domain controllers as shown below.
- ```bash
- azbb -s -g hub-adds-rg - l -p hub-adds.json --deploy
- ```
+ ```bash
+ azbb -s -g hub-adds-rg - l -p hub-adds.json --deploy
+ ```
- > [!NOTE]
- > If you decide to use a different resource group name (other than `hub-adds-rg`), make sure to search for all parameter files that use that name and edit them to use your own resource group name.
+ > [!NOTE]
+ > If you decide to use a different resource group name (other than `hub-adds-rg`), make sure to search for all parameter files that use that name and edit them to use your own resource group name.
- > [!NOTE]
- > This part of the deployment may take several minutes, since it requires joining the two VMs to the domain hosted int he simulated on-premises datacenter, then installing AD DS on them.
+ > [!NOTE]
+   > This part of the deployment may take several minutes, since it requires joining the two VMs to the domain hosted in the simulated on-premises datacenter, then installing AD DS on them.
### NVA
@@ -188,17 +188,17 @@ To deploy an NVA in the `dmz` subnet, perform the following steps:
1. Open the `hub-nva.json` file and enter a username and password between the quotes in lines 13 and 14, as shown below, then save the file.
- ```bash
- "adminUsername": "XXX",
- "adminPassword": "YYY",
- ```
+ ```bash
+ "adminUsername": "XXX",
+ "adminPassword": "YYY",
+ ```
2. Run `azbb` to deploy the NVA VM and user defined routes.
- ```bash
- azbb -s -g hub-nva-rg - l -p hub-nva.json --deploy
- ```
- > [!NOTE]
- > If you decide to use a different resource group name (other than `hub-nva-rg`), make sure to search for all parameter files that use that name and edit them to use your own resource group name.
+ ```bash
+ azbb -s -g hub-nva-rg - l -p hub-nva.json --deploy
+ ```
+ > [!NOTE]
+ > If you decide to use a different resource group name (other than `hub-nva-rg`), make sure to search for all parameter files that use that name and edit them to use your own resource group name.
### Azure spoke VNets
@@ -206,31 +206,31 @@ To deploy the spoke VNets, perform the following steps.
1. Open the `spoke1.json` file and enter a username and password between the quotes in lines 52 and 53, as shown below, then save the file.
- ```bash
- "adminUsername": "XXX",
- "adminPassword": "YYY",
- ```
+ ```bash
+ "adminUsername": "XXX",
+ "adminPassword": "YYY",
+ ```
2. On line 54, for `osType`, type `Windows` or `Linux` to install either Windows Server 2016 Datacenter, or Ubuntu 16.04 as the operating system for the jumpbox.
3. Run `azbb` to deploy the first spoke VNet environment as shown below.
- ```bash
- azbb -s -g spoke1-vnet-rg - l -p spoke1.json --deploy
- ```
+ ```bash
+ azbb -s -g spoke1-vnet-rg - l -p spoke1.json --deploy
+ ```
- > [!NOTE]
- > If you decide to use a different resource group name (other than `spoke1-vnet-rg`), make sure to search for all parameter files that use that name and edit them to use your own resource group name.
+ > [!NOTE]
+ > If you decide to use a different resource group name (other than `spoke1-vnet-rg`), make sure to search for all parameter files that use that name and edit them to use your own resource group name.
-3. Repeat step 1 above for file `spoke2.json`.
+4. Repeat step 1 above for file `spoke2.json`.
-4. Run `azbb` to deploy the second spoke VNet environment as shown below.
+5. Run `azbb` to deploy the second spoke VNet environment as shown below.
- ```bash
- azbb -s -g spoke2-vnet-rg - l -p spoke2.json --deploy
- ```
- > [!NOTE]
- > If you decide to use a different resource group name (other than `spoke2-vnet-rg`), make sure to search for all parameter files that use that name and edit them to use your own resource group name.
+ ```bash
+ azbb -s -g spoke2-vnet-rg - l -p spoke2.json --deploy
+ ```
+ > [!NOTE]
+ > If you decide to use a different resource group name (other than `spoke2-vnet-rg`), make sure to search for all parameter files that use that name and edit them to use your own resource group name.
### Azure hub VNet peering to spoke VNets
@@ -240,12 +240,12 @@ To create a peering connection from the hub VNet to the spoke VNets, perform the
2. Run `azbb` to deploy the hub VNet peering as shown below.
- ```bash
- azbb -s -g hub-vnet-rg - l -p hub-vnet-peering.json --deploy
- ```
+ ```bash
+ azbb -s -g hub-vnet-rg - l -p hub-vnet-peering.json --deploy
+ ```
- > [!NOTE]
- > If you decide to use a different resource group name (other than `hub-vnet-rg`), make sure to search for all parameter files that use that name and edit them to use your own resource group name.
+ > [!NOTE]
+ > If you decide to use a different resource group name (other than `hub-vnet-rg`), make sure to search for all parameter files that use that name and edit them to use your own resource group name.
diff --git a/docs/reference-architectures/identity/adds-forest.md b/docs/reference-architectures/identity/adds-forest.md
index f799ddbbd94..7520dcfefe9 100644
--- a/docs/reference-architectures/identity/adds-forest.md
+++ b/docs/reference-architectures/identity/adds-forest.md
@@ -116,8 +116,8 @@ A solution is available on [GitHub][github] to deploy this reference architectur
5. If you are using the simulated on-premises configuration, configure the incoming trust relationship:
- 1. Connect to the jump box (*ra-adtrust-mgmt-vm1* in the *ra-adtrust-security-rg* resource group). Log in as *testuser* with password *AweS0me@PW*.
- 2. On the jump box open an RDP session on the first VM in the *contoso.com* domain (the on-premises domain). This VM has the IP address 192.168.0.4. The username is *contoso\testuser* with password *AweS0me@PW*.
+ 1. Connect to the jump box (ra-adtrust-mgmt-vm1 in the ra-adtrust-security-rg resource group). Log in as testuser with password AweS0me@PW.
+ 2. On the jump box open an RDP session on the first VM in the contoso.com domain (the on-premises domain). This VM has the IP address 192.168.0.4. The username is contoso\testuser with password AweS0me@PW.
3. Download the [incoming-trust.ps1][incoming-trust] script and run it to create the incoming trust from the *treyresearch.com* domain.
6. If you are using your own on-premises infrastructure:
@@ -126,7 +126,7 @@ A solution is available on [GitHub][github] to deploy this reference architectur
2. Edit the script and replace the value of the `$TrustedDomainName` variable with the name of your own domain.
3. Run the script.
-7. From the jump-box, connect to the first VM in the *treyresearch.com* domain (the domain in the cloud). This VM has the IP address 10.0.4.4. The username is *treyresearch\testuser* with password *AweS0me@PW*.
+7. From the jump-box, connect to the first VM in the treyresearch.com domain (the domain in the cloud). This VM has the IP address 10.0.4.4. The username is treyresearch\testuser with password AweS0me@PW.
8. Download the [outgoing-trust.ps1][outgoing-trust] script and run it to create the outgoing trust from the *treyresearch.com* domain. If you are using your own on-premises machines, then edit the script first. Set the `$TrustedDomainName` variable to the name of your on-premises domain, and specify the IP addresses of the Active Directory DS servers for this domain in the `$TrustedDomainDnsIpAddresses` variable.
diff --git a/docs/reference-architectures/identity/adfs.md b/docs/reference-architectures/identity/adfs.md
index c9cd9a9e8c2..0f26a061b2b 100644
--- a/docs/reference-architectures/identity/adfs.md
+++ b/docs/reference-architectures/identity/adfs.md
@@ -248,7 +248,7 @@ A solution is available on [GitHub][github] to deploy this reference architectur
5. Restart the jump box (*ra-adfs-mgmt-vm1* in the *ra-adfs-security-rg* group) to allow its DNS settings to take effect.
-6. [Obtain an SSL Certificate for AD FS][adfs_certificates] and install this certificate on the AD FS VMs. Note that you can connect to them through the jump box. The IP addresses are *10.0.5.4* and *10.0.5.5*. The default username is *contoso\testuser* with password *AweSome@PW*.
+6. [Obtain an SSL Certificate for AD FS][adfs_certificates] and install this certificate on the AD FS VMs. Note that you can connect to them through the jump box. The IP addresses are 10.0.5.4 and 10.0.5.5. The default username is contoso\testuser with password AweS0me@PW.
> [!NOTE]
> The comments in the Deploy-ReferenceArchitecture.ps1 script at this point provide detailed instructions for creating a self-signed test certificate and authority using the `makecert` command. However, perform these steps as a **test** only and do not use the certificates generated by makecert in a production environment.
@@ -261,7 +261,7 @@ A solution is available on [GitHub][github] to deploy this reference architectur
.\Deploy-ReferenceArchitecture.ps1 Adfs
```
-8. On the jump box, browse to `https://adfs.contoso.com/adfs/ls/idpinitiatedsignon.htm` to test the AD FS installation (you may receive a certificate warning that you can ignore for this test). Verify that the Contoso Corporation sign-in page appears. Sign in as *contoso\testuser* with password *AweS0me@PW*.
+8. On the jump box, browse to `https://adfs.contoso.com/adfs/ls/idpinitiatedsignon.htm` to test the AD FS installation (you may receive a certificate warning that you can ignore for this test). Verify that the Contoso Corporation sign-in page appears. Sign in as contoso\testuser with password AweS0me@PW.
9. Install the SSL certificate on the AD FS proxy VMs. The IP addresses are *10.0.6.4* and *10.0.6.5*.
diff --git a/docs/reference-architectures/jenkins/index.md b/docs/reference-architectures/jenkins/index.md
index 8178fc7cb73..b1fc8247f1e 100644
--- a/docs/reference-architectures/jenkins/index.md
+++ b/docs/reference-architectures/jenkins/index.md
@@ -21,30 +21,30 @@ The focus of this document is on the core Azure operations needed to support Jen
The architecture consists of the following components:
-- **Resource group.** A [resource group][rg] is used to group Azure assets so they can be managed by lifetime, owner, and other criteria. Use resource groups to deploy and monitor Azure assets as a group and track billing costs by resource group. You can also delete resources as a set, which is very useful for test deployments.
+- **Resource group.** A [resource group][rg] is used to group Azure assets so they can be managed by lifetime, owner, and other criteria. Use resource groups to deploy and monitor Azure assets as a group and track billing costs by resource group. You can also delete resources as a set, which is very useful for test deployments.
-- **Jenkins server**. A virtual machine is deployed to run [Jenkins][azure-market] as an automation server and serve as Jenkins Master. This reference architecture uses the [solution template for Jenkins on Azure][solution], installed on a Linux (Ubuntu 16.04 LTS) virtual machine on Azure. Other Jenkins offerings are available in the Azure Marketplace.
+- **Jenkins server**. A virtual machine is deployed to run [Jenkins][azure-market] as an automation server and serve as Jenkins Master. This reference architecture uses the [solution template for Jenkins on Azure][solution], installed on a Linux (Ubuntu 16.04 LTS) virtual machine on Azure. Other Jenkins offerings are available in the Azure Marketplace.
- > [!NOTE]
- > Nginx is installed on the VM to act as a reverse proxy to Jenkins. You can configure Nginx to enable SSL for the Jenkins server.
- >
- >
+ > [!NOTE]
+ > Nginx is installed on the VM to act as a reverse proxy to Jenkins. You can configure Nginx to enable SSL for the Jenkins server.
+ >
+ >
-- **Virtual network**. A [virtual network][vnet] connects Azure resources to each other and provides logical isolation. In this architecture, the Jenkins server runs in a virtual network.
+- **Virtual network**. A [virtual network][vnet] connects Azure resources to each other and provides logical isolation. In this architecture, the Jenkins server runs in a virtual network.
-- **Subnets**. The Jenkins server is isolated in a [subnet][subnet] to make it easier to manage and segregate network traffic without impacting performance.
+- **Subnets**. The Jenkins server is isolated in a [subnet][subnet] to make it easier to manage and segregate network traffic without impacting performance.
-- **NSGs**. Use [network security groups][nsg] (NSGs) to restrict network traffic from the Internet to the subnet of a virtual network.
+- NSGs. Use [network security groups][nsg] (NSGs) to restrict network traffic from the Internet to the subnet of a virtual network.
-- **Managed disks**. A [managed disk][managed-disk] is a persistent virtual hard disk (VHD) used for application storage and also to maintain the state of the Jenkins server and provide disaster recovery. Data disks are stored in Azure Storage. For high performance, [premium storage][premium] is recommended.
+- **Managed disks**. A [managed disk][managed-disk] is a persistent virtual hard disk (VHD) used for application storage and also to maintain the state of the Jenkins server and provide disaster recovery. Data disks are stored in Azure Storage. For high performance, [premium storage][premium] is recommended.
-- **Azure Blob Storage**. The [Windows Azure Storage plugin][configure-storage] uses Azure Blob Storage to store the build artifacts that are created and shared with other Jenkins builds.
+- **Azure Blob Storage**. The [Windows Azure Storage plugin][configure-storage] uses Azure Blob Storage to store the build artifacts that are created and shared with other Jenkins builds.
-- **Azure Active Directory (Azure AD)**. [Azure AD][azure-ad] supports user authentication, allowing you to set up SSO. Azure AD [service principals][service-principal] define the policy and permissions for each role authorization in the workflow, using [role-based access control][rbac] (RBAC). Each service principal is associated with a Jenkins job.
+- Azure Active Directory (Azure AD). [Azure AD][azure-ad] supports user authentication, allowing you to set up SSO. Azure AD [service principals][service-principal] define the policy and permissions for each role authorization in the workflow, using [role-based access control][rbac] (RBAC). Each service principal is associated with a Jenkins job.
-- **Azure Key Vault.** To manage secrets and cryptographic keys used to provision Azure resources when secrets are required, this architecture uses [Key Vault][key-vault]. For added help storing secrets associated with the application in the pipeline, see also the [Azure Credentials][configure-credential] plugin for Jenkins.
+- **Azure Key Vault.** To manage secrets and cryptographic keys used to provision Azure resources when secrets are required, this architecture uses [Key Vault][key-vault]. For added help storing secrets associated with the application in the pipeline, see also the [Azure Credentials][configure-credential] plugin for Jenkins.
-- **Azure monitoring services**. This service [monitors][monitor] the Azure virtual machine hosting Jenkins. This deployment monitors the virtual machine status and CPU utilization and sends alerts.
+- **Azure monitoring services**. This service [monitors][monitor] the Azure virtual machine hosting Jenkins. This deployment monitors the virtual machine status and CPU utilization and sends alerts.
## Recommendations
diff --git a/docs/reference-architectures/virtual-machines-linux/multi-vm.md b/docs/reference-architectures/virtual-machines-linux/multi-vm.md
index 25813cd70ec..142ad6f8c85 100644
--- a/docs/reference-architectures/virtual-machines-linux/multi-vm.md
+++ b/docs/reference-architectures/virtual-machines-linux/multi-vm.md
@@ -137,9 +137,9 @@ Before you can deploy the reference architecture to your own subscription, you m
4. From a command prompt, bash prompt, or PowerShell prompt, log in to your Azure account by using one of the commands below, and follow the prompts.
- ```bash
- az login
- ```
+ ```bash
+ az login
+ ```
### Deploy the solution using azbb
@@ -149,16 +149,16 @@ To deploy the sample single VM workload, follow these steps:
2. Open the `multi-vm-v2.json` file and enter a username and SSH key between the quotes, as shown below, then save the file.
- ```bash
- "adminUsername": "",
- "sshPublicKey": "",
- ```
+ ```bash
+ "adminUsername": "",
+ "sshPublicKey": "",
+ ```
3. Run `azbb` to deploy the VMs as shown below.
- ```bash
- azbb -s -g -l -p multi-vm-v2.json --deploy
- ```
+ ```bash
+ azbb -s -g -l -p multi-vm-v2.json --deploy
+ ```
For more information on deploying this sample reference architecture, visit our [GitHub repository][git].
diff --git a/docs/reference-architectures/virtual-machines-linux/n-tier.md b/docs/reference-architectures/virtual-machines-linux/n-tier.md
index b5b017c1c5a..d524137b10d 100644
--- a/docs/reference-architectures/virtual-machines-linux/n-tier.md
+++ b/docs/reference-architectures/virtual-machines-linux/n-tier.md
@@ -29,7 +29,7 @@ There are many ways to implement an N-tier architecture. The diagram shows a typ
* **Azure DNS**. [Azure DNS][azure-dns] is a hosting service for DNS domains, providing name resolution using Microsoft Azure infrastructure. By hosting your domains in Azure, you can manage your DNS records using the same credentials, APIs, tools, and billing as your other Azure services.
* **Jumpbox.** Also called a [bastion host]. A secure VM on the network that administrators use to connect to the other VMs. The jumpbox has an NSG that allows remote traffic only from public IP addresses on a safe list. The NSG should permit secure shell (SSH) traffic.
* **Monitoring.** Monitoring software such as [Nagios], [Zabbix], or [Icinga] can give you insight into response time, VM uptime, and the overall health of your system. Install the monitoring software on a VM that's placed in a separate management subnet.
-* **NSGs.** Use [network security groups][nsg] (NSGs) to restrict network traffic within the VNet. For example, in the 3-tier architecture shown here, the database tier does not accept traffic from the web front end, only from the business tier and the management subnet.
+* NSGs. Use [network security groups][nsg] (NSGs) to restrict network traffic within the VNet. For example, in the 3-tier architecture shown here, the database tier does not accept traffic from the web front end, only from the business tier and the management subnet.
* **Apache Cassandra database**. Provides high availability at the data tier, by enabling replication and failover.
## Recommendations
@@ -125,15 +125,15 @@ Before you can deploy the reference architecture to your own subscription, you m
3. Install the [Azure building blocks][azbb] npm package.
- ```bash
- npm install -g @mspnp/azure-building-blocks
- ```
+ ```bash
+ npm install -g @mspnp/azure-building-blocks
+ ```
4. From a command prompt, bash prompt, or PowerShell prompt, log in to your Azure account by using one of the commands below, and follow the prompts.
- ```bash
- az login
- ```
+ ```bash
+ az login
+ ```
### Deploy the solution using azbb
@@ -145,9 +145,9 @@ To deploy the Linux VMs for an N-tier application reference architecture, follow
3. Deploy the reference architecture using the **azbb** command line tool as shown below.
- ```bash
- azbb -s -g -l -p n-tier-linux.json --deploy
- ```
+ ```bash
+ azbb -s -g -l -p n-tier-linux.json --deploy
+ ```
For more information on deploying this sample reference architecture using Azure Building Blocks, visit the [GitHub repository][git].
diff --git a/docs/reference-architectures/virtual-machines-linux/single-vm.md b/docs/reference-architectures/virtual-machines-linux/single-vm.md
index c3907338a4a..69308591fdc 100644
--- a/docs/reference-architectures/virtual-machines-linux/single-vm.md
+++ b/docs/reference-architectures/virtual-machines-linux/single-vm.md
@@ -164,9 +164,9 @@ Before you can deploy the reference architecture to your own subscription, you m
4. From a command prompt, bash prompt, or PowerShell prompt, log in to your Azure account by using one of the commands below, and follow the prompts.
- ```bash
- az login
- ```
+ ```bash
+ az login
+ ```
### Deploy the solution using azbb
@@ -176,16 +176,16 @@ To deploy the sample single VM workload, follow these steps:
2. Open the `single-vm-v2.json` file and enter a username and SSH public key between the quotes, as shown below, then save the file.
- ```bash
- "adminUsername": "",
- "sshPublicKey": "",
- ```
+ ```bash
+ "adminUsername": "",
+ "sshPublicKey": "",
+ ```
3. Run `azbb` to deploy the sample VM as shown below.
- ```bash
- azbb -s -g -l -p single-vm-v2.json --deploy
- ```
+ ```bash
+ azbb -s -g -l -p single-vm-v2.json --deploy
+ ```
For more information on deploying this sample reference architecture, visit our [GitHub repository][git].
diff --git a/docs/reference-architectures/virtual-machines-windows/multi-region-application.md b/docs/reference-architectures/virtual-machines-windows/multi-region-application.md
index 906b251057f..946110ff772 100644
--- a/docs/reference-architectures/virtual-machines-windows/multi-region-application.md
+++ b/docs/reference-architectures/virtual-machines-windows/multi-region-application.md
@@ -115,9 +115,9 @@ To configure the availability group:
* Create a [Windows Server Failover Clustering][wsfc] (WSFC) cluster that includes the SQL Server instances in both regions.
* Create a SQL Server Always On Availability Group that includes the SQL Server instances in both the primary and secondary regions. See [Extending Always On Availability Group to Remote Azure Datacenter (PowerShell)](https://blogs.msdn.microsoft.com/sqlcat/2014/09/22/extending-alwayson-availability-group-to-remote-azure-datacenter-powershell/) for the steps.
- * Put the primary replica in the primary region.
- * Put one or more secondary replicas in the primary region. Configure these to use synchronous commit with automatic failover.
- * Put one or more secondary replicas in the secondary region. Configure these to use *asynchronous* commit, for performance reasons. (Otherwise, all T-SQL transactions have to wait on a round trip over the network to the secondary region.)
+ * Put the primary replica in the primary region.
+ * Put one or more secondary replicas in the primary region. Configure these to use synchronous commit with automatic failover.
+ * Put one or more secondary replicas in the secondary region. Configure these to use *asynchronous* commit, for performance reasons. (Otherwise, all T-SQL transactions have to wait on a round trip over the network to the secondary region.)
> [!NOTE]
> Asynchronous commit replicas do not support automatic failover.
diff --git a/docs/reference-architectures/virtual-machines-windows/multi-vm.md b/docs/reference-architectures/virtual-machines-windows/multi-vm.md
index 1352413101c..1c813b7ef88 100644
--- a/docs/reference-architectures/virtual-machines-windows/multi-vm.md
+++ b/docs/reference-architectures/virtual-machines-windows/multi-vm.md
@@ -137,9 +137,9 @@ Before you can deploy the reference architecture to your own subscription, you m
4. From a command prompt, bash prompt, or PowerShell prompt, log in to your Azure account by using one of the commands below, and follow the prompts.
- ```bash
- az login
- ```
+ ```bash
+ az login
+ ```
### Deploy the solution using azbb
@@ -149,16 +149,16 @@ To deploy the sample single VM workload, follow these steps:
2. Open the `multi-vm-v2.json` file and enter a username and password between the quotes, as shown below, then save the file.
- ```bash
- "adminUsername": "",
- "adminPassword": "",
- ```
+ ```bash
+ "adminUsername": "",
+ "adminPassword": "",
+ ```
3. Run `azbb` to deploy the VMs as shown below.
- ```bash
- azbb -s -g -l -p multi-vm-v2.json --deploy
- ```
+ ```bash
+ azbb -s -g -l -p multi-vm-v2.json --deploy
+ ```
For more information on deploying this sample reference architecture, visit our [GitHub repository][git].
diff --git a/docs/reference-architectures/virtual-machines-windows/n-tier.md b/docs/reference-architectures/virtual-machines-windows/n-tier.md
index 007bceb46aa..1d1993b8fb3 100644
--- a/docs/reference-architectures/virtual-machines-windows/n-tier.md
+++ b/docs/reference-architectures/virtual-machines-windows/n-tier.md
@@ -30,7 +30,7 @@ There are many ways to implement an N-tier architecture. The diagram shows a typ
* **Load balancers.** Use an [Internet-facing load balancer][load-balancer-external] to distribute incoming Internet traffic to the web tier, and an [internal load balancer][load-balancer-internal] to distribute network traffic from the web tier to the business tier.
* **Jumpbox.** Also called a [bastion host]. A secure VM on the network that administrators use to connect to the other VMs. The jumpbox has an NSG that allows remote traffic only from public IP addresses on a safe list. The NSG should permit remote desktop (RDP) traffic.
* **Monitoring.** Monitoring software such as [Nagios], [Zabbix], or [Icinga] can give you insight into response time, VM uptime, and the overall health of your system. Install the monitoring software on a VM that's placed in a separate management subnet.
-* **NSGs.** Use [network security groups][nsg] (NSGs) to restrict network traffic within the VNet. For example, in the 3-tier architecture shown here, the database tier does not accept traffic from the web front end, only from the business tier and the management subnet.
+* NSGs. Use [network security groups][nsg] (NSGs) to restrict network traffic within the VNet. For example, in the 3-tier architecture shown here, the database tier does not accept traffic from the web front end, only from the business tier and the management subnet.
* **SQL Server Always On Availability Group.** Provides high availability at the data tier, by enabling replication and failover.
* **Active Directory Domain Services (AD DS) Servers**. Prior to Windows Server 2016, SQL Server Always On Availability Groups must be joined to a domain. This is because Availability Groups depend on Windows Server Failover Cluster (WSFC) technology. Windows Server 2016 introduces the ability to create a Failover Cluster without Active Directory, in which case the AD DS servers are not required for this architecture. For more information, see [What's new in Failover Clustering in Windows Server 2016][wsfc-whats-new].
* **Azure DNS**. [Azure DNS][azure-dns] is a hosting service for DNS domains, providing name resolution using Microsoft Azure infrastructure. By hosting your domains in Azure, you can manage your DNS records using the same credentials, APIs, tools, and billing as your other Azure services.
@@ -83,10 +83,10 @@ Configure the SQL Server Always On Availability Group as follows:
3. Create an availability group listener, and map the listener's DNS name to the IP address of an internal load balancer.
4. Create a load balancer rule for the SQL Server listening port (TCP port 1433 by default). The load balancer rule must enable *floating IP*, also called Direct Server Return. This causes the VM to reply directly to the client, which enables a direct connection to the primary replica.
- > [!NOTE]
- > When floating IP is enabled, the front-end port number must be the same as the back-end port number in the load balancer rule.
- >
- >
+ > [!NOTE]
+ > When floating IP is enabled, the front-end port number must be the same as the back-end port number in the load balancer rule.
+ >
+ >
When a SQL client tries to connect, the load balancer routes the connection request to the primary replica. If there is a failover to another replica, the load balancer automatically routes subsequent requests to a new primary replica. For more information, see [Configure an ILB listener for SQL Server Always On Availability Groups][sql-alwayson-ilb].
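As a hedged illustration of the client side of this arrangement, the sketch below connects through the availability group listener name rather than to an individual SQL Server VM; the listener name, database, credentials, and query are placeholders. `MultiSubnetFailover=True` is the usual recommendation for listener connections, so the client tries the listener's registered IP addresses in parallel and reconnects promptly after a failover.

```csharp
// Hypothetical client-side sketch: connect through the availability group listener,
// not a specific replica. The listener name, database, and query are placeholders.
using System.Data.SqlClient;

public static class ListenerConnectionExample
{
    public static int CountOrders()
    {
        const string connectionString =
            "Server=tcp:ag-listener.contoso.com,1433;" +   // Listener DNS name behind the internal load balancer
            "Database=AppDb;Integrated Security=True;" +
            "MultiSubnetFailover=True;Connect Timeout=30;";

        using (var connection = new SqlConnection(connectionString))
        using (var command = new SqlCommand("SELECT COUNT(*) FROM dbo.Orders", connection))
        {
            connection.Open(); // The load balancer routes this connection to the current primary replica.
            return (int)command.ExecuteScalar();
        }
    }
}
```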
@@ -142,15 +142,15 @@ Before you can deploy the reference architecture to your own subscription, you m
3. Install the [Azure building blocks][azbb] npm package.
- ```bash
- npm install -g @mspnp/azure-building-blocks
- ```
+ ```bash
+ npm install -g @mspnp/azure-building-blocks
+ ```
4. From a command prompt, bash prompt, or PowerShell prompt, log in to your Azure account by using one of the commands below, and follow the prompts.
- ```bash
- az login
- ```
+ ```bash
+ az login
+ ```
### Deploy the solution using azbb
@@ -160,18 +160,18 @@ To deploy the Windows VMs for an N-tier application reference architecture, foll
2. The parameter file specifies a default administrator user name and password for each VM in the deployment. You must change these before you deploy the reference architecture. Open the `n-tier-windows.json` file and replace each **adminUsername** and **adminPassword** field with your new settings.
- > [!NOTE]
- > There are multiple scripts that run during this deployment both in the **VirtualMachineExtension** objects and in the **extensions** settings for some of the **VirtualMachine** objects. Some of these scripts require the administrator user name and password that you have just changed. It's recommended that you review these scripts to ensure that you specified the correct credentials. The deployment may fail if you have not specified the correct credentials.
- >
- >
+ > [!NOTE]
+ > There are multiple scripts that run during this deployment both in the **VirtualMachineExtension** objects and in the **extensions** settings for some of the **VirtualMachine** objects. Some of these scripts require the administrator user name and password that you have just changed. It's recommended that you review these scripts to ensure that you specified the correct credentials. The deployment may fail if you have not specified the correct credentials.
+ >
+ >
Save the file.
3. Deploy the reference architecture using the **azbb** command line tool as shown below.
- ```bash
- azbb -s -g -l -p n-tier-windows.json --deploy
- ```
+ ```bash
+ azbb -s -g -l -p n-tier-windows.json --deploy
+ ```
For more information on deploying this sample reference architecture using Azure Building Blocks, visit the [GitHub repository][git].
diff --git a/docs/reference-architectures/virtual-machines-windows/single-vm.md b/docs/reference-architectures/virtual-machines-windows/single-vm.md
index 0fe986028d9..940ba0327c1 100644
--- a/docs/reference-architectures/virtual-machines-windows/single-vm.md
+++ b/docs/reference-architectures/virtual-machines-windows/single-vm.md
@@ -146,9 +146,9 @@ Before you can deploy the reference architecture to your own subscription, you m
4. From a command prompt, bash prompt, or PowerShell prompt, log in to your Azure account by using one of the commands below, and follow the prompts.
- ```bash
- az login
- ```
+ ```bash
+ az login
+ ```
### Deploy the solution using azbb
@@ -158,16 +158,16 @@ To deploy the sample single VM workload, follow these steps:
2. Open the `single-vm-v2.json` file and enter a username and password between the quotes, as shown below, then save the file.
- ```bash
- "adminUsername": "",
- "adminPassword": "",
- ```
+ ```bash
+ "adminUsername": "",
+ "adminPassword": "",
+ ```
3. Run `azbb` to deploy the sample VM as shown below.
- ```bash
- azbb -s -g -l -p single-vm-v2.json --deploy
- ```
+ ```bash
+ azbb -s -g -l -p single-vm-v2.json --deploy
+ ```
For more information on deploying this sample reference architecture, visit our [GitHub repository][git].
diff --git a/docs/resiliency/failure-mode-analysis.md b/docs/resiliency/failure-mode-analysis.md
index ac297d2f5e4..aebcd49c0e4 100644
--- a/docs/resiliency/failure-mode-analysis.md
+++ b/docs/resiliency/failure-mode-analysis.md
@@ -118,7 +118,7 @@ The default retry policy uses exponential back-off. To use a different retry pol
### Web or worker roles are unexpectedly being shut down.
**Detection**. The [RoleEnvironment.Stopping][RoleEnvironment.Stopping] event is fired.
-**Recovery**. Override the [RoleEntryPoint.OnStop][RoleEntryPoint.OnStop] method to gracefully clean up. For more information, see [The Right Way to Handle Azure OnStop Events][onstop-events] (blog).
+Recovery. Override the [RoleEntryPoint.OnStop][RoleEntryPoint.OnStop] method to gracefully clean up. For more information, see [The Right Way to Handle Azure OnStop Events][onstop-events] (blog).
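A minimal sketch of such an override is shown below, assuming the classic cloud-services programming model (`Microsoft.WindowsAzure.ServiceRuntime`); the pending-work check and the drain window are illustrative assumptions rather than anything prescribed by the article or the blog post.

```csharp
// Illustrative OnStop override for a classic web or worker role.
// The pending-work check is a placeholder; replace it with your own drain logic.
using System;
using System.Diagnostics;
using System.Threading;
using Microsoft.WindowsAzure.ServiceRuntime;

public class WorkerRole : RoleEntryPoint
{
    private readonly CancellationTokenSource cancellationSource = new CancellationTokenSource();

    public override void OnStop()
    {
        // Tell background loops to stop picking up new work.
        this.cancellationSource.Cancel();

        // Give in-flight work a bounded window to drain; the platform only waits
        // a few minutes for OnStop before taking the instance down anyway.
        var stopwatch = Stopwatch.StartNew();
        while (HasPendingWork() && stopwatch.Elapsed < TimeSpan.FromMinutes(4))
        {
            Thread.Sleep(TimeSpan.FromSeconds(1));
        }

        base.OnStop();
    }

    private bool HasPendingWork()
    {
        // Placeholder: report whether any requests or queue messages are still in flight.
        return false;
    }
}
```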
## Cosmos DB
### Reading data fails.
diff --git a/docs/resiliency/high-availability-azure-applications.md b/docs/resiliency/high-availability-azure-applications.md
index 8993bf86113..e6c497bc7d8 100644
--- a/docs/resiliency/high-availability-azure-applications.md
+++ b/docs/resiliency/high-availability-azure-applications.md
@@ -5,6 +5,7 @@ author: adamglick
ms.date: 05/31/2017
---
[!INCLUDE [header](../_includes/header.md)]
+
# High availability for applications built on Microsoft Azure
A highly available application absorbs fluctuations in availability, load, and temporary failures in dependent services and hardware. The application continues to perform acceptably, as defined by business requirements or application service-level agreements (SLAs).
diff --git a/docs/resiliency/recovery-local-failures.md b/docs/resiliency/recovery-local-failures.md
index 66dbc6f19f6..422dc06659c 100644
--- a/docs/resiliency/recovery-local-failures.md
+++ b/docs/resiliency/recovery-local-failures.md
@@ -5,6 +5,7 @@ author: adamglick
ms.date: 08/18/2016
---
[!INCLUDE [header](../_includes/header.md)]
+
# Azure resiliency technical guidance: Recovery from local failures in Azure
There are two primary threats to application availability:
diff --git a/docs/resiliency/recovery-loss-azure-region.md b/docs/resiliency/recovery-loss-azure-region.md
index 8ef10b9d69e..e8089683566 100644
--- a/docs/resiliency/recovery-loss-azure-region.md
+++ b/docs/resiliency/recovery-loss-azure-region.md
@@ -5,6 +5,7 @@ author: adamglick
ms.date: 08/18/2016
---
[!INCLUDE [header](../_includes/header.md)]
+
# Azure resiliency technical guidance: recovery from a region-wide service disruption
Azure is divided physically and logically into units called regions. A region consists of one or more datacenters in close proximity.
diff --git a/docs/resiliency/recovery-on-premises-azure.md b/docs/resiliency/recovery-on-premises-azure.md
index 7ea38ea268b..0fe25e5ccbc 100644
--- a/docs/resiliency/recovery-on-premises-azure.md
+++ b/docs/resiliency/recovery-on-premises-azure.md
@@ -5,6 +5,7 @@ author: adamglick
ms.date: 08/18/2016
---
[!INCLUDE [header](../_includes/header.md)]
+
# Azure resiliency technical guidance: Recovery from on-premises to Azure
Azure provides a comprehensive set of services for enabling the extension of an on-premises datacenter to Azure for high availability and disaster recovery purposes:
| | | | | | | | |