CI Update
VSC-Service-Account committed Mar 23, 2018
1 parent dd388c2 commit d10291d
Showing 93 changed files with 845 additions and 737 deletions.
8 changes: 4 additions & 4 deletions docs/antipatterns/busy-database/index.md
@@ -114,10 +114,10 @@ INNER JOIN [Person].[Person] p ON c.[PersonID] = p.[BusinessEntityID]
INNER JOIN [Sales].[SalesOrderDetail] sod ON soh.[SalesOrderID] = sod.[SalesOrderID]
WHERE soh.[TerritoryId] = @TerritoryId
AND soh.[SalesOrderId] IN (
-SELECT TOP 20 SalesOrderId
-FROM [Sales].[SalesOrderHeader] soh
-WHERE soh.[TerritoryId] = @TerritoryId
-ORDER BY soh.[TotalDue] DESC)
+        SELECT TOP 20 SalesOrderId
+        FROM [Sales].[SalesOrderHeader] soh
+        WHERE soh.[TerritoryId] = @TerritoryId
+        ORDER BY soh.[TotalDue] DESC)
ORDER BY soh.[TotalDue] DESC, sod.[SalesOrderDetailID]
```

6 changes: 4 additions & 2 deletions docs/aws-professional/index.md
@@ -48,9 +48,11 @@ account are tied to that account, subscriptions exist independently of their
owner accounts, and can be reassigned to new owners as needed.

![Comparison of structure and ownership of AWS accounts and Azure subscriptions](./images/azure-aws-account-compare.png "Comparison of structure and ownership of AWS accounts and Azure subscriptions")
-<br/>*Comparison of structure and ownership of AWS accounts and Azure subscriptions*
+<br/><em>Comparison of structure and ownership of AWS accounts and Azure subscriptions</em>

<br/><br/>


Subscriptions are assigned three types of administrator accounts:

- **Account Administrator** - The subscription owner and the
@@ -360,7 +362,7 @@ allow you to create and manage the following storage services:
storage](https://azure.microsoft.com/documentation/articles/storage-java-how-to-use-file-storage/) - offers shared storage for legacy applications using the standard server
message block (SMB) protocol. File storage is used in a similar manner to
EFS in the AWS platform.

#### Glacier and Azure Storage

[Azure Archive Blob Storage](/azure/storage/blobs/storage-blob-storage-tiers#archive-access-tier) is comparable to the AWS Glacier storage service. It is intended for rarely accessed data that is stored for at least 180 days and can tolerate several hours of retrieval latency.
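
As a rough sketch of how a blob might be moved into the archive tier (this assumes the Azure.Storage.Blobs .NET SDK; the connection string, container, and blob names are placeholders):

```csharp
using Azure.Storage.Blobs;
using Azure.Storage.Blobs.Models;

class ArchiveTierSketch
{
    static void Main()
    {
        // Placeholder values - replace with your own storage account details.
        var blobClient = new BlobClient(
            "<storage-connection-string>", "backups", "2018-03-23.bak");

        // Move the blob to the archive tier. The data stays listed in the container,
        // but it must be rehydrated (several hours of latency) before it can be read.
        blobClient.SetAccessTier(AccessTier.Archive);
    }
}
```

Rehydrating the blob back to the hot or cool tier is what incurs the retrieval latency noted above.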
276 changes: 134 additions & 142 deletions docs/aws-professional/services.md

Large diffs are not rendered by default.

13 changes: 10 additions & 3 deletions docs/best-practices/monitoring.md
@@ -88,6 +88,7 @@ All timeouts, network connectivity failures, and connection retry attempts must be

<a name="analyzing-availability-data"></a>


### Analyzing availability data
The instrumentation data must be aggregated and correlated to support the following types of analysis:

@@ -199,6 +200,7 @@ A feature of security monitoring is the variety of sources from which the data a

<a name="SLA-monitoring"></a>


## SLA monitoring
Many commercial systems that support paying customers make guarantees about the performance of the system in the form of SLAs. Essentially, SLAs state that the system can handle a defined volume of work within an agreed time frame and without losing critical information. SLA monitoring is concerned with ensuring that the system can meet measurable SLAs.
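
To make the idea of a measurable SLA concrete, the sketch below converts an availability target into the downtime budget it allows over a 30-day window (the 99.9 percent figure is an example, not a specific Azure SLA):

```csharp
using System;

class SlaBudget
{
    static void Main()
    {
        double availabilityTarget = 0.999;           // "99.9% availability"
        TimeSpan window = TimeSpan.FromDays(30);     // measurement window

        // Downtime budget = window * (1 - target).
        var allowedDowntime = TimeSpan.FromTicks(
            (long)(window.Ticks * (1 - availabilityTarget)));

        Console.WriteLine(
            $"A {availabilityTarget:P1} target allows {allowedDowntime.TotalMinutes:F1} " +
            "minutes of downtime per 30 days.");
    }
}
```

For 99.9 percent this works out to roughly 43 minutes per 30 days, which is the kind of threshold that SLA monitoring needs to measure the system against.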

@@ -313,6 +315,7 @@ For metering purposes, you also need to be able to identify which users are resp

<a name="issue-tracking"></a>


## Issue tracking
Customers and other users might report issues if unexpected events or behavior occurs in the system. Issue tracking is concerned with managing these issues, associating them with efforts to resolve any underlying problems in the system, and informing customers of possible resolutions.

@@ -392,16 +395,16 @@ Security issues might occur at any point in the system. For example, a user migh
The section [Instrumenting an application](#instrumenting-an-application) contains more guidance on the information that you should capture. But you can use a variety of strategies to gather this information:

* **Application/system monitoring**. This strategy uses internal sources within the application, application frameworks, operating system, and infrastructure. The application code can generate its own monitoring data at notable points during the lifecycle of a client request. The application can include tracing statements that might be selectively enabled or disabled as circumstances dictate. It might also be possible to inject diagnostics dynamically by using a diagnostics framework. These frameworks typically provide plug-ins that can attach to various instrumentation points in your code and capture trace data at these points.

Additionally, your code and/or the underlying infrastructure might raise events at critical points. Monitoring agents that are configured to listen for these events can record the event information.
* **Real user monitoring**. This approach records the interactions between a user and the application and observes the flow of each request and response. This information can have a two-fold purpose: it can be used for metering usage by each user, and it can be used to determine whether users are receiving a suitable quality of service (for example, fast response times, low latency, and minimal errors). You can use the captured data to identify areas of concern where failures occur most often. You can also use the data to identify elements where the system slows down, possibly due to hotspots in the application or some other form of bottleneck. If you implement this approach carefully, it might be possible to reconstruct users' flows through the application for debugging and testing purposes.

> [!IMPORTANT]
> You should consider the data that's captured by monitoring real users to be highly sensitive because it might include confidential material. If you save captured data, store it securely. If you want to use the data for performance monitoring or debugging purposes, strip out all personally identifiable information first.
>
>
* **Synthetic user monitoring**. In this approach, you write your own test client that simulates a user and performs a configurable but typical series of operations. You can track the performance of the test client to help determine the state of the system. You can also use multiple instances of the test client as part of a load-testing operation to establish how the system responds under stress, and what sort of monitoring output is generated under these conditions. A minimal probe sketch follows the note below.

> [!NOTE]
> You can implement real and synthetic user monitoring by including code that traces and times the execution of method calls and other critical parts of an application.
>
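
As a minimal sketch of the synthetic user monitoring approach (the endpoint URL and the two-second latency threshold are illustrative assumptions):

```csharp
using System;
using System.Diagnostics;
using System.Net.Http;
using System.Threading.Tasks;

class SyntheticProbe
{
    static async Task Main()
    {
        using var client = new HttpClient();
        var stopwatch = Stopwatch.StartNew();

        // Exercise a typical user operation and time it.
        var response = await client.GetAsync("https://example.com/api/orders?top=20");
        stopwatch.Stop();

        // Record the outcome so it can be correlated with other monitoring data.
        Console.WriteLine($"{DateTime.UtcNow:o} probe status={(int)response.StatusCode} " +
                          $"latencyMs={stopwatch.ElapsedMilliseconds}");

        // Flag slow or failed responses for the alerting pipeline.
        if (!response.IsSuccessStatusCode || stopwatch.ElapsedMilliseconds > 2000)
        {
            Console.WriteLine("ALERT: synthetic probe outside expected thresholds");
        }
    }
}
```

Running several instances of this probe, possibly from different regions, also provides the load-testing signal described above.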
@@ -413,6 +416,7 @@ For maximum coverage, you should use a combination of these techniques.

<a name="instrumenting-an-application"></a>


## Instrumenting an application
Instrumentation is a critical part of the monitoring process. You can make meaningful decisions about the performance and health of a system only if you first capture the data that enables you to make these decisions. The information that you gather by using instrumentation should be sufficient to enable you to assess performance, diagnose problems, and make decisions without requiring you to sign in to a remote production server to perform tracing (and debugging) manually. Instrumentation data typically comprises metrics and information that's written to trace logs.
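
As a simple illustration of the kind of data an instrumented operation might emit (the metric names and the use of `System.Diagnostics.Trace` as a sink are assumptions; a real system would typically hand this data to its telemetry library or monitoring agent):

```csharp
using System.Diagnostics;

class OrderService
{
    static int _ordersProcessed;   // metric: a running count of completed operations

    public static void ProcessOrder(string orderId)
    {
        var timer = Stopwatch.StartNew();
        Trace.TraceInformation($"ProcessOrder start orderId={orderId}");

        // ... business logic runs here ...

        timer.Stop();
        _ordersProcessed++;

        // Emit a metric sample and a trace entry that the monitoring pipeline can aggregate later.
        Trace.TraceInformation(
            $"ProcessOrder end orderId={orderId} durationMs={timer.ElapsedMilliseconds} " +
            $"ordersProcessed={_ordersProcessed}");
    }
}
```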

@@ -429,6 +433,7 @@ Metrics will generally be a measure or count of some aspect or resource in the s

<a name="information-for-correlating-data"></a>


### Information for correlating data
You can easily monitor individual system-level performance counters, capture metrics for resources, and obtain application trace information from various log files. But some forms of monitoring require the analysis and diagnostics stage in the monitoring pipeline to correlate the data that's retrieved from several sources. This data might take several forms in the raw data, and the analysis process must be provided with sufficient instrumentation data to be able to map these different forms. For example, at the application framework level, a task might be identified by a thread ID. Within an application, the same work might be associated with the user ID for the user who is performing that task.
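
As a minimal sketch of how an activity ID can be stamped onto every trace entry so that records written by different components can later be joined (the ID format and log layout are assumptions):

```csharp
using System;
using System.Diagnostics;

class RequestHandler
{
    public static void HandleRequest(string userId)
    {
        // One activity ID per client request. Every component that touches the request
        // includes it, so the analysis stage can correlate the records it collects.
        string activityId = Guid.NewGuid().ToString("N");

        Trace.TraceInformation($"activityId={activityId} userId={userId} stage=web request received");
        Trace.TraceInformation($"activityId={activityId} threadId={Environment.CurrentManagedThreadId} stage=business task started");
        Trace.TraceInformation($"activityId={activityId} stage=data query executed");
    }
}
```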

@@ -537,6 +542,7 @@ For scalability, you can run multiple instances of the storage writing service.

<a name="consolidating-instrumentation-data"></a>


#### *Consolidating instrumentation data*
The instrumentation data that the data-collection service retrieves from a single instance of an application gives a localized view of the health and performance of that instance. To assess the overall health of the system, it's necessary to consolidate some aspects of the data in the local views. You can perform this after the data has been stored, but in some cases, you can also achieve it as the data is collected. Rather than being written directly to shared storage, the instrumentation data can pass through a separate data consolidation service that combines data and acts as a filter and cleanup process. For example, instrumentation data that includes the same correlation information such as an activity ID can be amalgamated. (It's possible that a user starts performing a business operation on one node and then gets transferred to another node in the event of node failure, or depending on how load balancing is configured.) This process can also detect and remove any duplicated data (always a possibility if the telemetry service uses message queues to push instrumentation data out to storage). Figure 5 illustrates an example of this structure.
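
The following is a rough sketch of that consolidation step: records are grouped by activity ID, and exact duplicates, such as those redelivered by a message queue, are dropped (the record shape is an assumption):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

record TelemetryRecord(string ActivityId, DateTime Timestamp, string Node, string Message);

static class Consolidator
{
    // Combine records that belong to the same activity and remove duplicates
    // introduced by at-least-once delivery from the telemetry queue.
    public static Dictionary<string, List<TelemetryRecord>> Consolidate(
        IEnumerable<TelemetryRecord> incoming)
    {
        return incoming
            .Distinct()                       // records compare by value, so exact duplicates collapse
            .GroupBy(r => r.ActivityId)
            .ToDictionary(
                g => g.Key,
                g => g.OrderBy(r => r.Timestamp).ToList());
    }
}
```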

@@ -591,6 +597,7 @@ As described in the section [Consolidating instrumentation data](#consolidating-

<a name="supporting-hot-warm-and-cold-analysis"></a>


### Supporting hot, warm, and cold analysis
Analyzing and reformatting data for visualization, reporting, and alerting purposes can be a complex process that consumes its own set of resources. Some forms of monitoring are time-critical and require immediate analysis of data to be effective. This is known as *hot analysis*. Examples include the analyses that are required for alerting and some aspects of security monitoring (such as detecting an attack on the system). Data that's required for these purposes must be quickly available and structured for efficient processing. In some cases, it might be necessary to move the analysis processing to the individual nodes where the data is held.
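
As an illustrative sketch rather than a prescribed design, a dispatcher might push alert-worthy events onto a fast in-memory path for hot analysis while appending everything to cheaper storage for later warm or cold analysis:

```csharp
using System.Collections.Concurrent;
using System.IO;

class TelemetryDispatcher
{
    // Hot path: consumed immediately by the alerting component.
    private readonly BlockingCollection<string> _hotQueue = new BlockingCollection<string>();

    // Cold path: a local append-only file stands in for bulk storage in this sketch.
    private readonly string _coldStorePath = "telemetry-cold.log";

    public void Dispatch(string eventJson, bool isAlertWorthy)
    {
        if (isAlertWorthy)
        {
            _hotQueue.Add(eventJson);                               // needs immediate analysis
        }

        File.AppendAllText(_coldStorePath, eventJson + "\n");       // everything is kept for later analysis
    }
}
```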

3 changes: 1 addition & 2 deletions docs/best-practices/retry-service-specific.md
@@ -474,7 +474,6 @@ public async static Task<SqlDataReader> ExecuteReaderWithRetryAsync(this SqlComm

}, cancellationToken);
}

```

This asynchronous extension method can be used as follows.
@@ -788,7 +787,7 @@ namespace RetryCodeSamples
try
{
                            var retryTimeInMilliseconds = (int)TimeSpan.FromSeconds(4).TotalMilliseconds; // delay between retries (TotalMilliseconds gives the full 4000 ms; Milliseconds would return 0)
// Using object-based configuration.
var options = new ConfigurationOptions
{
1 change: 1 addition & 0 deletions docs/building-blocks/extending-templates/collector.md
@@ -302,6 +302,7 @@ Finally, our `Microsoft.Network/networkSecurityGroups` resource directly assigns
* This technique is implemented in the [template building blocks project](https://github.com/mspnp/template-building-blocks) and the [Azure reference architectures](/azure/architecture/reference-architectures/). You can use these to create your own architecture or deploy one of our reference architectures.

<!-- links -->

[objects-as-parameters]: ./objects-as-parameters.md
[resource-manager-linked-template]: /azure/azure-resource-manager/resource-group-linked-templates
[resource-manager-variables]: /azure/azure-resource-manager/resource-group-template-functions-deployment
@@ -120,6 +120,7 @@ Now that we've worked around the validation issue, we can simply specify the dep
* This technique is implemented in the [template building blocks project](https://github.com/mspnp/template-building-blocks) and the [Azure reference architectures](/azure/architecture/reference-architectures/). You can use these to create your own architecture or deploy one of our reference architectures.

<!-- links -->

[azure-resource-manager-condition]: /azure/azure-resource-manager/resource-group-authoring-templates#resources
[azure-resource-manager-variable]: /azure/azure-resource-manager/resource-group-authoring-templates#variables
[vnet-peering-resource-schema]: /azure/templates/microsoft.network/virtualnetworks/virtualnetworkpeerings
@@ -291,7 +291,6 @@ Now let's take a look at our template. Our first resource named `NSG1` deploys t
],
"outputs": {}
}

```

Let's take a closer look at how we specify our property values in the `securityRules` child resource. All of our properties are referenced using the `parameters()` function, and then we use the dot operator to reference our `securityRules` array, indexed by the current value of the iteration. Finally, we use another dot operator to reference the name of the object.
@@ -300,25 +299,26 @@ Let's take a closer look at how we specify our property values in the `securityR

If you would like to experiment with this template, follow these steps:

1. Go to the Azure portal, select the **+** icon, and search for the **template deployment** resource type, and select it.
2. Navigate to the **template deployment** page and select the **create** button. This button opens the **custom deployment** blade.
3. Select the **edit template** button.
4. Delete the empty template.
5. Copy and paste the sample template into the right pane.
6. Select the **save** button.
7. When you are returned to the **custom deployment** pane, select the **edit parameters** button.
8. On the **edit parameters** blade, delete the existing template.
9. Copy and paste the sample parameter template from above.
10. Select the **save** button, which returns you to the **custom deployment** blade.
11. On the **custom deployment** blade, select your subscription, create a new resource group or use an existing one, and select a location. Review the terms and conditions, and select the **I agree** checkbox.
12. Select the **purchase** button.

## Next steps

* You can expand upon these techniques to implement a [property object transformer and collector](./collector.md). The transformer and collector techniques are more general and can be linked from your templates.
* This technique is also implemented in the [template building blocks project](https://github.com/mspnp/template-building-blocks) and the [Azure reference architectures](/azure/architecture/reference-architectures/). You can review our templates to see how we've implemented this technique.

<!-- links -->

[azure-resource-manager-authoring-templates]: /azure/azure-resource-manager/resource-group-authoring-templates
[azure-resource-manager-create-template]: /azure/azure-resource-manager/resource-manager-create-first-template
[azure-resource-manager-create-multiple-instances]: /azure/azure-resource-manager/resource-group-create-multiple
Expand Down
16 changes: 8 additions & 8 deletions docs/building-blocks/extending-templates/update-resource.md
@@ -120,14 +120,14 @@ Let's take a look at the resource object for our `firstVNet` resource first. Not

If you would like to experiment with this template, follow these steps:

1. Go to the Azure portal, select the **+** icon, and search for the **template deployment** resource type, and select it.
2. Navigate to the **template deployment** page and select the **create** button. This button opens the **custom deployment** blade.
3. Select the **edit** icon.
4. Delete the empty template.
5. Copy and paste the sample template into the right pane.
6. Select the **save** button.
7. You return to the **custom deployment** pane, but this time there are some drop-down list boxes. Select your subscription, create a new resource group or use an existing one, and select a location. Review the terms and conditions, then select the **I agree** button.
8. Select the **purchase** button.

Once deployment has finished, open the resource group you specified in the portal. You see a virtual network named `firstVNet` and a NIC named `nic1`. Click `firstVNet`, then click `subnets`. You see the `firstSubnet` that was originally created, as well as the `secondSubnet` that was added by the `updateVNet` resource.

1 change: 1 addition & 0 deletions docs/checklist/availability.md
@@ -75,4 +75,5 @@ Availability is the proportion of time that a system is functional and working,
**Plan for disaster recovery.** Create an accepted, fully-tested plan for recovery from any type of failure that may affect system availability. Choose a multi-site disaster recovery architecture for any mission-critical applications. Identify a specific owner of the disaster recovery plan, including automation and testing. Ensure the plan is well-documented, and automate the process as much as possible. Establish a backup strategy for all reference and transactional data, and test the restoration of these backups regularly. Train operations staff to execute the plan, and perform regular disaster simulations to validate and improve the plan.

<!-- links -->

[availability-sets]:/azure/virtual-machines/virtual-machines-windows-manage-availability/
2 changes: 1 addition & 1 deletion docs/checklist/dev-ops.md
@@ -126,7 +126,7 @@ Shared documentation is critical. Encourage team members to contribute and share

**Follow least-privilege principles when granting access to resources.** Carefully manage access to resources. Access should be denied by default, unless a user is explicitly given access to a resource. Only grant a user access to what they need to complete their tasks. Track user permissions and perform regular security audits.

-**Use role-based access control.** Assigning user accounts and access to resources should not be a manual process. Use [Role-Based Access Control][rbac] (RBAC) to grant access based on [Azure Active Directory][azure-ad] identities and groups.
+<strong>Use role-based access control.</strong> Assigning user accounts and access to resources should not be a manual process. Use [Role-Based Access Control][rbac] (RBAC) to grant access based on [Azure Active Directory][azure-ad] identities and groups.

**Use a bug tracking system to track issues.** Without a good way to track issues, it's easy to miss items, duplicate work, or introduce additional problems. Don't rely on informal person-to-person communication to track the status of bugs. Use a bug tracking tool to record details about problems, assign resources to address them, and provide an audit trail of progress and status.

1 change: 1 addition & 0 deletions docs/checklist/resiliency-per-service.md
@@ -138,6 +138,7 @@ If you are using Redis Cache as a temporary data cache and not as a persistent s
**Enable Load Balancer logging.** The logs show how many VMs on the back-end are not receiving network traffic due to failed probe responses. For more information, see [Log analytics for Azure Load Balancer](/azure/load-balancer/load-balancer-monitor-log/).

<!-- links -->

[boot-diagnostics]: https://azure.microsoft.com/blog/boot-diagnostics-for-virtual-machines-v2/
[diagnostics-logs]: /azure/monitoring-and-diagnostics/monitoring-overview-of-diagnostic-logs/
[managed-disks]: /azure/storage/storage-managed-disks-overview
1 change: 1 addition & 0 deletions docs/checklist/resiliency.md
@@ -158,6 +158,7 @@ Resiliency is the ability of a system to recover from failures and continue to f


<!-- links -->

[app-service-autoscale]: /azure/monitoring-and-diagnostics/insights-how-to-scale/
[asynchronous-c-sharp]: /dotnet/articles/csharp/async
[availability-sets]:/azure/virtual-machines/virtual-machines-windows-manage-availability/