Merge pull request #412 from sharad4u/lbchoices

Update application architecture guide to include load balancing choices

v-albemi authored Aug 13, 2019
2 parents db76f97 + 8b00913 commit 3a4c65d
Showing 7 changed files with 115 additions and 7 deletions.
Binary file added docs/guide/images/load-balancing-choices.png
Binary file added docs/guide/images/load-balancing-decision-tree.png
8 changes: 5 additions & 3 deletions docs/guide/index.md
@@ -32,7 +32,7 @@ Relational database<br/>
Strong consistency<br/>
Serial and synchronized processing<br/>
Design to avoid failures (MTBF)<br/>
Occasional big updates<br/>
Occasional large updates<br/>
Manual management<br/>
Snowflake servers</td>
<td>
@@ -67,16 +67,18 @@ Learn more:

### Technology choices

Two technology choices should be decided early on, because they affect the entire architecture. These are the choice of compute service and data stores. *Compute* refers to the hosting model for the computing resources that your applications runs on. *Data stores* includes databases but also storage for message queues, caches, logs, and anything else that an application might persist to storage.
Two technology choices should be decided early on, because they affect the entire architecture. These are the choice of compute service and data stores. *Compute* refers to the hosting model for the computing resources that your applications run on. *Data stores* include databases but also storage for message queues, caches, logs, and anything else that an application might persist to storage.

Learn more:

- [Choosing a compute service](./technology-choices/compute-overview.md)
- [Choosing a data store](./technology-choices/data-store-overview.md)

Depending on your application's requirements, you will likely also need to choose the right load-balancing services for your application early in the architectural discussions. *Load balancing* defines how the traffic for your application is distributed to your compute service. Learn more at [Choosing a load balancing service](./technology-choices/load-balancing-overview.md).

### Design principles

We have identified ten high-level design principles that will make your application more scalable, resilient, and manageable. These design principles apply to any architecture styles. Throughout the design process, keep these ten high-level design principles in mind. Then consider the set of best practices for specific aspects of the architecture, such as auto-scaling, caching, data partitioning, API design, and others.
We have identified ten high-level design principles that will make your application more scalable, resilient, and manageable. These design principles apply to any architecture style. Throughout the design process, keep these ten high-level design principles in mind. Then consider the set of best practices for specific aspects of the architecture, such as autoscaling, caching, data partitioning, API design, and others.

Learn more:

8 changes: 4 additions & 4 deletions docs/guide/technology-choices/compute-comparison.md
@@ -12,7 +12,7 @@ ms.custom: seojan19

# Criteria for choosing an Azure compute service

The term *compute* refers to the hosting model for the computing resources that your applications runs on. The following tables compare Azure compute services across several axes. Refer to these tables when selecting a compute option for your application.
The term *compute* refers to the hosting model for the computing resources that your applications run on. The following tables compare Azure compute services across several axes. Refer to these tables when selecting a compute option for your application.

## Hosting model

@@ -43,7 +43,7 @@ Notes
| Criteria | Virtual Machines | App Service | Service Fabric | Azure Functions | Azure Kubernetes Service | Container Instances | Azure Batch |
|----------|-----------------|-------------|----------------|-----------------|-------------------------|----------------|-------------|
| Local debugging | Agnostic | IIS Express, others <a href="#note1b"><sup>1</sup></a> | Local node cluster | Visual Studio or Azure Functions CLI | Minikube, others | Local container runtime | Not supported |
| Programming model | Agnostic | Web and API applications, WebJobs for background tasks | Guest executable, Service model, Actor model, Containers | Functions with triggers | Agnostic | Agnostic | Command line application |
| Programming model | Agnostic | Web and API applications, WebJobs for background tasks | Guest executable, Service model, Actor model, Containers | Functions with triggers | Agnostic | Agnostic | Command-line application |
| Application update | No built-in support | Deployment slots | Rolling upgrade (per service) | Deployment slots | Rolling update | Not applicable |

Notes
@@ -57,7 +57,7 @@ Notes
|----------|-----------------|-------------|----------------|-----------------|-------------------------|----------------|-------------|
| Autoscaling | Virtual machine scale sets | Built-in service | Virtual machine scale sets | Built-in service | Not supported | Not supported | N/A |
| Load balancer | Azure Load Balancer | Integrated | Azure Load Balancer | Integrated | Integrated | No built-in support | Azure Load Balancer |
| Scale limit<a href="#note1c"><sup>1</sup></a> | Platform image: 1000 nodes per VMSS, Custom image: 100 nodes per VMSS | 20 instances, 100 with App Service Environment | 100 nodes per VMSS | 200 instances per Function app | 100 nodes per cluster (default limit) |20 container groups per subscription (default limit). | 20 core limit (default limit). |
| Scale limit<a href="#note1c"><sup>1</sup></a> | Platform image: 1000 nodes per virtual machine scale set, Custom image: 100 nodes per virtual machine scale set | 20 instances, 100 with App Service Environment | 100 nodes per virtual machine scale set | 200 instances per Function app | 100 nodes per cluster (default limit) | 20 container groups per subscription (default limit) | 20 core limit (default limit) |

Notes

@@ -68,7 +68,7 @@ Notes
| Criteria | Virtual Machines | App Service | Service Fabric | Azure Functions | Azure Kubernetes Service | Container Instances | Azure Batch |
|----------|-----------------|-------------|----------------|-----------------|-------------------------|----------------|-------------|
| SLA | [SLA for Virtual Machines][sla-vm] | [SLA for App Service][sla-app-service] | [SLA for Service Fabric][sla-sf] | [SLA for Functions][sla-functions] | [SLA for AKS][sla-acs] | [SLA for Container Instances](https://azure.microsoft.com/support/legal/sla/container-instances/) | [SLA for Azure Batch][sla-batch] |
| Multi region failover | Traffic manager | Traffic manager | Traffic manager, Multi-Region Cluster | Not supported | Traffic manager | Not supported | Not Supported |
| Multi region failover | Azure Front Door (HTTP/HTTPS) <br/> Traffic Manager (other) | Azure Front Door | Azure Front Door (HTTP/HTTPS) <br/> Traffic Manager (other), Multi-Region Cluster | Azure Front Door | Azure Front Door (HTTP/HTTPS) <br/> Traffic Manager (other) | Not supported | Not supported |

For guided learning on Service Guarantees, review [Core Cloud Services - Azure architecture and service guarantees](/learn/modules/explore-azure-infrastructure).

42 changes: 42 additions & 0 deletions docs/guide/technology-choices/load-balancing-decision-tree.md
@@ -0,0 +1,42 @@
---
title: Decision tree for load balancing in Azure
titleSuffix: Azure Application Architecture Guide
description: A flowchart for selecting load-balancing services in Azure.
author: sharadag
ms.date: 05/25/2019
ms.topic: guide
ms.service: architecture-center
ms.subservice: reference-architecture
ms.custom: sharad4u
---

# Decision tree for load balancing in Azure

Azure provides a variety of load-balancing solutions that you can use to distribute traffic across your application endpoints: virtual machines, containers, Kubernetes clusters, or App Services, whether in-region within a virtual network or across different Azure regions. The following flowchart helps you choose a load-balancing solution for your application by guiding you through a set of key decision criteria to reach a recommendation.

**Treat this flowchart as a starting point.** Every application has unique requirements, so use the recommendation only as a first step. Then perform a more detailed evaluation, looking at aspects such as:

- Feature set
- [Service limits](/azure/azure-subscription-service-limits)
- [Cost](https://azure.microsoft.com/pricing/)
- [SLA](https://azure.microsoft.com/support/legal/sla/)
- [Regional availability](https://azure.microsoft.com/global-infrastructure/services/)

If your application consists of multiple workloads, evaluate each workload separately. A complete solution may incorporate two or more load-balancing services.

## Flowchart

![Decision tree for load balancing in Azure](../images/load-balancing-decision-tree.png)
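
If you prefer to read the logic as code, the following minimal sketch illustrates the kind of decision criteria the flowchart walks through. It is an illustration under assumptions, not the authoritative tree: the criteria (traffic type and multi-region deployment) are taken from the prose in this guide and the companion overview article, and the function name is hypothetical.

```python
# Illustrative sketch only; the published flowchart is the authoritative reference.
# The criteria here (HTTP/S traffic, multi-region deployment) come from the prose
# in this guide; branch order and details in the real flowchart may differ.

def suggest_load_balancers(http_traffic: bool, multi_region: bool) -> list:
    """Return a starting-point list of Azure load-balancing services."""
    services = []
    if http_traffic:
        if multi_region:
            services.append("Azure Front Door")    # global HTTP/HTTPS entry point
        services.append("Application Gateway")     # regional layer 7 load balancing
    else:
        if multi_region:
            services.append("Traffic Manager")     # DNS-based global routing
        services.append("Azure Load Balancer")     # regional layer 4 (TCP/UDP)
    return services


if __name__ == "__main__":
    # Example: a web application deployed across two Azure regions.
    print(suggest_load_balancers(http_traffic=True, multi_region=True))
    # ['Azure Front Door', 'Application Gateway']
```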

## Definitions

- **"Internet facing"** applications are the ones that are publicly accessible from the internet. This is an application architecture choice that is common for consumer as well as business applications. As a best practice, application owners apply restrictive access policies or protect the application by setting up offerings like web application firewall and DDoS protection.

- **PaaS**: Platform as a service (PaaS) is a complete development and deployment environment in the cloud, with resources that enable you to deliver everything from simple cloud-based apps to sophisticated, cloud-enabled enterprise applications. PaaS is designed to support the complete web application lifecycle: building, testing, deploying, managing, and updating. PaaS allows you to avoid the expense and complexity of buying and managing software licenses, the underlying application infrastructure and middleware, or the development tools and other resources. You manage the applications and services you develop, and the cloud service provider typically manages everything else.

- **IaaS**: Infrastructure as a service (IaaS) is an instant computing infrastructure, provisioned and managed over the internet. IaaS quickly scales up and down with demand, letting you pay only for what you use. It helps you avoid the expense and complexity of buying and managing your own physical servers and other datacenter infrastructure. Azure manages the infrastructure, while you purchase, install, configure, and manage your own software: operating systems, middleware, and applications.


## Next steps

For additional context on these different load-balancing services, see [Overview of load balancing options in Azure](./load-balancing-overview.md).
58 changes: 58 additions & 0 deletions docs/guide/technology-choices/load-balancing-overview.md
@@ -0,0 +1,58 @@
---
title: Overview of Azure load-balancing options
titleSuffix: Azure Application Architecture Guide
description: An overview of Azure load-balancing options.
author: sharad4u
ms.date: 08/03/2019
ms.topic: guide
ms.service: architecture-center
ms.subservice: reference-architecture
ms.custom: seojan19
---

# Overview of load-balancing options in Azure

The term *load balancing* refers to the distribution of workloads across multiple computing resources that your application runs on. Load balancing aims to optimize resource use, maximize throughput, minimize response time, and avoid overload of any single resource.
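
As a toy illustration of the concept only (this is not how any Azure service is implemented), the following sketch spreads hypothetical requests across a pool of backends with a simple round-robin policy:

```python
# Toy illustration of the load-balancing concept; not how Azure services work internally.
from itertools import cycle

backends = ["backend-1", "backend-2", "backend-3"]  # hypothetical endpoints
next_backend = cycle(backends)                      # simple round-robin policy

for request_id in range(6):
    # Each incoming request is sent to the next backend in turn,
    # so no single resource receives all of the traffic.
    print(f"request {request_id} -> {next(next_backend)}")
```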

## Overview

Azure provides multiple global and regional services for managing how your application's traffic is distributed and load balanced: **Traffic Manager**, **Azure Front Door**, **Application Gateway**, and **Azure Load Balancer**. Together with Azure's many regions and zonal architecture, these services enable you to build robust, scalable, high-performance applications.

![Load-balancing choices in Azure](../images/load-balancing-choices.png)

These services are broken into two categories:

- **Global load-balancing services** such as Traffic Manager and Front Door distribute traffic from your end users across your regional backends, across clouds, or even to your hybrid on-premises services. Global load balancing routes your traffic to the closest service backend and reacts to changes in service reliability or performance to maintain always-on, maximal performance for your users. You can think of global load-balancing services as systems that load balance between your application stamps, endpoints, or scale units hosted across different regions or geographies.

- **Regional load-balancing services** such as Standard Load Balancer or Application Gateway distribute traffic within a virtual network across your virtual machines (VMs) or zonal and zone-redundant service endpoints within a region. You can think of regional load balancers as systems that load balance between your virtual machines, containers, or clusters within a region in a virtual network.

These load balancers can also be categorized by the type of application workload that they handle:

- **HTTP/HTTPS load-balancing services** such as Front Door and Application Gateway are load balancers for web applications; that is, they handle only HTTP/HTTPS traffic. They offer key layer 7 benefits such as SSL offload, web application firewall, path-based load balancing, and session affinity.

- **Non-HTTP/S load-balancing services** such as Traffic Manager and Load Balancer are recommended for non-web workloads, that is, non-HTTP/HTTPS traffic. Traffic Manager is a DNS-based load-balancing service, so it can load balance only at the domain level. Because it operates at the DNS level, Traffic Manager also cannot fail over as quickly as Front Door, due to common challenges around DNS caching and systems not honoring DNS TTLs. Azure Load Balancer handles TCP and UDP traffic and offers in-region load balancing with low latency and high throughput.

Combining global and regional services in your application provides a reliable, performant, and secure end-to-end way to route traffic between your users and your IaaS, PaaS, or on-premises services. The next section briefly describes each of these services and their key differences.
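
As an illustration of how the two pivots above (global versus regional, HTTP/HTTPS versus non-HTTP(S)) might combine per workload, the following sketch pairs a scope and a traffic type with a possible starting set of services. The mapping is an assumption for illustration, not an official recommendation.

```python
# Illustrative mapping only, derived from the prose above; evaluate each workload
# against feature set, limits, cost, and SLA before settling on services.
STARTING_POINTS = {
    # (scope, traffic type): possible starting services
    ("global", "HTTP/HTTPS"): ["Azure Front Door", "Application Gateway"],
    ("global", "non-HTTP(S)"): ["Traffic Manager", "Azure Load Balancer"],
    ("regional", "HTTP/HTTPS"): ["Application Gateway"],
    ("regional", "non-HTTP(S)"): ["Azure Load Balancer"],
}

print(STARTING_POINTS[("global", "HTTP/HTTPS")])
# ['Azure Front Door', 'Application Gateway']
```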

## Load-balancing services in Azure

Here are the main load-balancing services currently available in Azure:

- [Application Gateway](https://docs.microsoft.com/azure/application-gateway/overview) provides an application delivery controller (ADC) as a service, offering various Layer 7 load-balancing capabilities for your application. It allows customers to optimize web farm productivity by offloading CPU-intensive SSL termination to the application gateway.
- [Azure Front Door Service](https://docs.microsoft.com/azure/frontdoor/front-door-overview) is an application delivery network that provides global load balancing and site acceleration for web applications. It offers various Layer 7 capabilities such as SSL offload, path-based routing, fast failover, and caching to improve the performance and high availability of your applications.
- [Azure Load Balancer](https://docs.microsoft.com/azure/load-balancer/load-balancer-overview) is an integral part of the Azure SDN stack, providing high-performance, low-latency Layer 4 load-balancing services for all UDP and TCP protocols.
- [Traffic Manager](https://docs.microsoft.com/azure/traffic-manager/traffic-manager-overview) is a DNS-based traffic load balancer that enables you to distribute traffic optimally to services across global Azure regions, while providing high availability and responsiveness.

When selecting a load-balancing option, consider the following factors:

- **Traffic type**: Is it a web (HTTP/HTTPS) application? Is it public facing or a private application?
- **Global vs. regional**: Are you looking at load balancing your VMs or containers within a VNET, or your scale unit/deployments across regions, or both?
- **Scalability**: How does the service handle adding or removing instances? Can it autoscale based on load and other metrics?
- **Availability**: What is the service SLA?
- **Cost**: In addition to the cost of the service itself, consider the operations cost for managing a solution built on that service. For example, IaaS solutions might have a higher operations cost.
- What are the overall limitations of each service?
- What kind of application architectures are appropriate for this service?

## Next steps

To help select the right set of load-balancing services for your application, use the [Decision tree for load balancing applications](./load-balancing-decision-tree.md).
6 changes: 6 additions & 0 deletions docs/toc.yml
@@ -54,6 +54,12 @@ items:
href: guide/technology-choices/compute-decision-tree.md
- name: Compute comparison
href: guide/technology-choices/compute-comparison.md
- name: Choosing a load balancing service
items:
- name: Overview
href: guide/technology-choices/load-balancing-overview.md
- name: Decision tree
href: guide/technology-choices/load-balancing-decision-tree.md
- name: Best Practices
items:
- name: API design
