diff --git a/docs/cloud-adoption/migrate/azure-best-practices/contoso-migration-scale.md b/docs/cloud-adoption/migrate/azure-best-practices/contoso-migration-scale.md
index d2a301080a4..77b43d2f3fd 100644
--- a/docs/cloud-adoption/migrate/azure-best-practices/contoso-migration-scale.md
+++ b/docs/cloud-adoption/migrate/azure-best-practices/contoso-migration-scale.md
@@ -280,7 +280,7 @@ DMS isn't the only Microsoft database migration tool. Get a [comparison of tools

 Contoso will use DMS when migrating from SQL Server.

-- When provisioning DMS, Contoso needs to ensure that it's sized correctly, and set to optimize performance for data migrations. Contoso will select the "business-critical tier with 4 vCores" option, thus allowing the service to take advantage of multiple vCPUs for parallelization and faster data transfer.
+- When provisioning DMS, Contoso needs to size it correctly and set it to optimize performance for data migrations. Contoso will select the "business-critical tier with 4 vCores" option, thus allowing the service to take advantage of multiple vCPUs for parallelization and faster data transfer.

 ![DMS scaling](./media/contoso-migration-scale/dms.png)

diff --git a/docs/data-guide/technology-choices/r-developers-guide.md b/docs/data-guide/technology-choices/r-developers-guide.md
index b8d17fea623..79c33a87847 100644
--- a/docs/data-guide/technology-choices/r-developers-guide.md
+++ b/docs/data-guide/technology-choices/r-developers-guide.md
@@ -175,7 +175,7 @@ in the cloud easily and economically.

 [Azure Notebooks](https://notebooks.azure.com) is a low-cost, low-friction method for R developers who prefer working with notebooks to bring their code to Azure. It is a free service for anyone to develop and run code in their browser using [Jupyter](https://jupyter.org/), which is an open-source project that enables combing markdown prose, executable code, and graphics onto a single canvas.

-The free service tier of Azure Notebooks is a viable option for small-scale projects, as it limits each notebook's process to 4GB of memory and 1GB data sets. If you need compute and data power beyond these limitations, however, you can run notebooks in a Data Science Virtual Machine instance. For more information, see [Manage and configure Azure Notebooks projects - Compute tier](/azure/notebooks/configure-manage-azure-notebooks-projects#compute-tier).
+The free service tier of Azure Notebooks is a viable option for small-scale projects, as it limits each notebook's process to 4 GB of memory and 1 GB data sets. If you need compute and data power beyond these limitations, however, you can run notebooks in a Data Science Virtual Machine instance. For more information, see [Manage and configure Azure Notebooks projects - Compute tier](/azure/notebooks/configure-manage-azure-notebooks-projects#compute-tier).

 ## Azure SQL Database

diff --git a/docs/example-scenario/apps/sap-dev-test.md b/docs/example-scenario/apps/sap-dev-test.md
index 0df87a23860..f064c51a804 100644
--- a/docs/example-scenario/apps/sap-dev-test.md
+++ b/docs/example-scenario/apps/sap-dev-test.md
@@ -13,7 +13,7 @@ social_image_url: /azure/architecture/example-scenario/apps/media/architecture-s

 # Dev/test environments for SAP workloads on Azure

-This example shows how to establish a dev/test environment for SAP NetWeaver in a Windows or Linux environment on Azure. The database used is AnyDB, the SAP term for any supported DBMS (that isn't SAP HANA). Because this architecture is designed for non-production environments, it's deployed with just a single virtual machine (VM) and it's size can be changed to accommodate your organization's needs.
+This example shows how to establish a dev/test environment for SAP NetWeaver in a Windows or Linux environment on Azure. The database used is AnyDB, the SAP term for any supported DBMS (that isn't SAP HANA). Because this architecture is designed for non-production environments, it's deployed with only one virtual machine (VM), and the virtual machine size can be changed to accommodate your organization's needs.

 For production use cases review the SAP reference architectures available below:

@@ -41,11 +41,11 @@ This scenario demonstrates provisioning a single SAP system database and SAP app

 ### Components

-- [Virtual Networks](/azure/virtual-network/virtual-networks-overview) are the basis of network communication within Azure.
-- [Virtual Machine](/azure/virtual-machines/windows/overview) Azure Virtual Machines provides on-demand, high-scale, secure, virtualized infrastructure using Windows or Linux Server.
-- [ExpressRoute](/azure/expressroute/expressroute-introduction) lets you extend your on-premises networks into the Microsoft cloud over a private connection facilitated by a connectivity provider.
-- [Network Security Group](/azure/virtual-network/security-overview) lets you limit network traffic to resources in a virtual network. A network security group contains a list of security rules that allow or deny inbound or outbound network traffic based on source or destination IP address, port, and protocol.
-- [Resource Groups](/azure/azure-resource-manager/resource-group-overview#resource-groups) act as logical containers for Azure resources.
+- [Virtual networks](/azure/virtual-network/virtual-networks-overview) are the basis of network communication within Azure.
+- [Azure Virtual Machines](/azure/virtual-machines/windows/overview) provide on-demand, high-scale, secure, virtualized infrastructure using Windows or Linux servers.
+- [ExpressRoute](/azure/expressroute/expressroute-introduction) extends your on-premises networks into the Microsoft cloud over a private connection facilitated by a connectivity provider.
+- [Network security groups](/azure/virtual-network/security-overview) limit network traffic to specific resources in a virtual network. A network security group contains a list of security rules that allow or deny inbound or outbound network traffic based on source or destination IP address, port, and protocol.
+- [Resource groups](/azure/azure-resource-manager/resource-group-overview#resource-groups) act as logical containers for Azure resources.

 ## Considerations

@@ -81,10 +81,10 @@ Extra Large|64000|M64s|4xP20, 1xP10|[Extra Large](https://azure.com/e/975fb58a96
 > [!NOTE]
 > This pricing is a guide that only indicates the VMs and storage costs. It excludes networking, backup storage, and data ingress/egress charges.

-- [Small](https://azure.com/e/9d26b9612da9466bb7a800eab56e71d1): A small system consists of VM type D8s_v3 with 8x vCPUs, 32-GB RAM and 200-GB temp storage, additionally two 512 GB and one 128-GB premium storage disk.
-- [Medium](https://azure.com/e/465bd07047d148baab032b2f461550cd): A medium system consists of VM type D16s_v3 with 16x vCPUs, 64-GB RAM and 400-GB temp storage, additionally three 512 GB and one 128-GB premium storage disk.
-- [Large](https://azure.com/e/ada2e849d68b41c3839cc976000c6931): A large system consists of VM type E32s_v3 with 32x vCPUs, 256-GB RAM and 512-GB temp storage, additionally three 512GB and one 128-GB premium storage disk.
-- [Extra Large](https://azure.com/e/975fb58a965c4fbbb54c5c9179c61cef): An extra large system consists of a VM type M64s with 64x vCPUs, 1024-GB RAM and 2000-GB temp storage, additionally four 512 GB and one 128-GB premium storage disk.
+- [Small](https://azure.com/e/9d26b9612da9466bb7a800eab56e71d1): A small system consists of VM type D8s_v3 with 8x vCPUs, 32-GB RAM, and 200 GB of temporary storage, along with two 512 GB and one 128-GB premium storage disk.
+- [Medium](https://azure.com/e/465bd07047d148baab032b2f461550cd): A medium system consists of VM type D16s_v3 with 16x vCPUs, 64-GB RAM, and 400 GB of temporary storage, along with three 512-GB and one 128-GB premium storage disk.
+- [Large](https://azure.com/e/ada2e849d68b41c3839cc976000c6931): A large system consists of VM type E32s_v3 with 32x vCPUs, 256-GB RAM, and 512-GB of temporary storage, along with three 512-GB and one 128-GB premium storage disk.
+- [Extra Large](https://azure.com/e/975fb58a965c4fbbb54c5c9179c61cef): An extra-large system consists of a VM type M64s with 64x vCPUs, 1024-GB RAM, and 2000 GB of temporary storage, along with four 512-GB and one 128-GB premium storage disk.

 ## Deployment

@@ -96,6 +96,7 @@ Click the link below to deploy the solution.

 > SAP and Oracle are not installed during this deployment. You will need to deploy these components separately.

+
 [resiliency]: /azure/architecture/resiliency/
 [security]: /azure/security/
 [scalability]: /azure/architecture/checklist/scalability
diff --git a/docs/example-scenario/apps/sap-production.md b/docs/example-scenario/apps/sap-production.md
index e9485085fc0..c7e630bb944 100644
--- a/docs/example-scenario/apps/sap-production.md
+++ b/docs/example-scenario/apps/sap-production.md
@@ -87,7 +87,7 @@ Extra Large|250000|M64s|6xP30, 1xP30|DS11_v2|1x P10|10x DS14_v2|1x P10|[Extra La

 - [Large](https://azure.com/e/f70fccf571e948c4b37d4fecc07cbf42): A large system consists of VM type E32s_v3 for the database server with 32x vCPUs, 256-GB RAM and 800-GB temp storage, additionally three 512 GB and one 128-GB premium storage disks. An SAP Central Instance server using a DS11_v2 VM types with 2x vCPUs 14-GB RAM and 28-GB temp storage. Six VM type DS14_v2 for the SAP application servers with 16x vCPUs, 112 GB RAM, and 224 GB temp storage, additionally six 128-GB premium storage disk.

-- [Extra Large](https://azure.com/e/58c636922cf94faf9650f583ff35e97b): An extra large system consists of the M64s VM type for the database server with 64x vCPUs, 1024 GB RAM, and 2000 GB temp storage, additionally seven 1024-GB premium storage disks. An SAP Central Instance server using a DS11_v2 VM types with 2x vCPUs 14-GB RAM and 28-GB temp storage. 10 VM type DS14_v2 for the SAP application servers with 16x vCPUs, 112 GB RAM, and 224 GB temp storage, additionally ten 128-GB premium storage disk.
+- [Extra Large](https://azure.com/e/58c636922cf94faf9650f583ff35e97b): An extra-large system consists of the M64s VM type for the database server with 64x vCPUs, 1024 GB RAM, and 2000 GB temp storage, additionally seven 1024-GB premium storage disks. An SAP Central Instance server using a DS11_v2 VM types with 2x vCPUs 14-GB RAM and 28-GB temp storage. 10 VM type DS14_v2 for the SAP application servers with 16x vCPUs, 112 GB RAM, and 224 GB temp storage, additionally ten 128-GB premium storage disk.

 ## Deployment

diff --git a/docs/example-scenario/data/data-warehouse.md b/docs/example-scenario/data/data-warehouse.md
index 0fa3491aa48..2deed1c39be 100644
--- a/docs/example-scenario/data/data-warehouse.md
+++ b/docs/example-scenario/data/data-warehouse.md
@@ -68,7 +68,7 @@ Data is loaded from these different data sources using several Azure components:
 - Data Factory orchestrates the workflows for your data pipeline. If you want to load data only one time or on demand, you could use tools like SQL Server bulk copy (bcp) and AzCopy to copy data into Blob storage. You can then load the data directly into SQL Data Warehouse using Polybase.
 - If you have very large datasets, consider using [Data Lake Storage](/azure/storage/data-lake-storage/introduction), which provides limitless storage for analytics data.
 - An on-premises [SQL Server Parallel Data Warehouse](/sql/analytics-platform-system) appliance can also be used for big data processing. However, operating costs are often much lower with a managed cloud-based solution like SQL Data Warehouse.
-- SQL Data Warehouse is not a good fit for OLTP workloads or data sets smaller than 250GB. For those cases you should use Azure SQL Database or SQL Server.
+- SQL Data Warehouse is not a good fit for OLTP workloads or data sets smaller than 250 GB. For those cases you should use Azure SQL Database or SQL Server.
 - For comparisons of other alternatives, see:

   - [Choosing a data pipeline orchestration technology in Azure](/azure/architecture/data-guide/technology-choices/pipeline-orchestration-data-movement)
diff --git a/docs/reference-architectures/data/enterprise-bi-sqldw.md b/docs/reference-architectures/data/enterprise-bi-sqldw.md
index 291b9ac63d0..a330c48e9f2 100644
--- a/docs/reference-architectures/data/enterprise-bi-sqldw.md
+++ b/docs/reference-architectures/data/enterprise-bi-sqldw.md
@@ -105,7 +105,7 @@ Loading the data is a two-step process:

 **Recommendations:**

-Consider SQL Data Warehouse when you have large amounts of data (more than 1 TB) and are running an analytics workload that will benefit from parallelism. SQL Data Warehouse is not a good fit for OLTP workloads or smaller data sets (< 250GB). For data sets less than 250GB, consider Azure SQL Database or SQL Server. For more information, see [Data warehousing](../../data-guide/relational-data/data-warehousing.md).
+Consider SQL Data Warehouse when you have large amounts of data (more than 1 TB) and are running an analytics workload that will benefit from parallelism. SQL Data Warehouse is not a good fit for OLTP workloads or smaller data sets (less than 250 GB). For data sets less than 250 GB, consider Azure SQL Database or SQL Server. For more information, see [Data warehousing](../../data-guide/relational-data/data-warehousing.md).

 Create the staging tables as heap tables, which are not indexed. The queries that create the production tables will result in a full table scan, so there is no reason to index the staging tables.
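As a quick illustration of the staging-table guidance in the enterprise-bi-sqldw.md hunk above, here is a minimal T-SQL sketch of the pattern it describes: a heap staging table with no index, followed by a CREATE TABLE AS SELECT (CTAS) statement that builds the production table from a full scan of that heap. The schema and table names (`stg.FactSales`, `prd.FactSales`) and the columns are hypothetical placeholders, not part of the reference architecture.

```sql
-- Hypothetical staging table, created as a heap (no index), per the recommendation above.
CREATE TABLE stg.FactSales
(
    SaleId   BIGINT        NOT NULL,
    StoreId  INT           NOT NULL,
    SaleDate DATE          NOT NULL,
    Amount   DECIMAL(18,2) NOT NULL
)
WITH
(
    DISTRIBUTION = ROUND_ROBIN,
    HEAP
);

-- Production table built with CTAS; it reads every staging row in a full scan,
-- so indexing the staging table would add load time without speeding up this query.
CREATE TABLE prd.FactSales
WITH
(
    DISTRIBUTION = HASH(StoreId),
    CLUSTERED COLUMNSTORE INDEX
)
AS
SELECT SaleId, StoreId, SaleDate, Amount
FROM stg.FactSales;
```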