Azure Virtual Machines

Platform as a service (PaaS) is an attractive option for a certain category of workloads. However, not every solution can, or should, fit within the PaaS model. Some workloads require near-total control over the infrastructure: operating system configuration, disk persistence, the ability to install and configure traditional server software, and so on. This is where infrastructure as a service (IaaS) and Azure Virtual Machines come into the picture.

What is Azure Virtual Machines?

Azure Virtual Machines is one of the central features of Azure's IaaS capabilities, together with Azure Virtual Networks. Azure Virtual Machines supports the deployment of Windows or Linux virtual machines (VMs) in a Microsoft Azure datacenter. You have total control over the configuration of the VM. You are responsible for all server software installation, configuration, and maintenance and for operating system patches.

Note The terminology used to describe the Azure Virtual Machines feature and a virtual machine instance can be a little confusing. Therefore, throughout this chapter, Azure Virtual Machines will refer to the feature, while virtual machine or VM will refer to an instance of an actual compute node.

There are two primary differences between Azure's PaaS and IaaS compute features: persistence and control. As discussed in Chapter 2, "Azure App Service and Web Apps," PaaS features such as Cloud Services (that is, web and worker roles) and App Services are managed primarily by the Azure platform, allowing you to focus on creating the application and not managing the server infrastructure. With an Azure Virtual Machines VM, you are responsible for nearly all aspects of the VM.

Azure Virtual Machines supports two types of durable (or persistent) disks: OS disks and data disks. An OS disk is required, and data disks are optional. The durability for the disks is provided by Azure Storage. More details on these disks will be provided later in this chapter, but for now understand that the OS disk is where the operating system resides (Windows or Linux) and the data disk is where you can store other things, such as application data, images, and so on. By contrast, Azure PaaS cloud services use ephemeral disks attached to the physical host—the data on which can be lost in the event of a failure of the physical host.

Because of the level of control afforded to the user and the use of durable disks, VMs are ideal for a wide range of server workloads that do not fit into a PaaS model. Server workloads such as database servers (SQL Server, Oracle, MongoDB, and so on), Windows Server Active Directory, Microsoft SharePoint, and many more become possible to run on the Microsoft Azure platform. If desired, users can move such workloads from an on-premises datacenter to one or more Azure regions, a process often called lift and shift.


Azure Virtual Machines is priced on a per-hour basis, but it is billed on a per-minute basis. For example, you are only charged for 23 minutes of usage if the VM is deployed for 23 minutes. The cost for a VM includes the charge for the Windows operating system. Linux-based instances are slightly cheaper because there is no operating system license charge. The cost, and the appropriate licensing, for any additional software you install is your responsibility. Some VM images you acquire from the Azure Marketplace, such as Microsoft SQL Server, may include an additional license cost (on top of the base cost of the VM).

There is a direct relationship between the VM's status and billing:

  • Running. The VM is on and running normally (billable).
  • Stopped. The VM is stopped but still deployed to a physical host (billable).
  • Stopped (Deallocated). The VM is not deployed to a physical host (not billable).

You are charged separately for the durable storage the VM uses. The status of the VM has no relation to the storage charges that will be incurred; even if the VM is stopped/deallocated and you aren't billed for the running VM, you will be charged for the storage used by the disks.

By default, stopping a VM in the Azure portal puts the VM into a Stopped (Deallocated) state. If you want to stop the VM but keep it allocated, you will need to use a PowerShell cmdlet or Azure command-line interface (CLI) command.

Stopping an Azure VM

To stop a VM but keep it provisioned, you would need to use the Stop-AzureRmVM PowerShell cmdlet, as in the following example:

Stop-AzureRmVM -Name "AzEssentialDev3" -ResourceGroupName "AzureEssentials" -StayProvisioned

For classic VMs, a similar cmdlet, Stop-AzureVM, would be used.

When using the Azure CLI, there are two commands used to control the stopped state of a VM: azure vm stop and azure vm deallocate.
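For example, assuming a resource group named AzureEssentials and a VM named AzEssentialDev3 (hypothetical names), the two states could be reached with commands along these lines:

azure vm stop AzureEssentials AzEssentialDev3
azure vm deallocate AzureEssentials AzEssentialDev3

The first command stops the VM but leaves it provisioned (and billable); the second releases the underlying compute resources so that VM billing stops.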

Shutting down the VM from the operating system of the VM will also stop the VM but will not deallocate the VM.

Note The Azure Hybrid Use Benefit program may offer additional savings by allowing you to bring your on-premises Windows Server licenses to Azure. For more information, please see

Service level agreement

As of the time of this writing, Microsoft offers a 99.95 percent connectivity service level agreement (SLA) for multiple-instance VMs deployed in an availability set. That means that for the SLA to apply, there must be at least two instances of the VM deployed within an availability set. Additional details pertaining to availability sets for Azure Virtual Machines are discussed later in this chapter.

See Also     See the SLA at for full details.

Virtual machine models

As you may recall from earlier in this book, there are two models for working with many Azure resources: Azure Resource Manager (ARM) and Azure Service Management (often referred to as the classic model or ASM). Please see Chapter 1, "Getting started with Microsoft Azure," for a more detailed overview. It is recommended that you use the Resource Manager model for new deployments. The classic model is still supported; however, the newest innovations will be made available only for the Resource Manager model.

For the purposes of this chapter, both models are covered, but the emphasis is on the Resource Manager model.

There are significant and fundamental differences in working with Azure Virtual Machines in these models.

Azure Resource Manager model

When working with the Resource Manager model, you have explicit and fine-grained control over nearly all aspects of the Azure VM. You will explicitly add components such as a network interface card (NIC), public IP address, data disks, load balancer, and much more.

You may recall that Resource Manager uses various resource providers to enable access to and management of Azure resources. There are three main resource providers used when working with Azure Virtual Machines: Network, Storage, and Compute.

  • The Network resource provider (Microsoft.Network) handles all aspects of network connectivity such as IP addresses, load balancers, NICs, and so on.
  • The Storage resource provider (Microsoft.Storage) handles the storage of the disks for a VM (in the context of Azure Virtual Machines).
  • The Compute resource provider (Microsoft.Compute) handles details related to the VM itself, such as naming, operating system details, and configuration (size, number of disks, and so on).
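If you are curious which resource types each provider exposes, you can query a provider with PowerShell. The following is a simple sketch using the Compute provider:

Get-AzureRmResourceProvider -ProviderNamespace Microsoft.Compute |
    Select-Object -ExpandProperty ResourceTypes

This lists resource types such as virtualMachines and availabilitySets, along with the API versions and locations each supports.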

In addition to explicit control over the virtual machine's components, you have the ability to take advantage of other Resource Manager features, such as:

  • Deployment and management of related resources as part of a resource group
  • Tags to logically organize and identify resources
  • Role Based Access Control (RBAC) to apply necessary security and control policies
  • Declarative template files
  • Deployment policies to enforce specific organizational rules
  • Consistent, orchestrated deployment process

This ability affords you a great deal of control in configuring the environment to your exact needs.

Classic/Azure Service Management model

In the classic deployment model, VM deployments are always in the context of an Azure cloud service—a container for VMs. The container provides several key features, including a DNS endpoint, network connectivity (including from the public Internet if desired), security, and a unit of management. While you get these things for free—because they're inherited from the cloud service model—you have limited control over them.

Use of the classic model also excludes the use of the additional value adding features available via Azure Resource Manager (tags, template files, and so on).

Virtual machine components

Like a car, there are many components that make up a virtual machine. Also like a car, there are multiple configuration options available to suit the specific functional needs and desires of the owner.

The sections that follow describe several of the critical components of Azure Virtual Machines. Additionally, more advanced configuration options will be discussed later in the chapter. But first, the base model needs to be established.

Virtual machine

It is sometimes helpful to think of an Azure VM as a logical construct. A virtual machine can be defined as having a status, a specific configuration (operating system, CPU cores, memory, disks, IP address, and so on), and state. That logical definition can be instantiated by Azure, and the appropriate resources can be allocated to bring that VM to life.


Azure VMs use attached VHDs to provide durable storage. There are two types of VHDs used in Azure Virtual Machines:

  • Image. A VHD that is a template for the creation of a new Azure VM. As a template, it does not have settings such as a machine name, administrative user, and so on. More information on creating and using images is provided later in this chapter.
  • Disk. A possibly bootable VHD that can be used as a mountable disk for a VM. There are two types of disks: an OS disk and a data disk.

All durable disks (the OS disk and data disks) are backed by page blobs in Azure Storage. Therefore, the disks inherit the benefits of blob storage: high availability, durability, and geo-redundancy options. Blob storage provides a mechanism by which data can be stored safely for use by the VM. The disks can be mounted as drives on the VM. The Azure platform will hold an infinite lease on the page blob to prevent accidental deletion of the page blob containing the VHD, the related container, or the storage account.

Standard and Premium Storage

The disk files (.vhd files) can be backed by either Standard or Premium Storage accounts in Azure. Azure Premium Storage leverages solid-state disks (SSDs) to enable high performance and low latency for VMs running I/O-intensive workloads. Standard storage is available for all VM sizes, while Premium storage is available for DS, DSv2, Fs, and GS-series VMs only. Standard storage can also be used with those series, in which case only the local, ephemeral drive runs on an SSD.

In general, it is recommended to use Azure Premium Storage for production workloads, especially those that are sensitive to performance variations or are I/O intensive. For development or test workloads, which are often not sensitive to performance variations and are not I/O intensive, Azure Standard Storage is generally recommended.

For a thorough review of Azure Premium Storage and implications for Azure VMs, please see Chapter 4, "Azure Storage," and reference

An OS disk is used precisely as the name suggests: for the operating system. For a Windows VM, the OS disk is the typical C drive; this is where Windows places its data. For a Linux VM, it hosts the /dev/sda1 partition used for the root directory. The maximum size for an OS disk is currently 1,023 GB.

The other type of disk used in Azure Virtual Machines is a data disk. The data disk is also used precisely as the name would suggest: for storing a wide range of data. The maximum size for a data disk is also 1,023 GB. Multiple data disks can be attached to an Azure VM, although the maximum number varies by VM size—typically two disks per CPU. The data disks are often used for storing application data, such as data belonging to your custom application, or server software, such as Microsoft SQL Server and the related data and log files. Multiple data disks can be made into a disk array using Storage Spaces on Windows or mdadm on Linux.
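As a sketch of how a data disk might be attached with PowerShell (the resource group, VM, and storage account names here are hypothetical):

$vm = Get-AzureRmVM -ResourceGroupName "AzureEssentials" -Name "AzEssentialDev3"
Add-AzureRmVMDataDisk -VM $vm -Name "data1" -CreateOption Empty -DiskSizeInGB 128 `
    -Lun 0 -VhdUri "https://mystorageaccount.blob.core.windows.net/vhds/data1.vhd"
Update-AzureRmVM -ResourceGroupName "AzureEssentials" -VM $vm

After the update completes, the new empty disk still needs to be initialized and formatted from within the operating system.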

See Also For a breakdown of the various VM sizes, including CPU cores, memory, maximum data disks, and so on, please see

Azure Virtual Machines also include a temporary disk on the physical host that is not persisted to Azure Storage. The temporary disk is a physical disk located within the chassis of the server. Depending on the type of VM created, the temporary disk may be either a traditional HDD platter or an SSD. The temporary disk should be used only for temporary (or replicated) data because the data will be lost in the event of a failure of the physical host or when the VM is stopped/deallocated. Figure 3-1 shows the various disk types.

Figure 3-1 Disk types in Azure Virtual Machines.

Virtual Network

In an on-premises physical infrastructure, you may have many components that all allow you to operate your virtual machines in a scalable and secure manner. These components could include equipment such as separate network spaces for Internet-facing and backend servers, load balancers, firewalls, and more. Many of these components can logically be deployed in an Azure Virtual Network (often referred to as VNET). Azure Virtual Network provides many similar features, such as the following:

  • Subnet. A subnet is a range of IP addresses within a virtual network. A VM must be placed in a subnet within the VNET. VMs placed in one subnet of a VNET can freely communicate with VMs in another subnet of the same virtual network. However, you can use network security groups (NSGs) and user-defined routes to control such communication.
  • IP address. An IP address can be either public or private. Public IP addresses allow communication from the Internet to the VM. A public IP address can be allocated dynamically— that is, created only when the associated resource (such as a VM or load balancer) is started and released when said resource is stopped—or statically, in which case the IP address is assigned immediately and persists until deleted. Private IP addresses are non–Internet routable addresses used for communication with VMs and load balancers in the same VNET.
  • Load balancer. VMs are exposed to the Internet or other VMs in a VNET by using Azure load balancers. There are two types of load balancers:
  • External load balancer. Used for exposing multiple VMs to the Internet in a highly available manner.
  • Internal load balancer. Used for exposing multiple VMs to other VMs in the same VNET in a highly available manner.
  • Network security group. A NSG allows you to create rules that control (approve or deny) inbound and outbound network traffic to network interface cards (NICs) of a VM or subnets.

When creating a VM in Azure using the Resource Manager model, the VM must be placed within an Azure Virtual Network (VNET). You will decide whether to use an existing VNET or create a new one, which subnet to use, the IP address, whether to use a load balancer, the number of NICs, and how network security is handled, as depicted in Figure 3-2. While this may seem like a lot just to get a VM deployed, these are important aspects to consider for the accessibility and security of the VM.

Figure 3-2 VMs in the Resource Manager model have explicit control over related network components.

Classic VMs can also be placed in an Azure Virtual Network. However, this is not a requirement (as it is with VMs in the Resource Manager model).

For a more detailed discussion of the Azure Virtual Network feature, please see Chapter 5, "Azure Virtual Networks."

IP address

In the Resource Manager model, by default, a VM does not have an IP address. One must be explicitly granted to a VM via an associated NIC. A VM requires an IP address to support communication with other VMs in the virtual network or the public Internet.

Each NIC has an associated private IP address (often referred to as a DIP, or dynamic IP) used to connect to the virtual network and is optionally associated with a public IP address connected directly to the public Internet. By default, these dynamic IP addresses are lost when the VM is stopped/deallocated, but both may be declared as static so that they persist unchanged through a shutdown/deallocation of the VM. Static addresses are useful for VMs that need a permanent DIP, such as Microsoft SQL Server or DNS server VMs, or a permanent public IP address. Multiple NICs, each with its own DIP, can be attached to a VM if more than one DIP is needed—for example, to multi-home a VM in multiple subnets.
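As a sketch, the private IP address of an existing NIC could be made static with PowerShell along these lines (the names and address used are hypothetical):

$nic = Get-AzureRmNetworkInterface -ResourceGroupName "AzureEssentials" -Name "web1-nic"
$nic.IpConfigurations[0].PrivateIpAllocationMethod = "Static"
$nic.IpConfigurations[0].PrivateIpAddress = "10.0.0.10"
Set-AzureRmNetworkInterface -NetworkInterface $nic

The chosen address must, of course, fall within the subnet's address range and not already be in use.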

In the classic model, the story is similar except that NICs and public IP addresses can only exist in the context of a VM—that is, they are not independent resources. Furthermore, in the classic model, it is more usual to have Internet connectivity provided by the Azure Load Balancer rather than through a public IP Address.

Azure Load Balancer

As mentioned previously, the Azure Load Balancer is used to provide a relatively even distribution of network traffic across a set of (often similarly configured or related) VMs. Using the load balancer allows you to have multiple VMs work together—for example, as a collection of web servers in a web farm environment. With a load-balanced set (of VMs), incoming requests are distributed across the available VMs instead of being routed to a single VM.

There are two types of load balancers available in Azure: an external load balancer and an internal load balancer, as depicted in Figure 3-3. The external load balancer is used for distributing traffic from the Internet across one or more VMs. This enables you to expose your application in a highly scalable and highly available manner.

The internal load balancer is used to distribute traffic from within a virtual network across a set of VMs. For example, this could be traffic to a web API or database cluster that should be available only to front-end web servers, not to the public Internet.

Figure 3-3 Use of both an external and an internal load balancer.

In the Resource Manager model, to use a load balancer, several additional items must first be created:

  • Public IP address for the incoming network traffic (for an external load balancer)
  • A pool of backend (private) IP addresses associated to NICs for the VMs
  • Rules to define the mapping of a public port on the load balancer to a port in the backend pool
  • Inbound NAT rules to define the mapping of a public port on the load balancer to a specific VM in the pool
  • Health probes to determine if a VM in the pool is healthy
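In PowerShell, those pieces map to individual configuration objects that are assembled into the load balancer. A compressed sketch follows, assuming a frontend IP configuration in $frontendIp and a backend pool in $backendPool, created earlier with New-AzureRmLoadBalancerFrontendIpConfig and New-AzureRmLoadBalancerBackendAddressPoolConfig (all names are hypothetical):

$probe = New-AzureRmLoadBalancerProbeConfig -Name "http-probe" -Protocol Tcp -Port 80 `
    -IntervalInSeconds 15 -ProbeCount 2
$rule = New-AzureRmLoadBalancerRuleConfig -Name "http" -Protocol Tcp -FrontendPort 80 `
    -BackendPort 80 -FrontendIpConfiguration $frontendIp -BackendAddressPool $backendPool -Probe $probe
New-AzureRmLoadBalancer -ResourceGroupName "AzureEssentials" -Name "web-lb" -Location "centralus" `
    -FrontendIpConfiguration $frontendIp -BackendAddressPool $backendPool `
    -Probe $probe -LoadBalancingRule $rule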

In the classic model, the external load balancer is provided automatically as part of the cloud service model. All VMs in the cloud service are automatically configured to use the load balancer if they expose a public endpoint. Classic VMs can also use an internal load balancer.

See Also For more details on working with a load balancer in Azure, please see the following:

Network interface card (NIC)

A network interface card (NIC) provides network access to resources in an Azure virtual network. A NIC is a standalone resource, but it must be associated with a VM to provide network access (a NIC by itself is of little value). The maximum number of NICs attached to a VM is dependent on the size of the selected VM.

There are several important points to be aware of when working with NICs and VMs:

  • The IP address for each NIC on a VM must be located in a subnet of the VNET to which the VM belongs.
  • If multiple NICs are assigned to a VM, only the primary NIC can be assigned a public IP address. Each NIC, including the primary, is assigned its own private IP address. The NICs can be in different subnets of the VNET.
  • Any NIC on a VM can be associated with a network security group (NSG).

When working with classic VMs, it is not necessary to worry about the NIC configuration; it is handled automatically as part of the cloud service model, and a NIC cannot exist outside the context of a VM.

See Also For more information on working with VMs with multiple NICs, please see

Network security groups

Network security groups (NSGs) allow you to have fine-grained and explicit control over how network traffic flows into or out of Azure VMs and subnets.

NSGs allow you to shape the network traffic flow in and out of your environment. You create rules based on the source IP address and port and the destination IP address and port. The NSG rules can be applied to a VM and/or a subnet. For a VM, the NSG is associated with the NIC attached to the VM.
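For example, an NSG containing a single inbound rule that allows RDP traffic from the Internet might be sketched in PowerShell like this (the names used are hypothetical):

$rdpRule = New-AzureRmNetworkSecurityRuleConfig -Name "allow-rdp" -Protocol Tcp -Direction Inbound `
    -Priority 1000 -SourceAddressPrefix Internet -SourcePortRange * `
    -DestinationAddressPrefix * -DestinationPortRange 3389 -Access Allow
New-AzureRmNetworkSecurityGroup -ResourceGroupName "AzureEssentials" -Name "web-nsg" `
    -Location "centralus" -SecurityRules $rdpRule

The resulting NSG can then be associated with a NIC or with a subnet.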

For more information on network security groups, please see Chapter 5 and also

Availability set

Azure VMs reside on physical servers hosted within Microsoft's Azure datacenters. As with most physical devices, there is a chance that there could be a failure. If the physical server fails, the Azure VMs hosted on that server will also fail. Should a failure occur, the Azure platform will migrate the VM to a healthy host server on which to reconstitute the VM. This service-healing process could take several minutes. During that time, the application(s) hosted on that VM will not be available.

Besides hardware failures, the VMs could be affected by periodic updates initiated by the Azure platform itself. Microsoft will periodically upgrade the host operating system on which the guest VMs are running (you're still responsible for the operating system patching of the guest VM that you create). During these updates, the VM will be rebooted and thus temporarily unavailable.

To avoid a single point of failure, it is recommended to deploy at least two instances of the VM. In fact, Azure provides an SLA only when two or more VMs are deployed into an availability set. An availability set is a logical feature used to ensure that a group of related VMs are deployed so that they are not all subject to a single point of failure and not all upgraded at the same time during a host operating system upgrade in the datacenter. The first two VMs deployed in an availability set are allocated to two different fault domains, ensuring that a single point of failure will not affect them both simultaneously. Similarly, the first five VMs deployed in an availability set are allocated to five different update domains, minimizing the impact when the Azure platform applies host operating system updates one update domain at a time. VMs placed in an availability set should perform an identical set of functions.

The number of fault domains and update domains is different depending on the deployment model—Resource Manager or classic. In the Resource Manager model, you can have up to 3 fault domains and 20 update domains. With the classic model, you can have 2 fault domains and 5 update domains.
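Creating an availability set is straightforward; a PowerShell sketch (with hypothetical names), specifying the Resource Manager maximums explicitly, might look like this:

New-AzureRmAvailabilitySet -ResourceGroupName "AzureEssentials" -Name "web-avset" `
    -Location "centralus" -PlatformFaultDomainCount 3 -PlatformUpdateDomainCount 20

VMs are then placed into the availability set at creation time; a VM cannot be moved into (or out of) an availability set after it is created.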

Create virtual machines

There are two tiers for Azure Virtual Machines, Basic and Standard. VMs in the Basic tier are well suited for workloads that do not require load balancing or the ability to autoscale. VMs in the Standard tier support all Azure Virtual Machines configurations and features. This tier is recommended for most production scenarios.

The Basic tier contains only a subset of the A-Series VM sizes, A0–A4. The Standard tier supports all available VM sizes and series: A-Series, D-Series, Dv2-Series, F-Series, and G-Series. There are also variants of the D, Dv2, F, and G-Series sizes, called DS, DSv2, Fs, and GS, which support Azure Premium Storage.

Note With the introduction of the F-Series VM sizes, Microsoft announced a new naming standard for VM sizes. Starting with the F-Series and applying to any future VM sizes, a numeric value after the family name will match the number of CPU cores. Additional capabilities, such as premium storage, will be designated by a letter following the CPU core count. For example, Standard_F8s will indicate an F-Series VM supporting premium storage with eight CPU cores (the "s" indicates premium storage support). This new naming standard will not be applied to previously introduced VM sizes.

  • A-Series. The "traditional" sizes that have been around since Azure Virtual Machines was introduced. These are your general-purpose VMs.
  • D-Series. Introduced in September 2014, they feature processors that are 60 percent faster than the A-Series, a higher memory-to-core ratio, and an SSD for the temporary physical disk.
  • Dv2-Series. Introduced in October 2015, the Dv2-Series are the next generation of the D-Series instances. They carry the same memory and disk configuration as the D-Series, yet they are on average 35 percent faster than the D-Series (thanks to the 2.4 GHz Intel® Xeon® E5-2673 v3 [Haswell] processor).
  • G-Series. Introduced in January 2015, the G-Series VMs are intended for your most demanding workloads. The G-Series VMs feature two times more memory and four times more storage than D-Series VMs and also include the latest Intel® Xeon® E5 v3 processors. G-Series VMs also use an SSD for the temporary physical disk.
  • F-Series. Introduced in June 2016, the F-Series VMs provide the same CPU performance (the same 2.4 GHz Intel® Xeon® E5-2673 v3 [Haswell] processor) as the Dv2-Series VMs but at a lower per-hour price. The difference with the F-Series is they feature 2 GB of memory per CPU core and less local SSD space. The F-Series can be an excellent choice for workloads that might not benefit from additional memory or local SSD space.
  • N-Series. Announced in September 2015, the N-Series VMs feature GPU capabilities, powered by NVIDIA. At the time of this writing, N-Series VMs are limited to a private preview.

See Also For the most current Azure Virtual Machines configurations, please see

One of the easiest ways to get started creating Azure VMs is to use the Azure portal.

Create a virtual machine with the Azure portal

If you haven't already done so, log into the Azure portal at At this point, you will need an Azure subscription. If you don't have one, you can sign up for a free trial at

To get started, click New in the navigation section of the site and then the Virtual Machines option in the Marketplace. As can be seen in Figure 3-4, doing so opens the Virtual Machines Marketplace blade, where you can select from a wide range of VM configurations and preconfigured images from Microsoft, Microsoft partners, and ISVs. The images in the Marketplace include official images from Microsoft for Windows-based deployments, such as Windows Server 2012, Microsoft SharePoint server farms, and more, as well as images from select partners such as Red Hat, Canonical, DataStax, Oracle, and many more.

Figure 3-4 The Virtual Machines Marketplace.

For the purposes of this example, select the Windows Server 2012 R2 Datacenter image. If it isn't immediately listed, you can search for the desired image. On the resulting blade, you can read information about the image, including any operating system updates. You will also have the option to choose a deployment model, either Resource Manager or Classic. For the purposes of this example, choose Resource Manager. Click the Create button to proceed with creating your new VM.

Note As Microsoft and its partners transition to the Resource Manager model, an increasing number of images in the Marketplace are only available via the Resource Manager model.

Next, the Create Virtual Machine blade opens, extending to the first blade for configuring basic settings. As you can see in Figure 3-5, on this blade you provide several important details about your new VM:

  • Name. The name of the VM
  • User Name. The administrative user name
  • Password. The password for the administrative user
  • Subscription. The Azure subscription to use if you have more than one
  • Resource Group. Provides a logical container for Azure resources (to help manage resources that are often deployed together)
  • Location. The Azure region where the VM should be placed

When finished with the Basics blade, click the OK button to proceed to the next step to select your VM size. Not all VM sizes are available in all Azure regions. If a size is not available in the selected region, that size option will show as disabled when viewing all the VM sizes.

After selecting the VM size, you'll move to the third configuration blade, as seen in Figure 3-6, to set up features related to storage, networking, monitoring, and availability.

Figure 3-6 Optional configuration settings for a new Azure VM.

Let's walk through several of the important settings in this third blade:

  • Storage. Select the storage medium for the OS disk in the new Azure VM.
    • Disk Type. Select either a Standard (backed by a traditional magnetic HDD) or Premium (backed by SSD) disk.
    • Storage Account. Select the Azure Storage account in which to place the OS disk. This can be a new storage account or an existing storage account.
  • Network. All VMs in the Resource Manager model must be placed within a VNET.
    • Virtual Network. Either select an existing VNET or create a new one. VMs in the same VNET can access one another by default.
    • Subnet. Select the subnet (range of IP address from the VNET) in which to place the VM.
    • Public IP Address. Optionally, choose to create a new public (either dynamic or static) IP address, or select None to not have a publicly accessible IP address for the VM.
    • Network Security Group. Configure a set of inbound and outbound firewall rules that control traffic to and from the VM. Note that the default is set to allow Remote Desktop Protocol (RDP) for Windows and SSH for Linux.
  • Monitoring
    • Diagnostics. Choose to enable or disable diagnostic metrics for the VM. This setting enables the Azure Diagnostics extension that by default persists metric data to an Azure Storage account.
    • Diagnostics Storage Account. Select either an existing Azure Storage account or create a new one to which diagnostic metrics are written.
  • Availability
    • Availability Set. Optionally, select the availability set in which to place the VM. This configuration cannot be changed once the VM is created.

Note Diagnostic data (that is, ETW events, performance counters, Windows and application logs, and so on) can optionally be sent to Azure Event Hubs. It is still necessary to enable the Azure Diagnostics extension; new configuration settings are used to optionally send the data to Azure Event Hubs. For more information, please see

The fourth, and final, step is a review step. Once some basic platform validation is complete, you will see a summary of the VM to be created. Select the OK button to start the deployment process. It may take several minutes before the VM is fully provisioned and ready for use.

Create a virtual machine with a template

As mentioned in Chapter 1, one of the key features in the Resource Manager model is the ability to deploy full solutions, using many Azure services (or resources), in a consistent and easily repeatable manner by using templates. Azure Resource Manager templates (ARM templates) are JSON-structured files that explicitly state each Azure resource to use, related configuration properties, and any necessary dependencies. ARM templates are a great way to deploy solutions in Azure, especially solutions that include multiple resources.
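At its core, every ARM template shares the same basic JSON skeleton; the sections are then filled in with the parameters, variables, and resources that make up the deployment:

{
  "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "parameters": { },
  "variables": { },
  "resources": [ ],
  "outputs": { }
}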

As a simple example, if you want to create a solution that requires two VMs using a public load balancer, you can do that in the Azure portal. In doing so, you will need to create a storage account (or use an existing one), a virtual network, public IP addresses, an availability set, and a NIC for each VM. If you have to do this in a repeatable or automated manner, using the Azure portal may not be an optimal approach (due to risk of introducing human error into the process, speed of moving through a user interface, and so on). An alternative deployment mechanism would be to use an ARM template. The example below demonstrates using both PowerShell and Azure CLI commands to deploy the same template.

Deploying an ARM template via PowerShell

$resourceGroupName = "AzureEssentials2016-VM"
$location = "centralus"
$templateFilePath = "C:\Projects\azure-quickstart-templates\201-2-vms-loadbalancer-lbrules\azuredeploy.json"
$templateParameterFilePath = "C:\Projects\azure-quickstart-templates\201-2-vms-loadbalancer-lbrules\azuredeploy.parameters.json"

New-AzureRmResourceGroup -Name $resourceGroupName `
    -Location $location

New-AzureRmResourceGroupDeployment -Name "My_2_VMs_with_LB" `
    -ResourceGroupName $resourceGroupName `
    -TemplateFile $templateFilePath `
    -TemplateParameterFile $templateParameterFilePath

Deploying an ARM template via the Azure CLI

azure group create --name AzureEssentials2016-VM2 --location centralus
azure group deployment create AzureEssentials2016-VM2 --template-file "C:\Projects\azure-quickstart-templates\201-2-vms-loadbalancer-lbrules\azuredeploy.json" --parameters-file "C:\Projects\azure-quickstart-templates\201-2-vms-loadbalancer-lbrules\azuredeploy.parameters.json"


You can browse a myriad of Microsoft and community-contributed templates at https://azure.microsoft.com/documentation/templates/. The same view of the templates, lacking the integrated search capabilities, is available in the azure-quickstart-templates repository at https://github.com/Azure/azure-quickstart-templates, which also contains the 201-2-vms-loadbalancer-lbrules template referenced in the example above.

Connecting to a virtual machine

After creating a new VM, one of the common next steps is to connect to the VM. You can connect by remotely accessing the VM (for example, logging in remotely) for an interactive session or by configuring network access to allow other programs or services to communicate with the VM.

Remotely access a virtual machine

When creating a Windows VM using the Azure portal, Remote Desktop is enabled by default. This is enabled via an NSG and the automatic configuration of the appropriate inbound security rule, allowing inbound TCP traffic on port 3389 (the default RDP port). To connect to a Windows VM, select the Connect button from the VM blade, as shown in Figure 3-7.

This will initiate a download to your local machine of a preconfigured Remote Desktop (.rdp) file. Open the RDP file and connect to the VM. You will need to provide the administrative user name and password set when initially provisioning the VM.

If a Linux VM was created, the process to connect remotely will be a bit different because you will not connect via Remote Desktop. Instead, you will connect via SSH in the standard way for Linux VMs. If you're connecting from Windows, you will likely use an SSH client such as PuTTY.
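For example, assuming the VM has a public IP address and an administrative user named azureuser (placeholder values), connecting from a machine with an OpenSSH client might look like the following:

ssh azureuser@<public IP address of the VM>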

Network connectivity

By default, Azure VMs are not able to accept requests from the Internet. To do so, a VM must be configured to permit inbound network traffic.

Note Configuring network connectivity sets rules for how network traffic reaches the VM. It does not have any relation to the firewall (software or similar features) running on the VM itself. You might need to configure the server's firewall to allow traffic on the desired port and protocol.

In the Resource Manager model, a VM has inbound connectivity from the Internet if it either has a public IP address on the associated NIC or is the NAT/load-balanced target of an Azure load balancer. NSGs can further restrict that connectivity. To view the NSG rules for a VM using the Azure portal, start by examining the network interface in the Settings blade for the VM. From there, view the Inbound Security Rules on the NSG. There are several blades to move through when viewing this information in the Azure portal. The path is as follows:

[Your VM] > Settings > Network Interfaces > [Select the NIC] > Settings (for the selected NIC) > Network Security Group > [Select the Network Security Group] > Settings (for the selected NSG) > Inbound Security Rules

In the end, you should get to a screen that looks like that shown in Figure 3-8.

Figure 3-8 The Inbound Security Rules for an NSG on a VM.

Another approach to viewing the NSG configuration is to use the Get-AzureRmNetworkSecurityGroup PowerShell cmdlet.
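For example, the following sketch retrieves an NSG and lists its custom inbound rules; the resource group and NSG names are placeholders, and the returned object exposes the rules through its SecurityRules collection:

$nsg = Get-AzureRmNetworkSecurityGroup -Name "MyNsg" -ResourceGroupName "MyResourceGroup"
$nsg.SecurityRules | Where-Object { $_.Direction -eq "Inbound" } | Select-Object Name, Access, DestinationPortRange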

When using a load balancer in conjunction with one or more VMs in an availability set, the connectivity from the public Internet to the VM is controlled by inbound NAT rules and load balancing rules, as seen in Figure 3-9. The rules are part of the load balancer resource configuration, not the VM. The load balancer is configured to work with, or target, the specific VM(s).

Figure 3-9 The Inbound NAT Rules for a Load Balancer resource targeting a Resource Manager VM.

For classic Azure VMs, the Azure Load Balancer exposes endpoints for an Azure cloud service. It is the configuration of the Azure Load Balancer that controls how requests from the Internet reach a specific port using a related protocol (such as TCP or UDP) on the VM. This configuration allows traffic from the Internet by creating a mapping between public ports on the Azure Load Balancer and private ports on the VM.

Note NSGs can be applied to both classic VMs and Resource Manager VMs. For the purposes of this scenario on virtual machine connectivity, NSGs are not discussed for classic VMs.

Configuring and managing a virtual machine

Creating an Azure VM is only the beginning. To successfully manage VMs, you should consider several important factors, such as scalability, SLA, disk management, and machine maintenance.

The overall management of the VMs is largely the user's responsibility; you can do pretty much whatever you desire on the VM. Configuration and management of the VM can be done via numerous methods: manually via a Remote Desktop connection, remotely by using PowerShell or PowerShell DSC (desired state configuration), or through VM extensions for popular tools such as Chef and Puppet. There is a wide range of choices for configuring the VM; the choice is yours.

See Also Unfortunately, not all VM configuration options and approaches can be covered in this book. Please reference the Azure Virtual Machines documentation for additional detailed information.


As mentioned earlier in this chapter, Azure VMs have two types of disks: an OS disk and a data disk. These disks are durable (or persistent) disks backed by page blobs in Azure Storage. You have several options for configuring and using the disks for your VM.

Azure Storage uses page blobs to store the VHDs. For VMs that use Standard storage, the VHD is stored in a sparse format. This means that Azure Storage charges apply only for data within the VHD that has actually been written. Because of this, it is recommended that you use a quick format when formatting the disks. A quick format will avoid storing large ranges of zeros with the page blob, thus conserving actual storage space and saving you money. However, if the VM uses Premium storage, you are charged for the full disk size. That is, if you attach a P20 disk (which has a size of 512 GB) to a VM and allocate 300 GB for the drive, you are charged the full price for the P20 disk (not just the space used or allocated). Therefore, it is usually wise to allocate the full size for the drive because you're charged for it anyway.

Disk caching

Azure Virtual Machines has the ability to cache access to OS and data disks. Caching potentially can reduce transactions to Azure Storage and can improve performance for certain workloads. There are three disk cache options: Read/Write, Read Only, and None.

The OS disk has two cache options: Read/Write (default) and Read Only.

The data disk has three cache options: Read/Write, Read Only, and None (default).

You should thoroughly test the disk caching configuration for your workload to ensure it meets your performance objectives.
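As a sketch, the cache setting for an existing data disk can be changed with PowerShell; the resource group, VM, and disk names below are placeholders, and the change takes effect when the VM is updated:

$vm = Get-AzureRmVM -ResourceGroupName "MyResourceGroup" -Name "MyVM"
Set-AzureRmVMDataDisk -VM $vm -Name "MyDataDisk" -Caching ReadOnly
Update-AzureRmVM -ResourceGroupName "MyResourceGroup" -VM $vm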

Attach a disk

To add a data disk to a VM, you can start with a new, empty disk or upload an existing VHD. Either can be done using the Azure portal (or using Azure PowerShell or the Azure CLI).

By browsing to the Disks options in the Settings menu, as seen in Figure 3-10, you can view all the OS and data disks that are attached to the current VM. This view also allows you to see the disk type (Standard or Premium), size, estimated performance, and cache setting.

To create and attach a new disk, first click the Disks options in the Settings menu to open the Disks blade. On this blade, you will be able to attach a new disk or attach an existing disk.

To attach a new disk, click Attach New. From the resulting Attach New Disk blade, as seen in Figure 3-11, you will be able to provide several key settings:

  • Name. Provide your own or accept the default.
  • Type. A disk backed by either Azure Standard Storage or Azure Premium Storage.
  • Size. The size of the new data disk (VHD).
  • Location. The Azure Storage account and blob container that will store your new data disk. You can either select an existing storage account and container or create a new storage account.
  • Host Caching. The cache option to use for the data disk.

To attach an existing data disk, click Attach Existing on the Disks blade. The resulting Attach Existing Disk blade will present an option to select an existing VHD from your Azure Storage account, as you can see in Figure 3-12. You can use your favorite Azure Storage management tool to upload an existing VHD to a blob container in the desired storage account (be sure that VHD is set as a page blob and not a block blob).
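The same operations can be scripted. The following sketch attaches a new, empty 128-GB data disk by using PowerShell; the resource names and the VHD URI are placeholders:

$vm = Get-AzureRmVM -ResourceGroupName "MyResourceGroup" -Name "MyVM"
Add-AzureRmVMDataDisk -VM $vm -Name "MyDataDisk01" `
    -VhdUri "https://mystorageaccount.blob.core.windows.net/vhds/MyDataDisk01.vhd" `
    -Lun 0 -Caching None -DiskSizeInGB 128 -CreateOption Empty
Update-AzureRmVM -ResourceGroupName "MyResourceGroup" -VM $vm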

Formatting disks

Once the data disks are attached to the Azure VM, each data disk needs to be formatted (or initialized), just like a disk on a physical server. Because Standard storage disks are billed only for the occupied space, it is recommended that you use a quick format when formatting the disks. A quick format will avoid storing large ranges of zeros with the page blob, thus conserving actual storage space and saving you money.

To format the disk(s), remotely connect to the VM. For a Windows VM, once you are connected and logged into the VM, open Disk Management. Disk Management is a native Windows application that allows you to view the disks and format any unallocated disks. As can be seen in Figure 3-13, proceed by right-clicking the unallocated disk and selecting Initialize Disk.

Complete the wizard to initialize the disk. Once the disk has been initialized, you can proceed with formatting the disk.

  1. Right-click the disk and select New Simple Volume. The New Simple Volume Wizard should open.
  2. Continue through the wizard, selecting the desired volume size and drive letter.
  3. When presented with an option to format the volume, be sure to select Perform A Quick Format.
  4. Finish the steps in the wizard to start formatting the disk.

See Also For a step-by-step walkthrough on initializing data disks for a Linux VM, please see the Azure documentation.

Disk performance

Another factor to be aware of with Azure VM disks is IOPS. At the time of this writing, each data disk backed by Azure Standard Storage has a maximum of 500 IOPS and 60 MB/s (for Standard-tier VMs). For Azure VMs backed by Azure Premium Storage (that is, DS, DSv2, F, and GS-series VMs), there is currently a maximum of 5,000 IOPS and 200 MB/s per disk, depending on the specific tier of Azure Premium Storage used. This might or might not be sufficient for the desired workload. You should conduct performance tests to ensure the disk performance is sufficient. If it is not, consider adding disks and creating a disk array via Storage Spaces on Windows or mdadm on Linux. Because Azure Storage keeps three copies of all data, it is only necessary to use RAID 0 (striping with no mirroring or parity).
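As a rough sketch, a simple (RAID 0) striped volume across all poolable data disks could be created from within a Windows VM by using the Storage Spaces cmdlets; the pool and disk names here are placeholders:

$disks = Get-PhysicalDisk -CanPool $true
New-StoragePool -FriendlyName "DataPool" -StorageSubSystemFriendlyName "Storage Spaces*" -PhysicalDisks $disks
New-VirtualDisk -StoragePoolFriendlyName "DataPool" -FriendlyName "DataDisk" -ResiliencySettingName Simple -NumberOfColumns $disks.Count -UseMaximumSize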

See Also For more information on advanced configuration of Azure VM disks, including striping and Storage Spaces, please review the Microsoft Azure whitepaper on performance best practices for SQL Server in Azure Virtual Machines. Although the referenced whitepaper is specific to running SQL Server on an Azure VM, the disk configuration details are common across a multitude of workloads.

Fault domains and update domains

For Resource Manager VMs, you can view the update and fault domains by looking at the Availability Set resource associated with the VMs, as seen in Figure 3-14.

Figure 3-14 Update and fault domains for Resource Manager VMs.

If there is an existing availability set, the VM can be placed within the availability set as part of the VM provisioning process. If there is not an existing availability set, one will need to be created.
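If you are scripting the deployment, an availability set can be created ahead of time with PowerShell (the names below are placeholders); the resulting availability set is then referenced when the VM is created:

New-AzureRmAvailabilitySet -ResourceGroupName "MyResourceGroup" -Name "MyAvailabilitySet" -Location "centralus"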

Note At the time of this writing, for Resource Manager VMs, the VM must be added to the desired availability set at the time the VM is created. The VM cannot be added to the availability set at a later time.

You can view the update and fault domains used for your classic VMs by looking at the related Cloud Service (Classic) in the Azure portal. As seen in Figure 3-15, the first five VMs are each placed in a different update domain, and the sixth VM is placed in update domain 0.

Figure 3-15 VMs, update domains, and fault domains for classic VMs.

A similar view can be found in the Azure classic portal, as shown in Figure 3-16.

Figure 3-16 VMs, update domains, and fault domains for classic VMs in the Azure classic portal.

Image capture

Once you have your new Azure VM configured as you would like it, you might want to create a clone of the VM. For example, you might want to create several more VMs using the one you just created as a template. You do this by capturing the VM and creating a generalized VM image. When you create a VM image, you capture not only the OS disk, but also any attached data disks.

When you capture the VM to use it as a template for future VMs, you will no longer be able to use the original VM (the original source) because it is deleted after the capture is completed. For classic VMs, you will find a template image available for use in your Virtual Machine gallery in the Azure classic portal. As of this writing, there is no view available in the Azure portal for viewing images related to Resource Manager VMs. Instead, you will need to look for the image in the same storage account as the original VM (most often, the image will be stored at a path similar to https://[storage_account].blob.core.windows.net/[container_name]/[template_prefix]-osDisk.xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx.vhd).

Capture a Windows VM in the Resource Manager model

To capture a Windows VM in the Resource Manager model, you will use Azure PowerShell, the Azure CLI, or the Azure Resource Explorer tool. Capturing a VM is not yet possible in the Azure portal. To capture a Windows VM, complete the following steps:

  1. Connect to the VM using Remote Desktop (as discussed earlier in this chapter).
  2. Open a command prompt window as the administrator.
  3. Navigate to the %windir%\system32\sysprep directory and then run Sysprep.exe.
  4. In the System Preparation Tool, perform the following actions:
    1. From the System Cleanup Action list, select Enter System Out-Of-Box Experience (OOBE).
    2. Select the Generalize check box.
    3. In the Shutdown Options drop-down list, select Shutdown.
  5. The VM will run sysprep. If you are still connected to the VM via RDP, you will be disconnected when it begins to shut down. Watch the VM in the Azure portal until it completely shuts down and shows a status of Stopped.
  6. Open PowerShell and log into your Azure account using the Login-AzureRmAccount cmdlet. Optionally, select the necessary Azure subscription using the Select-AzureRmSubscription cmdlet.
  7. Stop and deallocate the VM's resources by using the Stop-AzureRmVM cmdlet, as seen in the example below. The VM's status will change from Stopped to Stopped (Deallocated).
Stop-AzureRmVM -ResourceGroupName AzureEssentials2-vm -Name ezazure3 
  8. Set the status of the VM to Generalized by using the Set-AzureRmVM cmdlet, as seen in the example below.
Set-AzureRmVM -ResourceGroupName AzureEssentials2-vm -Name ezazure3 -Generalized 

Tip View the VM status using the Get-AzureRmVm cmdlet, as shown below. This will show you a VM generalized status when the previous command is complete. The VM generalized status will not appear in the Azure portal.

(Get-AzureRmVM -ResourceGroupName AzureEssentials2-vm -Name ezazure3 -Status).Statuses 
  9. Capture the VM, placing the image in an Azure Blob storage container folder, by executing the Save-AzureRmVMImage cmdlet, as seen in the example below. Note that the value of the DestinationContainerName parameter is not a top-level blob container, but a folder under the System container, as can be seen in Figure 3-17. You can also see the full path to the image file by looking in the saved JSON file under the resources\storageProfile\osDisk\image\uri location.
Save-AzureRmVMImage -ResourceGroupName AzureEssentials2-vm -VMName ezazure3 -DestinationContainerName myimages -VHDNamePrefix ezvm -Path C:\temp\imagetemplate.json 

Figure 3-17 The VHD associated with the saved VM image.

Note The saved JSON file is a valid ARM template file that can be used to create a new Azure VM based on the saved image. You will need to add any additional required components, such as an NIC.

With the image safely stored in Azure Storage, you can use this image as the basis for new Azure VMs. To do so, you would use the Set-AzureRmVMOSDisk cmdlet, specifying the path to the saved VHD in the SourceImageUri parameter. Keep in mind that the image and the OS disk must be in the same storage account. If they are not, you will need to copy the image VHD to the desired storage account. A full example can be seen below (replace with your values as appropriate).

Creating a new Azure VM from a captured VM image

$resourceGroupName = "EZAzureVM-2016"
$location = "centralus"
$capturedImageStorageAccount = "azureessentials2vm4962"
# Full URI to the captured image VHD (stored in a folder under the System container of the source storage account)
$capturedImageUri = "https://azureessentials2vm4962.blob.core.windows.net/system/Microsoft.Compute/Images/myimages/ezvm-osDisk.c55c8313-adf0-4517-8830-040c402379ab.vhd"
$capturedImageStorageAccountResourceGroup = "AzureEssentials2-vm"

# Create the new resource group.
New-AzureRmResourceGroup -Name $resourceGroupName -Location $location

# !!!! This example assumes the new VM is in a different resource group and storage account from the captured VM. !!!!
$srcKey = Get-AzureRmStorageAccountKey -StorageAccountName $capturedImageStorageAccount -ResourceGroupName $capturedImageStorageAccountResourceGroup
$srcContext = New-AzureStorageContext -StorageAccountName $capturedImageStorageAccount -StorageAccountKey $srcKey.Key1

# **** Create the network resources ****
$publicIp = New-AzureRmPublicIpAddress -Name "MyPublicIp01" `
                -ResourceGroupName $resourceGroupName `
                -Location $location -AllocationMethod Dynamic
$subnetConfiguration = New-AzureRmVirtualNetworkSubnetConfig -Name "MySubnet" `
                           -AddressPrefix "10.0.0.0/24"
$virtualNetworkConfiguration = New-AzureRmVirtualNetwork -Name "MyVNET" `
                                   -ResourceGroupName $resourceGroupName `
                                   -Location $location `
                                   -AddressPrefix "10.0.0.0/16" `
                                   -Subnet $subnetConfiguration
$nic = New-AzureRmNetworkInterface -Name "MyServerNIC01" `
          -ResourceGroupName $resourceGroupName `
          -Location $location `
          -SubnetId $virtualNetworkConfiguration.Subnets[0].Id `
          -PublicIpAddressId $publicIp.Id

# **** Create the new Azure VM ****
# Get the admin credentials for the new VM
$adminCredential = Get-Credential
# Create the storage account for the new VM
$storageAccount = New-AzureRmStorageAccount -ResourceGroupName $resourceGroupName -Name "ezazurevm2016" -Location $location -Type Standard_LRS
# Copy the captured image from the source storage account to the destination storage account
$destImageName = $capturedImageUri.Substring($capturedImageUri.LastIndexOf('/') + 1)
New-AzureStorageContainer -Name "images" -Context $storageAccount.Context
Start-AzureStorageBlobCopy -AbsoluteUri $capturedImageUri -DestContainer "images" -DestBlob $destImageName -DestContext $storageAccount.Context -Context $srcContext -Verbose
Get-AzureStorageBlobCopyState -Context $storageAccount.Context -Container "images" -Blob $destImageName -WaitForComplete
# Build the URI for the image in the new storage account
$imageUri = '{0}images/{1}' -f $storageAccount.PrimaryEndpoints.Blob.ToString(), $destImageName
# Set the VM configuration details
$vmConfig = New-AzureRmVMConfig -VMName "ezazurevm10" -VMSize "Standard_D1"
# Set the operating system details
$vm = Set-AzureRmVMOperatingSystem -VM $vmConfig -Windows -ComputerName $vmConfig.Name -Credential $adminCredential -TimeZone "Eastern Standard Time" -ProvisionVMAgent -EnableAutoUpdate
# Set the NIC
$vm = Add-AzureRmVMNetworkInterface -VM $vm -Id $nic.Id
# Create the OS disk URI
$osDiskUri = '{0}vhds/{1}_{2}.vhd' -f $storageAccount.PrimaryEndpoints.Blob.ToString(), $vm.Name.ToLower(), ($vm.Name + "_OSDisk")
# Configure the OS disk to use the previously saved image
$vm = Set-AzureRmVMOSDisk -VM $vm -Name $vm.Name -VhdUri $osDiskUri -CreateOption FromImage -SourceImageUri $imageUri -Windows
# Create the VM
New-AzureRmVM -ResourceGroupName $resourceGroupName -Location $location -VM $vm

For Linux VMs, the capture process is similar. Although you can use PowerShell to capture the VM, a common approach is to use the Azure CLI. You would use three basic Azure CLI commands:

azure vm stop -g <resource group name> -n <vm name>
azure vm generalize -g <resource group name> -n <vm name>
azure vm capture <resource group name> <vm name> <vhd prefix> -t <template file name>

See Also For a detailed walkthrough of using the Azure CLI to capture a Linux VM, please see the Azure documentation.

As an alternative to using PowerShell or the Azure CLI, you can use the Azure Resource Explorer tool available at https://resources.azure.com. This tool allows you to work against the native Azure Resource Manager (ARM) REST APIs in a user-friendly manner. After signing into your Azure account and setting the tool to Read/Write mode to allow PUT, POST, and DELETE operations (the default is Read Only, allowing GET operations), you will need to find the VM you want to capture. Once you've located the VM, go to the Actions (POST/DELETE) tab. There, you will find options, as seen in Figure 3-18, to deallocate, generalize, and capture the VM. Capturing the VM will create the VHD for the image and the JSON template file, just as executing the Save-AzureRmVMImage cmdlet or azure vm capture command would.

Figure 3-18 Capture a VM using the Azure Resource Explorer tool.

For more information, including details on using the Azure Resource Explorer tool, please refer to the tutorial in the Azure documentation.

Capture a Windows VM in the classic model

Similarly, in the classic model, there are several steps you will need to follow to capture a VM so it is available for use as a template image. The majority of the steps are the same as in the Resource Manager model. Once the VM has executed the sysprep process (or Linux equivalent), you will be able to initiate the capture process from within the Azure classic portal. Once the capture process is complete, the image will appear in your Virtual Machine gallery, under My Images. You can now use this image to create a new VM instance.

Scaling Azure Virtual Machines

As with most Azure services, Azure Virtual Machines follows a scale-out, not scale-up, model. This means it is preferable to deploy more instances of the same configuration than to add larger, more powerful machines. The approach for scaling out VMs varies depending on whether you're working with classic VMs or Resource Manager VMs.

Resource Manager virtual machines

In the Resource Manager model, you don't (typically) scale out VMs in an automated way—at least not how you would with VMs in the classic model. Instead, a different Azure resource construct is used for scaling out VMs: Azure Virtual Machine Scale Sets (often abbreviated as VMSS).

Virtual Machine Scale Sets are a relatively new Azure compute option for deploying and managing a set of identical VMs. You configure all VMs in a scale set in an identical manner. You configure the VM image to be used (operating system configuration, software installed on the VM, and so on) and let Azure provision the desired number of identical VMs (based on the provided image). The VMs in a scale set can run either a Windows or a Linux operating system. Scaling with VMSS does not require the preprovisioning of VMs within an availability set (like autoscale for classic VMs does). At the time of this writing, you can have up to 100 VMs in a VM scale set.

It should be noted that when working with VMSS, there is no data disk available (as you may have with a regular Azure VM). Data should be stored on either the OS disk or an external data store, such as Azure Table, File, or Blob storage; Azure SQL Database; Azure DocumentDB; and so on.

VMSS can be provisioned either via the Azure portal or ARM templates. Using the ARM template approach for working with VMSS is likely to be the most common approach because doing so offers many more features than are currently available in the Azure portal. For instance, you can configure autoscale rules relatively easily using the ARM template. Such configuration is not yet available within the Azure portal.

Once the VMSS is created, as can be seen in Figure 3-19, you can see that it contains several familiar constructs, such as a load balancer, virtual network, IP address, and multiple Azure Storage accounts.

Figure 3-19 A resource group containing assets related to a new VMSS.

VMSS are the preferred way to implement a scale-out compute cluster in Azure. In fact, Azure uses VMSS to host higher-level services such as Azure Batch, Azure Service Fabric, and Azure Container Service.

Classic virtual machines

In the classic model, before VMs can be scaled (out or in), the instances must be placed within an availability set. When determining the scale-out approach for VMs, it is important to determine the maximum number of VMs because that maximum number of VMs must be created, configured, and placed into the availability set. When it comes time to scale out, the VMs within the availability set are used to fulfill the scale-out needs. VMs within an availability set should all be the same size to take advantage of Azure's autoscale feature.