About disks and VHDs for Azure Windows VMs

Just like any other computer, virtual machines in Azure use disks as a place to store an operating system, applications, and data. All Azure virtual machines have at least two disks: a Windows operating system disk and a temporary disk. The operating system disk is created from an image, and both the operating system disk and the image are virtual hard disks (VHDs) stored in an Azure storage account. Virtual machines also can have one or more data disks, which are also stored as VHDs. In this article, we will talk about the different uses for the disks, and then discuss the different types of disks you can create and use. This article is also available for Linux virtual machines.

NOTE
Azure has two different deployment models for creating and working with resources: Resource Manager and classic. This article covers using both models, but Microsoft recommends that most new deployments use the Resource Manager model.

Disks used by VMs

Let's take a look at how the disks are used by the VMs.

Operating system disk

Every virtual machine has one attached operating system disk. It's registered as a SATA drive and labeled as the C: drive by default. This disk has a maximum capacity of 1023 gigabytes (GB).

Temporary disk

Each VM contains a temporary disk. The temporary disk provides short-term storage for applications and processes and is intended to only store data such as page or swap files. Data on the temporary disk may be lost during a maintenance event or when you redeploy a VM.
During a standard reboot of the VM, the data on the temporary drive should persist. The temporary disk is labeled as the D: drive by default and it is used for storing pagefile.sys. To remap this disk to a different drive letter, see Change the drive letter of the Windows temporary disk. The size of the temporary disk varies, based on the size of the virtual machine. For more information, see Sizes for Windows virtual machines. For more information on how Azure uses the temporary disk, see Understanding the temporary drive on Microsoft Azure Virtual Machines.

Data disk

A data disk is a VHD that's attached to a virtual machine to store application data, or other data you need to keep. Data disks are registered as SCSI drives and are labeled with a letter that you choose. Each data disk has a maximum capacity of 1023 GB. The size of the virtual machine determines how many data disks you can attach to it and the type of storage you can use to host the disks.

NOTE
For more information about virtual machine capacities, see Sizes for Windows virtual machines.

Azure creates an operating system disk when you create a virtual machine from an image. If you use an image that includes data disks, Azure also creates the data disks when it creates the virtual machine. Otherwise, you add data disks after you create the virtual machine. You can add data disks to a virtual machine at any time, by attaching the disk to the virtual machine. You can use a VHD that you've uploaded or copied to your storage account, or one that Azure creates for you. Attaching a data disk associates the VHD file with the VM by placing a 'lease' on the VHD so it can't be deleted from storage while it's still attached.

About VHDs

The VHDs used in Azure are .vhd files stored as page blobs in a standard or premium storage account in Azure. For details about page blobs, see Understanding block blobs and page blobs. For details about premium storage, see High-performance premium storage and Azure VMs.

Azure supports the fixed disk VHD format. The fixed format lays the logical disk out linearly within the file, so that disk offset X is stored at blob offset X. A small footer at the end of the blob describes the properties of the VHD. Often, the fixed format wastes space because most disks have large unused ranges in them. However, Azure stores .vhd files in a sparse format, so you receive the benefits of both the fixed and dynamic disks at the same time. For more details, see Getting started with virtual hard disks.

All .vhd files in Azure that you want to use as a source to create disks or images are read-only. When you create a disk or image, Azure makes copies of the .vhd files. These copies can be read-only or read-and-write, depending on how you use the VHD. When you create a virtual machine from an image, Azure creates a disk for the virtual machine that is a copy of the source .vhd file. To protect against accidental deletion, Azure places a lease on any source .vhd file that's used to create an image, an operating system disk, or a data disk. Before you can delete a source .vhd file, you'll need to remove the lease by deleting the disk or image.

To delete a .vhd file that is being used by a virtual machine as an operating system disk, you can delete the virtual machine, the operating system disk, and the source .vhd file all at once by deleting the virtual machine and deleting all associated disks. However, deleting a .vhd file that's a source for a data disk requires several steps in a set order.
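As a rough illustration of that ordered sequence (spelled out in the next paragraph), here is a minimal sketch using the classic (Service Management) Azure PowerShell cmdlets; the cloud service, VM, and disk names and the LUN are hypothetical placeholders.

```powershell
# Sketch only: classic (Service Management) Azure PowerShell.
# Service, VM, and disk names and the LUN are hypothetical placeholders.

# 1. Detach the data disk from the VM; this releases the lease on the VHD.
Get-AzureVM -ServiceName "myCloudService" -Name "myVM" |
    Remove-AzureDataDisk -LUN 0 |
    Update-AzureVM

# 2. Delete the disk resource and, with -DeleteVHD, the underlying .vhd blob.
Remove-AzureDisk -DiskName "myVM-data-0" -DeleteVHD
```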
First you detach the disk from the virtual machine, then delete the disk, and then delete the .vhd file.

WARNING
If you delete a source .vhd file from storage, or delete your storage account, Microsoft can't recover that data for you.

Types of disks

There are two performance tiers for storage that you can choose from when creating your disks -- Standard Storage and Premium Storage. Also, there are two types of disks -- unmanaged and managed -- and they can reside in either performance tier.

Standard storage

Standard Storage is backed by HDDs, and delivers cost-effective storage while still being performant. Standard storage can be replicated locally in one datacenter, or be geo-redundant with primary and secondary data centers. For more information about storage replication, please see Azure Storage replication. For more information about using Standard Storage with VM disks, please see Standard Storage and Disks.

Premium storage

Premium Storage is backed by SSDs, and delivers high-performance, low-latency disk support for VMs running I/O-intensive workloads. You can use Premium Storage with DS, DSv2, GS, or FS series Azure VMs. For more information, please see Premium Storage.

Unmanaged disks

Unmanaged disks are the traditional type of disks that have been used by VMs. With these, you create your own storage account and specify that storage account when you create the disk. You have to make sure you don't put too many disks in the same storage account, because you could exceed the scalability targets of the storage account (20,000 IOPS, for example), resulting in the VMs being throttled. With unmanaged disks, you have to figure out how to maximize the use of one or more storage accounts to get the best performance out of your VMs.

Managed disks

Managed Disks handles the storage account creation and management in the background for you, and ensures that you do not have to worry about the scalability limits of the storage account. You simply specify the disk size and the performance tier (Standard/Premium), and Azure creates and manages the disk for you. Even as you add disks or scale the VM up and down, you don't have to worry about the storage being used. You can also manage your custom images in one storage account per Azure region, and use them to create hundreds of VMs in the same subscription. For more information about Managed Disks, please see the Managed Disks Overview.

We recommend that you use Azure Managed Disks for new VMs, and that you convert your previous unmanaged disks to managed disks, to take advantage of the many features available in Managed Disks.

Disk comparison

The following comparison of Azure Premium disks and Azure Standard disks, for both unmanaged and managed disks, can help you decide what to use.

Disk Type: Premium -- Solid State Drives (SSD); Standard -- Hard Disk Drives (HDD)
Overview: Premium -- SSD-based high-performance, low-latency disk support for VMs running IO-intensive workloads or hosting mission-critical production environments; Standard -- HDD-based cost-effective disk support for Dev/Test VM scenarios
Scenario: Premium -- Production and performance-sensitive workloads; Standard -- Dev/Test, non-critical, infrequent access
Disk Size: Premium -- P10: 128 GB, P20: 512 GB, P30: 1024 GB; Standard -- Unmanaged disks: 1 GB – 1 TB; Managed disks: S4: 32 GB, S6: 64 GB, S10: 128 GB, S20: 512 GB, S30: 1024 GB
Max Throughput per Disk: Premium -- 200 MB/s; Standard -- 60 MB/s
Max IOPS per Disk: Premium -- 5000 IOPS; Standard -- 500 IOPS

One last recommendation: Use TRIM with unmanaged standard disks

If you use unmanaged standard disks (HDD), you should enable TRIM.
TRIM discards unused blocks on the disk so you are only billed for storage that you are actually using. This can save on costs if you create large files and then delete them. You can run this command to check the TRIM setting. Open a command prompt on your Windows VM and type:

fsutil behavior query DisableDeleteNotify

If the command returns 0, TRIM is enabled correctly. If it returns 1, run the following command to enable TRIM:

fsutil behavior set DisableDeleteNotify 0

Next steps

Attach a disk to add additional storage for your VM.
Upload a Windows VM image to Azure to use when creating a new VM.
Change the drive letter of the Windows temporary disk so your application can use the D: drive for data.

Azure Virtual Network

The Azure Virtual Network service enables you to securely connect Azure resources to each other with virtual networks (VNets). A VNet is a representation of your own network in the cloud. A VNet is a logical isolation of the Azure cloud dedicated to your subscription. You can also connect VNets to your on-premises network. The following picture shows some of the capabilities of the Azure Virtual Network service.

To learn more about the following Azure Virtual Network capabilities, click the capability:

Isolation: VNets are isolated from one another. You can create separate VNets for development, testing, and production that use the same CIDR address blocks. Conversely, you can create multiple VNets that use different CIDR address blocks and connect networks together. You can segment a VNet into multiple subnets. Azure provides internal name resolution for VMs and Cloud Services role instances connected to a VNet. You can optionally configure a VNet to use your own DNS servers, instead of using Azure internal name resolution.

Internet connectivity: All Azure Virtual Machines (VM) and Cloud Services role instances connected to a VNet have access to the Internet, by default. You can also enable inbound access to specific resources, as needed.

Azure resource connectivity: Azure resources such as Cloud Services and VMs can be connected to the same VNet. The resources can connect to each other using private IP addresses, even if they are in different subnets. Azure provides default routing between subnets, VNets, and on-premises networks, so you don't have to configure and manage routes.

VNet connectivity: VNets can be connected to each other, enabling resources connected to any VNet to communicate with any resource on any other VNet.

On-premises connectivity: VNets can be connected to on-premises networks through private network connections between your network and Azure, or through a site-to-site VPN connection over the Internet.

Traffic filtering: VM and Cloud Services role instances network traffic can be filtered inbound and outbound by source IP address and port, destination IP address and port, and protocol.

Routing: You can optionally override Azure's default routing by configuring your own routes, or using BGP routes through a network gateway.

Network isolation and segmentation

You can implement multiple VNets within each Azure subscription and Azure region. Each VNet is isolated from other VNets. For each VNet you can:

Specify a custom private IP address space using public and private (RFC 1918) addresses. Azure assigns resources connected to the VNet a private IP address from the address space you assign.
Segment the VNet into one or more subnets and allocate a portion of the VNet address space to each subnet.
Use Azure-provided name resolution or specify your own DNS server for use by resources connected to a VNet. To learn more about name resolution in VNets, read the Name resolution for VMs and Cloud Services article.

Connect to the Internet

All resources connected to a VNet have outbound connectivity to the Internet by default. The private IP address of the resource is source network address translated (SNAT) to a public IP address by the Azure infrastructure. To learn more about outbound Internet connectivity, read the Understanding outbound connections in Azure article. You can change the default connectivity by implementing custom routing and traffic filtering. To communicate inbound to Azure resources from the Internet, or to communicate outbound to the Internet without SNAT, a resource must be assigned a public IP address. To learn more about public IP addresses, read the Public IP addresses article.

Connect Azure resources

You can connect several Azure resources to a VNet, such as Virtual Machines (VM), Cloud Services, App Service Environments, and Virtual Machine Scale Sets. VMs connect to a subnet within a VNet through a network interface (NIC). To learn more about NICs, read the Network interfaces article.

Connect virtual networks

You can connect VNets to each other, enabling resources connected to either VNet to communicate with each other across VNets. You can use either or both of the following options to connect VNets to each other:

Peering: Enables resources connected to different Azure VNets within the same Azure location to communicate with each other. The bandwidth and latency across the VNets is the same as if the resources were connected to the same VNet. To learn more about peering, read the Virtual network peering article.

VNet-to-VNet connection: Enables resources connected to different Azure VNets, within the same or different Azure locations, to communicate with each other. Unlike peering, bandwidth is limited between VNets because traffic must flow through an Azure VPN Gateway. To learn more about connecting VNets with a VNet-to-VNet connection, read the Configure a VNet-to-VNet connection article.

Connect to an on-premises network

You can connect your on-premises network to a VNet using any combination of the following options:

Point-to-site virtual private network (VPN): Established between a single PC connected to your network and the VNet. This connection type is great if you're just getting started with Azure, or for developers, because it requires little or no changes to your existing network. The connection uses the SSTP protocol to provide encrypted communication over the Internet between the PC and the VNet. The latency for a point-to-site VPN is unpredictable, since the traffic traverses the Internet.

Site-to-site VPN: Established between your VPN device and an Azure VPN Gateway. This connection type enables any on-premises resource you authorize to access a VNet. The connection is an IPSec/IKE VPN that provides encrypted communication over the Internet between your on-premises device and the Azure VPN gateway. The latency for a site-to-site connection is unpredictable, since the traffic traverses the Internet.

Azure ExpressRoute: Established between your network and Azure, through an ExpressRoute partner. This connection is private. Traffic does not traverse the Internet. The latency for an ExpressRoute connection is predictable, since traffic doesn't traverse the Internet.

To learn more about all the previous connection options, read the Connection topology diagrams article.
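To make the segmentation and connectivity concepts above concrete, here is a minimal sketch that creates a VNet with two subnets using Resource Manager PowerShell. The resource group, names, and address ranges are hypothetical placeholders; the same result can be achieved in the portal or with the Azure CLI.

```powershell
# Sketch only: Resource Manager (AzureRM) PowerShell.
# Resource group, names, and address ranges are hypothetical placeholders.

$frontEnd = New-AzureRmVirtualNetworkSubnetConfig -Name "FrontEnd" -AddressPrefix "10.0.1.0/24"
$backEnd  = New-AzureRmVirtualNetworkSubnetConfig -Name "BackEnd"  -AddressPrefix "10.0.2.0/24"

# Create the virtual network with the two subnets defined above.
New-AzureRmVirtualNetwork -Name "MyVNet" `
    -ResourceGroupName "MyResourceGroup" `
    -Location "East US" `
    -AddressPrefix "10.0.0.0/16" `
    -Subnet $frontEnd, $backEnd
```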
Filter network traffic

You can filter network traffic between subnets using either or both of the following options:

Network security groups (NSG): Each NSG can contain multiple inbound and outbound security rules that enable you to filter traffic by source and destination IP address, port, and protocol. You can apply an NSG to each NIC in a VM. You can also apply an NSG to the subnet a NIC, or other Azure resource, is connected to. To learn more about NSGs, read the Network security groups article.

Network virtual appliances (NVA): An NVA is a VM running software that performs a network function, such as a firewall. View a list of available NVAs in the Azure Marketplace. NVAs are also available that provide WAN optimization and other network traffic functions. NVAs are typically used with user-defined or BGP routes. You can also use an NVA to filter traffic between VNets.

Route network traffic

Azure creates route tables that enable resources connected to any subnet in any VNet to communicate with each other, by default. You can implement either or both of the following options to override the default routes Azure creates:

User-defined routes: You can create custom route tables with routes that control where traffic is routed to for each subnet. To learn more about user-defined routes, read the User-defined routes article.

BGP routes: If you connect your VNet to your on-premises network using an Azure VPN Gateway or ExpressRoute connection, you can propagate BGP routes to your VNets.

Pricing

There is no charge for virtual networks, subnets, route tables, or network security groups. Outbound Internet bandwidth usage, public IP addresses, virtual network peering, VPN Gateways, and ExpressRoute each have their own pricing structures. View the Virtual network, VPN Gateway, and ExpressRoute pricing pages for more information.

FAQ

To review frequently asked questions about Virtual Network, see the Virtual Network FAQ article.

Next steps

Create your first VNet, and connect a few VMs to it, by completing the steps in the Create your first virtual network article.
Create a point-to-site connection to a VNet by completing the steps in the Configure a point-to-site connection article.

Frequently asked questions about Azure Windows virtual machines created with the classic deployment model

IMPORTANT
Azure has two different deployment models for creating and working with resources: Resource Manager and Classic. This article covers using the Classic deployment model. Microsoft recommends that most new deployments use the Resource Manager model. For the FAQ when using the Resource Manager model, see here.

This article addresses some common questions users ask about Azure virtual machines created with the classic deployment model.

Can I migrate my VM created in the classic deployment model to the new Resource Manager model?

Yes. For instructions on how to migrate, see:

Migrate from classic to Azure Resource Manager using Azure PowerShell.
Migrate from classic to Azure Resource Manager using Azure CLI.

What can I run on an Azure VM?

All subscribers can run server software on an Azure virtual machine. You can run recent versions of Windows Server, as well as a variety of Linux distributions.
For support details, see:

• For Windows VMs -- Microsoft server software support for Azure Virtual Machines
• For Linux VMs -- Linux on Azure-Endorsed Distributions

For Windows client images, certain versions of Windows 7 and Windows 8.1 are available to MSDN Azure benefit subscribers and MSDN Dev and Test Pay-As-You-Go subscribers, for development and test tasks. For details, including instructions and limitations, see Windows Client images for MSDN subscribers.

Why are affinity groups being deprecated?

Affinity groups are a legacy concept for a geographical grouping of a customer's cloud service deployments and storage accounts within Azure. They were originally provided to improve VM-to-VM network performance in the early Azure network designs. They also supported the initial release of virtual networks (VNets), which were limited to a small set of hardware in a region. The current Azure network within a region is designed so that affinity groups are no longer required. Virtual networks are also at a regional scope, so an affinity group is no longer required when you use a virtual network.

Due to these improvements, we no longer recommend that customers use affinity groups because they can be limiting in some scenarios. Using affinity groups unnecessarily associates your VMs with specific hardware, which limits the choice of VM sizes that are available to you. It might also lead to capacity-related errors when you attempt to add new VMs if the specific hardware associated with the affinity group is near capacity. Affinity group features are already deprecated in the Azure Resource Manager deployment model and in the Azure portal. For the classic Azure portal, we're deprecating support for creating affinity groups and creating storage resources that are pinned to an affinity group. You don't need to modify existing cloud services that are using an affinity group. However, you should not use affinity groups for new cloud services unless an Azure support professional recommends them.

How much storage can I use with a virtual machine?

Each data disk can be up to 1 TB. The number of data disks you can use depends on the size of the virtual machine. For details, see Sizes for Virtual Machines. An Azure storage account provides storage for the operating system disk and any data disks. Each disk is a .vhd file stored as a page blob. For pricing details, see Storage Pricing Details.

Which virtual hard disk types can I use?

Azure only supports fixed, VHD-format virtual hard disks. If you have a VHDX that you want to use in Azure, you need to first convert it by using Hyper-V Manager or the Convert-VHD cmdlet. After you do that, use the Add-AzureVhd cmdlet (in Service Management mode) to upload the VHD to a storage account in Azure so you can use it with virtual machines. For Linux instructions, see Creating and Uploading a Virtual Hard Disk that Contains the Linux Operating System. For Windows instructions, see Create and upload a Windows Server VHD to Azure.

Are these virtual machines the same as Hyper-V virtual machines?

In many ways they're similar to "Generation 1" Hyper-V VMs, but they're not exactly the same. Both types provide virtualized hardware, and the VHD-format virtual hard disks are compatible. This means you can move them between Hyper-V and Azure. Three key differences that sometimes surprise Hyper-V users are:

Azure doesn't provide console access to a virtual machine. There is no way to access a VM until it is done booting.
Azure VMs in most sizes have only one virtual network adapter, which means that they also can have only one external IP address. (The A8 and A9 sizes use a second network adapter for application communication between instances in limited scenarios.)
Azure VMs don't support Generation 2 Hyper-V VM features. For details about these features, see Virtual Machine Specifications for Hyper-V and Generation 2 Virtual Machine Overview.

Can these virtual machines use my existing, on-premises networking infrastructure?

For virtual machines created in the classic deployment model, you can use Azure Virtual Network to extend your existing infrastructure. The approach is like setting up a branch office. You can provision and manage virtual private networks (VPNs) in Azure as well as securely connect them to on-premises IT infrastructure. For details, see Virtual Network Overview. You'll need to specify the network that you want the virtual machine to belong to when you create the virtual machine. You can't join an existing virtual machine to a virtual network. However, you can work around this by detaching the virtual hard disk (VHD) from the existing virtual machine, and then using it to create a new virtual machine with the networking configuration you want.

How can I access my virtual machine?

You need to establish a remote connection to log on to the virtual machine by using Remote Desktop Connection for a Windows VM or a Secure Shell (SSH) for a Linux VM. For instructions, see:

How to Log on to a Virtual Machine Running Windows Server. A maximum of 2 concurrent connections are supported, unless the server is configured as a Remote Desktop Services session host.
How to Log on to a Virtual Machine Running Linux. By default, SSH allows a maximum of 10 concurrent connections. You can increase this number by editing the configuration file.

If you're having problems with Remote Desktop or SSH, install and use the VMAccess extension to help fix the problem. For Windows VMs, additional options include:

In the Azure classic portal, find the VM, then click Reset Remote Access from the Command bar.
Review Troubleshoot Remote Desktop connections to a Windows-based Azure Virtual Machine.
Use Windows PowerShell Remoting to connect to the VM, or create additional endpoints for other resources to connect to the VM. For details, see How to Set Up Endpoints to a Virtual Machine.

If you're familiar with Hyper-V, you might be looking for a tool similar to VMConnect. Azure doesn't offer a similar tool because console access to a virtual machine isn't supported.

Can I use the temporary disk (the D: drive for Windows or /dev/sdb1 for Linux) to store data?

You shouldn't use the temporary disk (the D: drive by default for Windows or /dev/sdb1 for Linux) to store data. It is only temporary storage, so you would risk losing data that can't be recovered. This can occur when the virtual machine moves to a different host. Resizing a virtual machine, updating the host, or a hardware failure on the host are some of the reasons a virtual machine might move.

How can I change the drive letter of the temporary disk?

On a Windows virtual machine, you can change the drive letter by moving the page file and reassigning drive letters, but you'll need to make sure you do the steps in a specific order. For instructions, see Change the drive letter of the Windows temporary disk.

How can I upgrade the guest operating system?
The term upgrade generally means moving to a more recent release of your operating system, while staying on the same hardware. For Azure VMs, the process for moving to a more recent release differs for Linux and Windows:

For Linux VMs, use the package management tools and procedures appropriate for the distribution.
For a Windows virtual machine, you need to migrate the server using something like the Windows Server Migration Tools. Don't attempt to upgrade the guest OS while it resides on Azure. It isn't supported because of the risk of losing access to the virtual machine. If problems occur during the upgrade, you could lose the ability to start a Remote Desktop session and wouldn't be able to troubleshoot the problems. For general details about the tools and processes for migrating a Windows Server, see Migrate Roles and Features to Windows Server.

What's the default user name and password on the virtual machine?

The images provided by Azure don't have a pre-configured user name and password. When you create a virtual machine using one of those images, you'll need to provide a user name and password, which you'll use to log on to the virtual machine. If you've forgotten the user name or password and you've installed the VM Agent, you can install and use the VMAccess extension to fix the problem.

Additional details:

For the Linux images, if you use the Azure classic portal, 'azureuser' is given as a default user name, but you can change this by using 'From Gallery' instead of 'Quick Create' as the way to create the virtual machine. Using 'From Gallery' also lets you decide whether to use a password, an SSH key, or both to log in. The user account is a non-privileged user that has 'sudo' access to run privileged commands. The 'root' account is disabled.
For Windows images, you'll need to provide a user name and password when you create the VM. The account is added to the Administrators group.

Can Azure run anti-virus on my virtual machines?

Azure offers several options for anti-virus solutions, but it's up to you to manage them. For example, you might need a separate subscription for antimalware software, and you'll need to decide when to run scans and install updates. You can add anti-virus support with a VM extension for Microsoft Antimalware, Symantec Endpoint Protection, or TrendMicro Deep Security Agent when you create a Windows virtual machine, or at a later point. The Symantec and TrendMicro extensions let you use a free limited-time trial subscription or an existing enterprise subscription. Microsoft Antimalware is free of charge. For details, see:

How to install and configure Symantec Endpoint Protection on an Azure VM
How to install and configure Trend Micro Deep Security as a Service on an Azure VM
Deploying Antimalware Solutions on Azure Virtual Machines

What are my options for backup and recovery?

Azure Backup is available as a preview in certain regions. For details, see Back up Azure virtual machines. Other solutions are available from certified partners. To find out what's currently available, search the Azure Marketplace. An additional option is to use the snapshot capabilities of blob storage. To do this, you'll need to shut down the VM before any operation that relies on a blob snapshot. This saves pending data writes and puts the file system in a consistent state.

How does Azure charge for my VM?

Azure charges an hourly price based on the VM's size and operating system. For partial hours, Azure charges only for the minutes of use.
If you create the VM with a VM image containing certain pre-installed software, additional hourly software charges may apply. Azure charges separately for storage for the VM's operating system and data disks. Temporary disk storage is free.

You are charged when the VM status is Running or Stopped, but you are not charged when the VM status is Stopped (De-allocated). To put a VM in the Stopped (De-allocated) state, do one of the following:

Shut down or delete the VM from the Azure classic portal.
Use the Stop-AzureVM cmdlet, available in the Azure PowerShell module.
Use the Shutdown Role operation in the Service Management REST API and specify StoppedDeallocated for the PostShutdownAction element.

For more details, see Virtual Machines Pricing.

Will Azure reboot my VM for maintenance?

Azure sometimes restarts your VM as part of regular, planned maintenance updates in the Azure datacenters. Unplanned maintenance events can occur when Azure detects a serious hardware problem that affects your VM. For unplanned events, Azure automatically migrates the VM to a healthy host and restarts the VM.

For any standalone VM (meaning the VM isn't part of an availability set), Azure notifies the subscription's Service Administrator by email at least one week before planned maintenance because the VMs could be restarted during the update. Applications running on the VMs could experience downtime. You also can use the Azure classic portal or Azure PowerShell to view the reboot logs when the reboot occurred due to planned maintenance. For details, see Viewing VM Reboot Logs.

To provide redundancy, put two or more similarly configured VMs in the same availability set. This helps ensure at least one VM is available during planned or unplanned maintenance. Azure guarantees certain levels of VM availability for this configuration. For details, see Manage the availability of virtual machines.

Additional resources

About Azure Virtual Machines
Different Ways to Create a Linux Virtual Machine
Different Ways to Create a Windows Virtual Machine

Azure App Service, Virtual Machines, Service Fabric, and Cloud Services comparison

Overview

Azure offers several ways to host web sites: Azure App Service, Virtual Machines, Service Fabric, and Cloud Services. This article helps you understand the options and make the right choice for your web application.

Azure App Service is the best choice for most web apps. Deployment and management are integrated into the platform, sites can scale quickly to handle high traffic loads, and the built-in load balancing and traffic manager provide high availability. You can move existing sites to Azure App Service easily with an online migration tool, use an open-source app from the Web Application Gallery, or create a new site using the framework and tools of your choice. The WebJobs feature makes it easy to add background job processing to your App Service web app.

Service Fabric is a good choice if you're creating a new app or re-writing an existing app to use a microservice architecture. Apps, which run on a shared pool of machines, can start small and grow to massive scale with hundreds or thousands of machines as needed. Stateful services make it easy to consistently and reliably store app state, and Service Fabric automatically manages service partitioning, scaling, and availability for you. Service Fabric also supports WebAPI with Open Web Interface for .NET (OWIN) and ASP.NET Core.
Compared to App Service, Service Fabric also provides more control over, or direct access to, the underlying infrastructure. You can remote into your servers or configure server startup tasks. Cloud Services is similar to Service Fabric in degree of control versus ease of use, but it's now a legacy service and Service Fabric is recommended for new development.

If you have an existing application that would require substantial modifications to run in App Service or Service Fabric, you could choose Virtual Machines in order to simplify migrating to the cloud. However, correctly configuring, securing, and maintaining VMs requires much more time and IT expertise compared to Azure App Service and Service Fabric. If you are considering Azure Virtual Machines, make sure you take into account the ongoing maintenance effort required to patch, update, and manage your VM environment. Azure Virtual Machines is Infrastructure-as-a-Service (IaaS), while App Service and Service Fabric are Platform-as-a-Service (PaaS).

Feature Comparison

The following table compares the capabilities of App Service, Cloud Services, Virtual Machines, and Service Fabric to help you make the best choice. For current information about the SLA for each option, see Azure Service Level Agreements.

FEATURE | APP SERVICE (WEB APPS) | CLOUD SERVICES (WEB ROLES) | VIRTUAL MACHINES | SERVICE FABRIC | NOTES

Near-instant deployment -- X X (Note: Deploying an application or an application update to a Cloud Service, or creating a VM, takes several minutes at least; deploying an application to a web app takes seconds.)
Scale up to larger machines without redeploy -- X X
Web server instances share content and configuration, which means you don't have to redeploy or reconfigure as you scale. -- X X
Multiple deployment environments (production and staging) -- X X
Automatic OS update management -- X X
Seamless platform switching (easily move between 32 bit and 64 bit) -- X X
Deploy code with GIT, FTP -- X X
Note: Service Fabric allows you to have multiple environments for your apps or to deploy different versions of your app side-by-side. Automatic OS updates are planned for a future Service Fabric release. X
Deploy code with Web Deploy -- X X
WebMatrix support -- X X
Access to services like Service Bus, Storage, SQL Database -- X X X X
Host web or web services tier of a multi-tier architecture -- X X X X
Host middle tier of a multi-tier architecture -- X X X X
Notes: Cloud Services supports the use of Web Deploy to deploy updates to individual role instances. However, you can't use it for initial deployment of a role, and if you use Web Deploy for an update you have to deploy separately to each instance of a role. Multiple instances are required in order to qualify for the Cloud Service SLA for production environments. App Service web apps can easily host a REST API middle tier, and the WebJobs feature can host background processing jobs. You can run WebJobs in a dedicated website to achieve independent scalability for the tier. The preview API apps feature provides even more features for hosting REST services.
Integrated MySQL-as-a-service support -- X X X (Note: Cloud Services can integrate MySQL-as-a-service through ClearDB's offerings, but not as part of the Azure Portal workflow.)
Support for ASP.NET, classic ASP, Node.js, PHP, Python -- X X X X (Note: Service Fabric supports the creation of a web front-end using ASP.NET 5, or you can deploy any type of application (Node.js, Java, etc.) as a guest executable.)
Scale out to multiple instances without redeploy -- X X X X (Note: Virtual Machines can scale out to multiple instances, but the services running on them must be written to handle this scale-out. You have to configure a load balancer to route requests across the machines, and create an Affinity Group to prevent simultaneous restarts of all instances due to maintenance or hardware failures.)
Support for SSL -- X X X X (Note: For App Service web apps, SSL for custom domain names is only supported for Basic and Standard mode. For information about using SSL with web apps, see Configuring an SSL certificate for an Azure Website.)
Visual Studio integration -- X X X X
Remote Debugging -- X X X
Deploy code with TFS -- X X X X
Network isolation with Azure Virtual Network -- X X X X (Note: See also Azure Websites Virtual Network Integration.)
Support for Azure Traffic Manager -- X X X X
Integrated Endpoint Monitoring -- X X X
Remote desktop access to servers -- X X X
Install any custom MSI -- X X X
Ability to define/execute start-up tasks -- X X X
Can listen to ETW events -- X X X
Note: Service Fabric allows you to host any executable file as a guest executable, or you can install any app on the VMs.

Scenarios and recommendations

Here are some common application scenarios with recommendations as to which Azure web hosting option might be most appropriate for each.

I need a web front end with background processing and database backend to run business applications integrated with on-premises assets.
I need a reliable way to host my corporate website that scales well and offers global reach.
I have an IIS6 application running on Windows Server 2003.
I'm a small business owner, and I need an inexpensive way to host my site but with future growth in mind.
I'm a web or graphic designer, and I want to design and build web sites for my customers.
I'm migrating my multi-tier application with a web front-end to the Cloud.
My application depends on highly customized Windows or Linux environments and I want to move it to the cloud.
My site uses open source software, and I want to host it in Azure.
I have a line-of-business application that needs to connect to the corporate network.
I want to host a REST API or web service for mobile clients.

I need a web front end with background processing and database backend to run business applications integrated with on-premises assets.

Azure App Service is a great solution for complex business applications. It lets you develop apps that scale automatically on a load balanced platform, are secured with Active Directory, and connect to your on-premises resources. It makes managing those apps easy through a world-class portal and APIs, and allows you to gain insight into how customers are using them with app insight tools. The WebJobs feature lets you run background processes and tasks as part of your web tier, while hybrid connectivity and VNET features make it easy to connect back to on-premises resources.
Azure App Service provides three 9's SLA for web apps and enables you to:

Run your applications reliably on a self-healing, auto-patching cloud platform.
Scale automatically across a global network of datacenters.
Back up and restore for disaster recovery.
Be ISO, SOC2, and PCI compliant.
Integrate with Active Directory.

I need a reliable way to host my corporate website that scales well and offers global reach.

Azure App Service is a great solution for hosting corporate websites. It enables web apps to scale quickly and easily to meet demand across a global network of datacenters. It offers local reach, fault tolerance, and intelligent traffic management. All on a platform that provides world-class management tools, allowing you to gain insight into site health and site traffic quickly and easily.

Azure App Service provides three 9's SLA for web apps and enables you to:

Run your websites reliably on a self-healing, auto-patching cloud platform.
Scale automatically across a global network of datacenters.
Back up and restore for disaster recovery.
Manage logs and traffic with integrated tools.
Be ISO, SOC2, and PCI compliant.
Integrate with Active Directory.

I have an IIS6 application running on Windows Server 2003.

Azure App Service makes it easy to avoid the infrastructure costs associated with migrating older IIS6 applications. Microsoft has created easy-to-use migration tools and detailed migration guidance that enable you to check compatibility and identify any changes that need to be made. Integration with Visual Studio, TFS, and common CMS tools makes it easy to deploy IIS6 applications directly to the cloud. Once deployed, the Azure Portal provides robust management tools that enable you to scale down to manage costs and up to meet demand as necessary.

With the migration tool you can:

Quickly and easily migrate your legacy Windows Server 2003 web application to the cloud.
Opt to leave your attached SQL database on-premises to create a hybrid application.
Automatically move your SQL database along with your legacy application.

I'm a small business owner, and I need an inexpensive way to host my site but with future growth in mind.

Azure App Service is a great solution for this scenario, because you can start using it for free and then add more capabilities when you need them. Each free web app comes with a domain provided by Azure (your_company.azurewebsites.net), and the platform includes integrated deployment and management tools as well as an application gallery that make it easy to get started. There are many other services and scaling options that allow the site to evolve with increased user demand.

With Azure App Service, you can:

Begin with the free tier and then scale up as needed.
Use the Application Gallery to quickly set up popular web applications, such as WordPress.
Add additional Azure services and features to your application as needed.
Secure your web app with HTTPS.

I'm a web or graphic designer, and I want to design and build websites for my customers.

For web developers and designers, Azure App Service integrates easily with a variety of frameworks and tools, includes deployment support for Git and FTP, and offers tight integration with tools and services such as Visual Studio and SQL Database.

With App Service, you can:

Use command-line tools for automated tasks.
Work with popular languages such as .NET, PHP, Node.js, and Python.
Select three different scaling levels for scaling up to very high capacities.
Integrate with other Azure services, such as SQL Database, Service Bus and Storage, or partner offerings from the Azure Store, such as MySQL and MongoDB.
Integrate with tools such as Visual Studio, Git, WebMatrix, WebDeploy, TFS, and FTP.

I'm migrating my multi-tier application with a web front-end to the Cloud.

If you're running a multi-tier application, such as a web server that connects to a database, Azure App Service is a good option that offers tight integration with Azure SQL Database. And you can use the WebJobs feature for running backend processes. Choose Service Fabric for one or more of your tiers if you need more control over the server environment, such as the ability to remote into your server or configure server startup tasks. Choose Virtual Machines for one or more of your tiers if you want to use your own machine image or run server software or services that you can't configure on Service Fabric.

My application depends on highly customized Windows or Linux environments and I want to move it to the cloud.

If your application requires complex installation or configuration of software and the operating system, Virtual Machines is probably the best solution. With Virtual Machines, you can:

Use the Virtual Machine gallery to start with an operating system, such as Windows or Linux, and then customize it for your application requirements.
Create and upload a custom image of an existing on-premises server to run on a virtual machine in Azure.

My site uses open source software, and I want to host it in Azure.

If your open source framework is supported on App Service, the languages and frameworks needed by your application are configured for you automatically. App Service enables you to:

Use many popular open source languages, such as .NET, PHP, Node.js, and Python.
Set up WordPress, Drupal, Umbraco, DNN, and many other third-party web applications.
Migrate an existing application or create a new one from the Application Gallery.

If your open source framework is not supported on App Service, you can run it on one of the other Azure web hosting options. With Virtual Machines, you install and configure the software on the machine image, which can be Windows or Linux-based.

I have a line-of-business application that needs to connect to the corporate network.

If you want to create a line-of-business application, your website might require direct access to services or data on the corporate network. This is possible on App Service, Service Fabric, and Virtual Machines using the Azure Virtual Network service. On App Service you can use the VNET integration feature, which allows your Azure applications to run as if they were on your corporate network.

I want to host a REST API or web service for mobile clients.

HTTP-based web services enable you to support a wide variety of clients, including mobile clients. Frameworks like ASP.NET Web API integrate with Visual Studio to make it easier to create and consume REST services. These services are exposed from a web endpoint, so it is possible to use any web hosting technique on Azure to support this scenario. However, App Service is a great choice for hosting REST APIs. With App Service, you can (see the sketch after this list):

Quickly create a mobile app or API app to host the HTTP web service in one of Azure's globally distributed datacenters.
Migrate existing services or create new ones.
Achieve SLA for availability with a single instance, or scale out to multiple dedicated machines.
Use the published site to provide REST APIs to any HTTP clients, including mobile clients.
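To give the REST API scenario a concrete starting point, here is a hedged sketch of creating an App Service plan and a web app with Resource Manager PowerShell. The resource group, plan, and app names are hypothetical placeholders, and the web app name must be globally unique.

```powershell
# Sketch only: Resource Manager (AzureRM) PowerShell.
# Resource group, plan, and app names are hypothetical placeholders.

# Create an App Service plan to host the app.
New-AzureRmAppServicePlan -ResourceGroupName "MyResourceGroup" `
    -Name "MyPlan" -Location "East US" -Tier "Standard"

# Create the web app in that plan.
New-AzureRmWebApp -ResourceGroupName "MyResourceGroup" `
    -Name "my-rest-api-app" -Location "East US" -AppServicePlan "MyPlan"
```

You would then deploy the API code to the web app with Git, Web Deploy, FTP, or TFS, as described in the feature comparison above.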
NOTE
If you want to get started with Azure App Service before signing up for an account, go to https://trywebsites.azurewebsites.net, where you can immediately create a short-lived starter app in Azure App Service for free. No credit card required, no commitments.

Next Steps

For more information about the web hosting options, see Introducing Azure. To get started with the option(s) you choose for your application, see the following resources:

Azure App Service
Azure Cloud Services
Azure Virtual Machines
Service Fabric

Create a virtual machine running Windows in the Azure portal

IMPORTANT
Azure has two different deployment models for creating and working with resources: Resource Manager and Classic. This article covers using the Classic deployment model. Microsoft recommends that most new deployments use the Resource Manager model. Learn how to perform these steps using the Resource Manager deployment model in the Azure portal.

This tutorial shows you how to create an Azure virtual machine (VM) running Windows in the Azure portal. We'll use a Windows Server image as an example, but that's just one of the many images Azure offers. Note that your image choices depend on your subscription. For example, Windows desktop images may be available to MSDN subscribers.

This section shows you how to use the Dashboard in the Azure portal to select and then create the virtual machine. You can also create VMs using your own images. To learn about this and other methods, see Different ways to create a Windows virtual machine.

Create the virtual machine

1. Sign in to the Azure portal.
2. Starting in the upper left, click New > Compute > Windows Server 2016 Datacenter.
3. On the Windows Server 2016 Datacenter blade, select the Classic deployment model. Click Create.

1. Basics blade

The Basics blade requests administrative information for the virtual machine.

1. Enter a Name for the virtual machine. In the example, HeroVM is the name of the virtual machine. The name must be 1-15 characters long and it cannot contain special characters.
2. Enter a User name and a strong Password that are used to create a local account on the VM. The local account is used to sign in to and manage the VM. In the example, azureuser is the user name. The password must be 8-123 characters long and meet three out of the four following complexity requirements: one lower case character, one upper case character, one number, and one special character. See more about username and password requirements.
3. The Subscription is optional. One common setting is "Pay-As-You-Go".
4. Select an existing Resource group or type the name for a new one. In the example, HeroVMRG is the name of the resource group.
5. Select an Azure datacenter Location where you want the VM to run. In the example, East US is the location.
6. When you are done, click Next to continue to the next blade.

2. Size blade

The Size blade identifies the configuration details of the VM, and lists various choices that include OS, number of processors, disk storage type, and estimated monthly usage costs. Choose a VM size, and then click Select to continue. In this example, DS1_V2 Standard is the VM size.

3. Settings blade

The Settings blade requests storage and network options. You can accept the default settings. Azure creates appropriate entries where necessary. If you selected a virtual machine size that supports it, you can try Azure Premium Storage by selecting Premium (SSD) in Disk type.
When you're done making changes, click OK.

4. Summary blade

The Summary blade lists the settings specified in the previous blades. Click OK when you're ready to create the virtual machine.

After the virtual machine is created, the portal lists the new virtual machine under All resources, and displays a tile of the virtual machine on the dashboard. The corresponding cloud service and storage account also are created and listed. Both the virtual machine and cloud service are started automatically and their status is listed as Running.

Next steps

Learn how to create a VM using the Resource Manager deployment model in the Azure portal.
Log on to the virtual machine. For instructions, see Log on to a virtual machine running Windows Server.
Attach a disk to store data. You can attach both empty disks and disks that contain data. For instructions, see Attach a data disk to a Windows virtual machine created with the classic deployment model.

Log on to a Windows virtual machine using the Azure portal

In the Azure portal, you use the Connect button to start a Remote Desktop session and log on to a Windows VM. Do you want to connect to a Linux VM? See How to log on to a virtual machine running Linux.

IMPORTANT
Azure has two different deployment models for creating and working with resources: Resource Manager and Classic. This article covers using the Classic deployment model. Microsoft recommends that most new deployments use the Resource Manager model. For information about how to log on to a VM using the Resource Manager model, see here.

Connect to the virtual machine

1. Sign in to the Azure portal.
2. Click on the virtual machine that you want to access. The name is listed in the All resources pane.
3. Click Connect on the command bar atop the virtual machine dashboard.

Log on to the virtual machine

1. Clicking Connect creates and downloads a Remote Desktop Protocol file (.rdp file). Click Open to use this file.
2. You will get a warning that the .rdp file is from an unknown publisher. This is normal. In the Remote Desktop window, click Connect to continue.
3. In the Windows Security window, type the credentials for an account on the virtual machine and then click OK.
Local account - this is usually the local account user name and password that you specified when you created the virtual machine. In this case, the domain is the name of the virtual machine and it is entered as vmname\username.
Domain-joined VM - if the VM belongs to a domain, enter the user name in the format Domain\Username. The account also needs to either be in the Administrators group or have been granted remote access privileges to the VM.
Domain controller - if the VM is a domain controller, type the user name and password of a domain administrator account for that domain.
4. Click Yes to verify the identity of the virtual machine and finish logging on.

Next steps

If the Connect button is inactive or you are having other problems with the Remote Desktop connection, try resetting the configuration: click Reset remote access from the virtual machine dashboard.
For problems with your password, try resetting it. Click Reset password along the left edge of the virtual machine dashboard, under Support + Troubleshooting.
If those tips don't work or aren't what you need, see Troubleshoot Remote Desktop connections to a Windows-based Azure Virtual Machine. That article walks you through diagnosing and resolving common problems.
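If you prefer the command line to the portal, the connect and reset steps above can be approximated with the classic (Service Management) Azure PowerShell cmdlets. A minimal sketch follows; the cloud service name, VM name, and credentials are hypothetical placeholders.

```powershell
# Sketch only: classic (Service Management) Azure PowerShell.
# Cloud service, VM name, and credentials are hypothetical placeholders.

# Download the .rdp file for the VM and launch the Remote Desktop client.
Get-AzureRemoteDesktopFile -ServiceName "myCloudService" -Name "myVM" -Launch

# If log-on fails, reset the local account credentials with the VMAccess extension.
Get-AzureVM -ServiceName "myCloudService" -Name "myVM" |
    Set-AzureVMAccessExtension -UserName "azureuser" -Password "NewP@ssw0rd!" |
    Update-AzureVM
```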
Install the Azure CLI 1.0

IMPORTANT
This topic describes how to install the Azure CLI 1.0, which is built on Node.js and supports all classic deployment API calls as well as a large number of Resource Manager deployment activities. You should use the Azure CLI 2.0 for new or forward-looking CLI deployments and management.

Quickly install the Azure Command-Line Interface (Azure CLI 1.0) to use a set of open-source shell-based commands for creating and managing resources in Microsoft Azure. You have several options to install these cross-platform tools on your computer:

npm package - Run npm (the package manager for JavaScript) to install the latest Azure CLI 1.0 package on your Linux distribution or OS. Requires Node.js and npm on your computer.
Installer - Download an installer for easy installation on Mac or Windows.
Docker container - Start using the latest CLI in a ready-to-run Docker container. Requires a Docker host on your computer.

For more options and background, see the project repository on GitHub. Once the Azure CLI 1.0 is installed, connect it with your Azure subscription and run the azure commands from your command-line interface (Bash, Terminal, Command prompt, and so on) to work with your Azure resources.

Option 1: Install an npm package

To install the CLI from an npm package, make sure you have downloaded and installed the latest Node.js and npm. Then, run npm install to install the azure-cli package:

npm install -g azure-cli

On Linux distributions, you might need to use sudo to successfully run the npm command, as follows:

sudo npm install -g azure-cli

NOTE
If you need to install or update Node.js and npm on your Linux distribution or OS, we recommend that you install the most recent Node.js LTS version (4.x). If you use an older version, you might get installation errors.

If you prefer, download the latest Linux tar file for the npm package locally. Then, install the downloaded npm package as follows (on Linux distributions you might need to use sudo):

npm install -g <path to downloaded tar file>

Option 2: Use an installer

If you use a Mac or Windows computer, the following CLI installers are available for download:

Mac OS X installer
Windows MSI

TIP
On Windows, you can also download the Web Platform Installer to install the CLI. This installer gives you the option to install additional Azure SDK and command-line tools after installing the CLI.

Option 3: Use a Docker container

If you have set up your computer as a Docker host, you can run the latest Azure CLI 1.0 in a Docker container. Run the following command (on Linux distributions you might need to use sudo):

docker run -it microsoft/azure-cli

Run Azure CLI 1.0 commands

After the Azure CLI 1.0 is installed, run the azure command from your command-line user interface (Bash, Terminal, Command prompt, and so on). For example, to run the help command, type the following:

azure help

NOTE
On some Linux distributions, you may receive an error similar to /usr/bin/env: 'node': No such file or directory. This error comes from recent installations of Node.js being installed at /usr/bin/nodejs. To fix it, create a symbolic link to /usr/bin/node by running this command:

sudo ln -s /usr/bin/nodejs /usr/bin/node

To see the version of the Azure CLI 1.0 you installed, type the following:

azure --version

Now you are ready! To access all the CLI commands to work with your own resources, connect to your Azure subscription from the Azure CLI.
NOTE When you first use Azure CLI, you see a message asking if you want to allow Microsoft to collect usage information. Participation is voluntary. If you choose to participate, you can stop at any time by running azure telemetry --disable . To enable participation at any time, run azure telemetry --enable . Update the CLI Microsoft frequently releases updated versions of the Azure CLI. Reinstall the CLI using the installer for your operating system, or run the latest Docker container. Or, if you have the latest Node.js and npm installed, update by typing the following (on Linux distributions you might need to use sudo). npm update -g azure-cli Enable tab completion Tab completion of CLI commands is supported for Mac and Linux. To enable it in zsh, run: echo '. <(azure --completion)' >> .zshrc To enable it in bash, run: azure --completion >> ~/azure.completion.sh echo 'source ~/azure.completion.sh' >> ~/.bash_profile Next steps Connect from the CLI to your Azure subscription to create and manage Azure resources. To learn more about the Azure CLI, download source code, report problems, or contribute to the project, visit the GitHub repository for the Azure CLI. If you have questions about using the Azure CLI, or Azure, visit the Azure Forums. Attach a data disk to a Windows virtual machine created with the classic deployment model 4/3/2017 • 3 min to read • Edit Online This article shows you how to attach new and existing disks created with the Classic deployment model to a Windows virtual machine using the Azure portal. You can also attach a data disk to a Linux VM in the Azure portal. Before you attach a disk, review these tips: The size of the virtual machine controls how many data disks you can attach. For details, see Sizes for virtual machines. To use Premium storage, you need a DS-series or GS-series virtual machine. You can use disks from both Premium and Standard storage accounts with these virtual machines. Premium storage is available in certain regions. For details, see Premium Storage: High-Performance Storage for Azure Virtual Machine Workloads. For a new disk, you don't need to create it first because Azure creates it when you attach it. You can also attach a data disk using Powershell. IMPORTANT Azure has two different deployment models for creating and working with resources: Resource Manager and Classic. Find the virtual machine 1. Sign in to the Azure portal. 2. Select the virtual machine from the resource listed on the dashboard. 3. In the left pane under Settings, click Disks. Continue by following instructions for attaching either a new disk or an existing disk. Option 1: Attach and initialize a new disk 1. On the Disks blade, click Attach new. 2. Review the default settings, update as necessary, and then click OK. 3. After Azure creates the disk and attaches it to the virtual machine, the new disk is listed in the virtual machine's disk settings under Data Disks. Initialize a new data disk 1. Connect to the virtual machine. For instructions, see How to connect and log on to an Azure virtual machine running Windows. 2. After you log on to the virtual machine, open Server Manager. In the left pane, select File and Storage Services. 3. Select Disks. 4. The Disks section lists the disks. Most often, a virtual machine has disk 0, disk 1, and disk 2. Disk 0 is the operating system disk, disk 1 is the temporary disk, and disk 2 is the data disk newly attached to the virtual machine. The data disk lists the Partition as Unknown. Right-click the disk and select Initialize. 5. 
You're notified that all data will be erased when the disk is initialized. Click Yes to acknowledge the warning and initialize the disk. Once complete, the partition will be listed as GPT. Right-click the disk again and select New Volume. 6. Complete the wizard using the default values. When the wizard is done, the Volumes section lists the new volume. The disk is now online and ready to store data. Option 2: Attach an existing disk 1. On the Disks blade, click Attach existing. 2. Under Attach existing disk, click Location. 3. Under Storage accounts, select the account and container that holds the .vhd file. 4. Select the .vhd file. 5. Under Attach existing disk, the file you just selected is listed under VHD File. Click OK. 6. After Azure attaches the disk to the virtual machine, it's listed in the virtual machine's disk settings under Data Disks. Use TRIM with standard storage If you use standard storage (HDD), you should enable TRIM. TRIM discards unused blocks on the disk so you are only billed for storage that you are actually using. Using TRIM can save costs, including unused blocks that result from deleting large files. You can run this command to check the TRIM setting. Open a command prompt on your Windows VM and type: fsutil behavior query DisableDeleteNotify If the command returns 0, TRIM is enabled correctly. If it returns 1, run the following command to enable TRIM: fsutil behavior set DisableDeleteNotify 0 Next steps If your application needs to use the D: drive to store data, you can change the drive letter of the Windows temporary disk. Additional resources About disks and VHDs for virtual machines How to detach a disk from a Windows virtual machine 3/27/2017 • 1 min to read • Edit Online IMPORTANT Azure has two distinct deployment models for creating and working with resources: Resource Manager and Classic. This article covers using the Classic deployment model. Microsoft recommends that most new deployments use the Resource Manager model. For information about how to detach a disk using the Resource Manager model, see here. When you no longer need a data disk that's attached to a virtual machine, you can easily detach it. Detaching a disk removes the disk from the virtual machine, but doesn't delete the disk from the Azure storage account. If you want to use the existing data on the disk again, you can reattach it to the same virtual machine, or another one. NOTE To detach an operating system disk, you first need to delete the virtual machine. Find the disk If you don't know the name of the disk or want to verify it before you detach it, follow these steps. 1. Sign in to the Azure portal. 2. Click Virtual Machines, and then select the appropriate VM. 3. Click Disks along the left edge of the virtual machine dashboard, under Settings. The virtual machine dashboard lists the name and type of all attached disks. For example, this screen shows a virtual machine with one operating system (OS) disk and one data disk: Detach the disk 1. From the Azure portal, click Virtual Machines, and then click the name of the virtual machine that has the data disk you want to detach. 2. Click Disks along the left edge of the virtual machine dashboard, under Settings. 3. Click the disk you want to detach. 4. From the command bar, click Detach. 5. In the confirmation window, click Yes to detach the disk. The disk remains in storage but is no longer attached to a virtual machine. 
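If you manage classic VMs from Azure PowerShell, the same detach operation is available there. The sketch below uses hypothetical cloud service and VM names and assumes the data disk you want to remove sits at LUN 2; list the disks first to confirm the LUN.
# List the data disks and their LUNs so you detach the right one.
Get-AzureVM -ServiceName "MyCloudService" -Name "MyVM" | Get-AzureDataDisk
# Detach the disk at LUN 2. The VHD stays in the storage account unless you add -DeleteVHD.
Get-AzureVM -ServiceName "MyCloudService" -Name "MyVM" |
    Remove-AzureDataDisk -LUN 2 |
    Update-AzureVM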
Additional resources About disks and VHDs for virtual machines How to attach a data disk to a Windows virtual machine 1 min to read • Edit O nline How to set up endpoints on a classic Windows virtual machine in Azure 3/30/2017 • 5 min to read • Edit Online All Windows virtual machines that you create in Azure using the classic deployment model can automatically communicate over a private network channel with other virtual machines in the same cloud service or virtual network. However, computers on the Internet or other virtual networks require endpoints to direct the inbound network traffic to a virtual machine. This article is also available for Linux virtual machines. IMPORTANT Azure has two different deployment models for creating and working with resources: Resource Manager and Classic. This article covers using the Classic deployment model. Microsoft recommends that most new deployments use the Resource Manager model. In the Resource Manager deployment model, endpoints are configured using Network Security Groups (NSGs). For more information, see Allow external access to your VM using the Azure Portal. When you create a Windows virtual machine in the Azure classic portal, common endpoints like those for Remote Desktop and Windows PowerShell Remoting are typically created for you automatically. You can configure additional endpoints while creating the virtual machine or afterwards as needed. Each endpoint has a public port and a private port: The public port is used by the Azure load balancer to listen for incoming traffic to the virtual machine from the Internet. The private port is used by the virtual machine to listen for incoming traffic, typically destined to an application or service running on the virtual machine. Default values for the IP protocol and TCP or UDP ports for well-known network protocols are provided when you create endpoints with the Azure classic portal. For custom endpoints, you'll need to specify the correct IP protocol (TCP or UDP) and the public and private ports. To distribute incoming traffic randomly across multiple virtual machines, you'll need to create a load-balanced set consisting of multiple endpoints. After you create an endpoint, you can use an access control list (ACL) to define rules that permit or deny the incoming traffic to the public port of the endpoint based on its source IP address. However, if the virtual machine is in an Azure virtual network, you should use network security groups instead. For details, see About network security groups. NOTE Firewall configuration for Azure virtual machines is done automatically for ports associated with remote connectivity endpoints that Azure sets up automatically. For ports specified for all other endpoints, no configuration is done automatically to the firewall of the virtual machine. When you create an endpoint for the virtual machine, you'll need to ensure that the firewall of the virtual machine also allows the traffic for the protocol and private port corresponding to the endpoint configuration. To configure the firewall, see the documentation or on-line help for the operating system running on the virtual machine. Create an endpoint 1. If you haven't already done so, sign in to the Azure classic portal. 2. Click Virtual Machines, and then click the name of the virtual machine that you want to configure. 3. Click Endpoints. The Endpoints page lists all the current endpoints for the virtual machine. (This example is a Windows VM. A Linux VM will by default show an endpoint for SSH.) 4. 
In the taskbar, click Add. 5. On the Add an endpoint to a virtual machine page, choose the type of endpoint. 6. If you're creating a new endpoint that isn't part of a load-balanced set, or is the first endpoint in a new load-balanced set, choose Add a stand-alone endpoint, then click the left arrow. Otherwise, choose Add an endpoint to an existing load-balanced set, select the name of the load-balanced set, then click the left arrow. 7. On the Specify the details of the endpoint page, type a name for the endpoint in Name. You can also choose a network protocol name from the list, which will fill in initial values for the Protocol, Public Port, and Private Port. For a customized endpoint, in Protocol, choose either TCP or UDP. For customized ports, in Public Port, type the port number for the incoming traffic from the Internet. In Private Port, type the port number on which the virtual machine is listening. These port numbers can be different. Ensure that the firewall on the virtual machine has been configured to allow the traffic corresponding to the protocol (in step 7) and private port. For a stand-alone endpoint that you aren't load balancing, click the check mark to create the endpoint. 8. If this endpoint will be the first one in a load-balanced set, click Create a load-balanced set, and then click the right arrow. 9. On the Configure the load-balanced set page, specify a load-balanced set name, a probe protocol and port, and the probe interval and number of probes sent. The Azure load balancer sends probes to the virtual machines in a load-balanced set to monitor their availability. The Azure load balancer does not forward traffic to virtual machines that do not respond to the probe. Click the right arrow. 10. Click the check mark to create the endpoint. The new endpoint will be listed on the Endpoints page. Manage the ACL on an endpoint To define the set of computers that can send traffic, the ACL on an endpoint can restrict traffic based upon source IP address. Follow these steps to add, modify, or remove an ACL on an endpoint. NOTE If the endpoint is part of a load-balanced set, any changes you make to the ACL on an endpoint are applied to all endpoints in the set. If the virtual machine is in an Azure virtual network, we recommend network security groups instead of ACLs. For details, see About network security groups. 1. If you haven't already done so, sign in to the Azure classic portal. 2. Click Virtual Machines, and then click the name of the virtual machine that you want to configure. 3. Click Endpoints. From the list, select the appropriate endpoint. 4. In the taskbar, click Manage ACL to open the Specify ACL details dialog box. 5. Use rows in the list to add, delete, or edit rules for an ACL and change their order. The Remote Subnet value is an IP address range for incoming traffic from the Internet that the Azure load balancer uses to permit or deny the traffic based on its source IP address. Be sure to specify the IP address range in CIDR format, also known as address prefix format. An example is 131.107.0.0/16. You can use rules to allow only traffic from specific computers corresponding to your computers on the Internet or to deny traffic from specific, known address ranges. The rules are evaluated in order starting with the first rule and ending with the last rule. This means that rules should be ordered from least restrictive to most restrictive. For examples and more information, see What is a Network Access Control List?.
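The portal steps above have equivalents in the classic (Service Management) PowerShell module. The sketch below creates a stand-alone HTTP endpoint and attaches an ACL that permits only the 131.107.0.0/16 range used as the example earlier; the cloud service, VM, and endpoint names are hypothetical placeholders.
# Build an ACL with one permit rule; once a permit rule exists, all other source traffic is denied.
$acl = New-AzureAclConfig
Set-AzureAclConfig -AddRule -ACL $acl -Order 100 -Action Permit `
    -RemoteSubnet "131.107.0.0/16" -Description "Allow only this address range"
# Add a TCP endpoint for HTTP, apply the ACL, and push the change to the VM.
Get-AzureVM -ServiceName "MyCloudService" -Name "MyVM" |
    Add-AzureEndpoint -Name "HttpIn" -Protocol tcp -PublicPort 80 -LocalPort 80 -ACL $acl |
    Update-AzureVM
Remember that the guest firewall must still allow the private port, as described in the note above.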
Next steps To use an Azure PowerShell cmdlet to set up a VM endpoint, see Add-AzureEndpoint. To use an Azure PowerShell cmdlet to manage an ACL on an endpoint, see Managing access control lists (ACLs) for endpoints by using PowerShell. If you created a virtual machine in the Resource Manager deployment model, you can use Azure PowerShell to create network security groups to control traffic to the VM. Connect Windows virtual machines created with the classic deployment model with a virtual network or cloud service 3/27/2017 • 2 min to read • Edit Online IMPORTANT Azure has two different deployment models for creating and working with resources: Resource Manager and Classic. This article covers using the Classic deployment model. Microsoft recommends that most new deployments use the Resource Manager model. Windows virtual machines created with the classic deployment model are always placed in a cloud service. The cloud service acts as a container and provides a unique public DNS name, a public IP address, and a set of endpoints to access the virtual machine over the Internet. The cloud service can be in a virtual network, but that's not a requirement. You can also connect Linux virtual machines with a virtual network or cloud service. If a cloud service isn't in a virtual network, it's called a standalone cloud service. The virtual machines in a standalone cloud service can only communicate with other virtual machines by using the other virtual machines’ public DNS names, and the traffic travels over the Internet. If a cloud service is in a virtual network, the virtual machines in that cloud service can communicate with all other virtual machines in the virtual network without sending any traffic over the Internet. If you place your virtual machines in the same standalone cloud service, you can still use load balancing and availability sets. For details, see Load balancing virtual machines and Manage the availability of virtual machines. However, you can't organize the virtual machines on subnets or connect a standalone cloud service to your onpremises network. Here's an example: If you place your virtual machines in a virtual network, you can decide how many cloud services you want to use for load balancing and availability sets. Additionally, you can organize the virtual machines on subnets in the same way as your on-premises network and connect the virtual network to your on-premises network. Here's an example: Virtual networks are the recommended way to connect virtual machines in Azure. The best practice is to configure each tier of your application in a separate cloud service. However, you may need to combine some virtual machines from different application tiers into the same cloud service to remain within the maximum of 200 cloud services per subscription. To review this and other limits, see Azure Subscription and Service Limits, Quotas, and Constraints. Connect VMs in a virtual network To connect virtual machines in a virtual network: 1. Create the virtual network in the Azure portal. 2. Create the set of cloud services for your deployment to reflect your design for availability sets and load balancing. In the Azure classic portal, click New > Compute > Cloud Service > Custom Create for each cloud service. 3. To create each new virtual machine, click New > Compute > Virtual Machine > From Gallery. Choose the correct cloud service and virtual network for the VM. If the cloud service is already joined to a virtual network, its name will already be selected for you. 
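If you script your deployments, the same placement decision is made at creation time in Azure PowerShell by specifying the subnet and virtual network when the VM is created. This is a condensed sketch only; "TestVNet", "FrontEnd", "MyCloudService", and "web1" are hypothetical names, and the cloud service is created in the same region as the virtual network.
# Pick the latest Windows Server 2012 R2 Datacenter image from the gallery.
$image = Get-AzureVMImage | where { $_.ImageFamily -eq "Windows Server 2012 R2 Datacenter" } |
    sort PublishedDate -Descending | select -ExpandProperty ImageName -First 1
# Build the VM configuration, place it on a subnet, and create it in a new cloud service joined to the VNet.
New-AzureVMConfig -Name "web1" -InstanceSize Small -ImageName $image |
    Add-AzureProvisioningConfig -Windows -AdminUsername "azureuser" -Password "<strong password>" |
    Set-AzureSubnet -SubnetNames "FrontEnd" |
    New-AzureVM -ServiceName "MyCloudService" -Location "West US" -VNetName "TestVNet"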
Connect VMs in a standalone cloud service To connect virtual machines in a standalone cloud service: 1. Create the cloud service in the Azure classic portal. Click New > Compute > Cloud Service > Custom Create. Or, you can create the cloud service for your deployment when you create your first virtual machine. 2. When you create the virtual machines, choose the name of cloud service created in the previous step. Next steps After you create a virtual machine, it's a good idea to add a data disk so your services and workloads have a location to store data. Connect virtual networks from different deployment models using PowerShell 4/27/2017 • 14 min to read • Edit Online This article shows you how to connect classic VNets to Resource Manager VNets to allow the resources located in the separate deployment models to communicate with each other. The steps in this article use PowerShell, but you can also create this configuration using the Azure portal by selecting the article from this list. Connecting a classic VNet to a Resource Manager VNet is similar to connecting a VNet to an on-premises site location. Both connectivity types use a VPN gateway to provide a secure tunnel using IPsec/IKE. You can create a connection between VNets that are in different subscriptions and in different regions. You can also connect VNets that already have connections to on-premises networks, as long as the gateway that they have been configured with is dynamic or route-based. For more information about VNet-to-VNet connections, see the VNet-to-VNet FAQ at the end of this article. If your VNets are in the same region, you may want to instead consider connecting them using VNet Peering. VNet peering does not use a VPN gateway. For more information, see VNet peering. Before beginning The following steps walk you through the settings necessary to configure a dynamic or route-based gateway for each VNet and create a VPN connection between the gateways. This configuration does not support static or policybased gateways. Prerequisites Both VNets have already been created. The address ranges for the VNets do not overlap with each other, or overlap with any of the ranges for other connections that the gateways may be connected to. You have installed the latest PowerShell cmdlets. See How to install and configure Azure PowerShell for more information. Make sure you install both the Service Management (SM) and the Resource Manager (RM) cmdlets. Example settings You can use these values to create a test environment, or refer to them to better understand the examples in this article. Classic VNet settings VNet Name = ClassicVNet Location = West US Virtual Network Address Spaces = 10.0.0.0/24 Subnet-1 = 10.0.0.0/27 GatewaySubnet = 10.0.0.32/29 Local Network Name = RMVNetLocal GatewayType = DynamicRouting Resource Manager VNet settings VNet Name = RMVNet Resource Group = RG1 Virtual Network IP Address Spaces = 192.168.0.0/16 Subnet-1 = 192.168.1.0/24 GatewaySubnet = 192.168.0.0/26 Location = East US Gateway public IP name = gwpip Local Network Gateway = ClassicVNetLocal Virtual Network Gateway name = RMGateway Gateway IP addressing configuration = gwipconfig Section 1 - Configure the classic VNet Part 1 - Download your network configuration file 1. Log in to your Azure account in the PowerShell console with elevated rights. The following cmdlet prompts you for the login credentials for your Azure Account. After logging in, it downloads your account settings so that they are available to Azure PowerShell. 
You use the SM PowerShell cmdlets to complete this part of the configuration. Add-AzureAccount 2. Export your Azure network configuration file by running the following command. You can change the location of the file to export to a different location if necessary. Get-AzureVNetConfig -ExportToFile C:\AzureNet\NetworkConfig.xml 3. Open the .xml file that you downloaded to edit it. For an example of the network configuration file, see the Network Configuration Schema. Part 2 -Verify the gateway subnet In the VirtualNetworkSites element, add a gateway subnet to your VNet if one has not already been created. When working with the network configuration file, the gateway subnet MUST be named "GatewaySubnet" or Azure cannot recognize and use it as a gateway subnet. IMPORTANT When working with gateway subnets, avoid associating a network security group (NSG) to the gateway subnet. Associating a network security group to this subnet may cause your VPN gateway to stop functioning as expected. For more information about network security groups, see What is a network security group? Example: <VirtualNetworkSites> <VirtualNetworkSite name="ClassicVNet" Location="West US"> <AddressSpace> <AddressPrefix>10.0.0.0/24</AddressPrefix> </AddressSpace> <Subnets> <Subnet name="Subnet-1"> <AddressPrefix>10.0.0.0/27</AddressPrefix> </Subnet> <Subnet name="GatewaySubnet"> <AddressPrefix>10.0.0.32/29</AddressPrefix> </Subnet> </Subnets> </VirtualNetworkSite> </VirtualNetworkSites> Part 3 - Add the local network site The local network site you add represents the RM VNet to which you want to connect. Add a LocalNetworkSites element to the file if one doesn't already exist. At this point in the configuration, the VPNGatewayAddress can be any valid public IP address because we haven't yet created the gateway for the Resource Manager VNet. Once we create the gateway, we replace this placeholder IP address with the correct public IP address that has been assigned to the RM gateway. <LocalNetworkSites> <LocalNetworkSite name="RMVNetLocal"> <AddressSpace> <AddressPrefix>192.168.0.0/16</AddressPrefix> </AddressSpace> <VPNGatewayAddress>13.68.210.16</VPNGatewayAddress> </LocalNetworkSite> </LocalNetworkSites> Part 4 - Associate the VNet with the local network site In this section, we specify the local network site that you want to connect the VNet to. In this case, it is the Resource Manager VNet that you referenced earlier. Make sure the names match. This step does not create a gateway. It specifies the local network that the gateway will connect to. <Gateway> <ConnectionsToLocalNetwork> <LocalNetworkSiteRef name="RMVNetLocal"> <Connection type="IPsec" /> </LocalNetworkSiteRef> </ConnectionsToLocalNetwork> </Gateway> Part 5 - Save the file and upload Save the file, then import it to Azure by running the following command. Make sure you change the file path as necessary for your environment. Set-AzureVNetConfig -ConfigurationPath C:\AzureNet\NetworkConfig.xml You will see a similar result showing that the import succeeded. OperationDescription -------------------Set-AzureVNetConfig OperationId ----------e0ee6e66-9167-cfa7-a746-7casb9 OperationStatus --------------Succeeded Part 6 - Create the gateway Before running this example, refer to the network configuration file that you downloaded for the exact names that Azure expects to see. The network configuration file contains the values for your classic virtual networks. 
Sometimes the names for classic VNets are changed in the network configuration file when creating classic VNet settings in the Azure portal due to the differences in the deployment models. For example, if you used the Azure portal to create a classic VNet named 'Classic VNet' and created it in a resource group named 'ClassicRG', the name that is contained in the network configuration file is converted to 'Group ClassicRG Classic VNet'. When specifying the name of a VNet that contains spaces, use quotation marks around the value. Use the following example to create a dynamic routing gateway: New-AzureVNetGateway -VNetName ClassicVNet -GatewayType DynamicRouting You can check the status of the gateway by using the Get-AzureVNetGateway cmdlet. Section 2: Configure the RM VNet gateway To create a VPN gateway for the RM VNet, follow the following instructions. Don't start the steps until after you have retrieved the public IP address for the classic VNet's gateway. 1. Log in to your Azure account in the PowerShell console. The following cmdlet prompts you for the login credentials for your Azure Account. After logging in, your account settings are downloaded so that they are available to Azure PowerShell. Login-AzureRmAccount Get a list of your Azure subscriptions if you have more than one subscription. Get-AzureRmSubscription Specify the subscription that you want to use. Select-AzureRmSubscription -SubscriptionName "Name of subscription" 2. Create a local network gateway. In a virtual network, the local network gateway typically refers to your onpremises location. In this case, the local network gateway refers to your Classic VNet. Give it a name by which Azure can refer to it, and also specify the address space prefix. Azure uses the IP address prefix you specify to identify which traffic to send to your on-premises location. If you need to adjust the information here later, before creating your gateway, you can modify the values and run the sample again. -Name is the name you want to assign to refer to the local network gateway. -AddressPrefix is the Address Space for your classic VNet. -GatewayIpAddress is the public IP address of the classic VNet's gateway. Be sure to change the following sample to reflect the correct IP address. New-AzureRmLocalNetworkGateway -Name ClassicVNetLocal ` -Location "West US" -AddressPrefix "10.0.0.0/24" ` -GatewayIpAddress "n.n.n.n" -ResourceGroupName RG1 3. Request a public IP address to be allocated to the virtual network gateway for the Resource Manager VNet. You can't specify the IP address that you want to use. The IP address is dynamically allocated to the virtual network gateway. However, this does not mean the IP address changes. The only time the virtual network gateway IP address changes is when the gateway is deleted and recreated. It doesn't change across resizing, resetting, or other internal maintenance/upgrades of the gateway. In this step, we also set a variable that is used in a later step. $ipaddress = New-AzureRmPublicIpAddress -Name gwpip ` -ResourceGroupName RG1 -Location 'EastUS' ` -AllocationMethod Dynamic 4. Verify that your virtual network has a gateway subnet. If no gateway subnet exists, add one. Make sure the gateway subnet is named GatewaySubnet. 5. Retrieve the subnet used for the gateway by running the following command. In this step, we also set a variable to be used in the next step. -Name is the name of your Resource Manager VNet. -ResourceGroupName is the resource group that the VNet is associated with. 
The gateway subnet must already exist for this VNet and must be named GatewaySubnet to work properly. $subnet = Get-AzureRmVirtualNetworkSubnetConfig -Name GatewaySubnet ` -VirtualNetwork (Get-AzureRmVirtualNetwork -Name RMVNet -ResourceGroupName RG1) 6. Create the gateway IP addressing configuration. The gateway configuration defines the subnet and the public IP address to use. Use the following sample to create your gateway configuration. In this step, the -SubnetId and -PublicIpAddressId parameters must be passed the id property from the subnet and IP address objects, respectively. You can't use a simple string. These variables are set in the step to request a public IP and the step to retrieve the subnet. $gwipconfig = New-AzureRmVirtualNetworkGatewayIpConfig ` -Name gwipconfig -SubnetId $subnet.id ` -PublicIpAddressId $ipaddress.id 7. Create the Resource Manager virtual network gateway by running the following command. The -VpnType must be RouteBased. It can take 45 minutes or more for the gateway to create. New-AzureRmVirtualNetworkGateway -Name RMGateway -ResourceGroupName RG1 ` -Location "EastUS" -GatewaySKU Standard -GatewayType Vpn ` -IpConfigurations $gwipconfig ` -EnableBgp $false -VpnType RouteBased 8. Copy the public IP address once the VPN gateway has been created. You use it when you configure the local network settings for your Classic VNet. You can use the following cmdlet to retrieve the public IP address. The public IP address is listed in the return as IpAddress. Get-AzureRmPublicIpAddress -Name gwpip -ResourceGroupName RG1 Section 3: Modify the classic VNet local site settings In this section, you work with the classic VNet. You replace the placeholder IP address that you used when specifying the local site settings with the public IP address of the Resource Manager VNet gateway. 1. Export the network configuration file. Get-AzureVNetConfig -ExportToFile C:\AzureNet\NetworkConfig.xml 2. Using a text editor, modify the value for VPNGatewayAddress. Replace the placeholder IP address with the public IP address of the Resource Manager gateway and then save the changes. <VPNGatewayAddress>13.68.210.16</VPNGatewayAddress> 3. Import the modified network configuration file to Azure. Set-AzureVNetConfig -ConfigurationPath C:\AzureNet\NetworkConfig.xml Section 4: Create a connection between the gateways Creating a connection between the gateways requires PowerShell. You may need to add your Azure account to use the classic version of the PowerShell cmdlets. To do so, use Add-AzureAccount. 1. In the PowerShell console, set your shared key. Before running the cmdlets, refer to the network configuration file that you downloaded for the exact names that Azure expects to see. When specifying the name of a VNet that contains spaces, use single quotation marks around the value. In the following example, -VNetName is the name of the classic VNet and -LocalNetworkSiteName is the name you specified for the local network site. The -SharedKey is a value that you generate and specify. In the example, we used 'abc123', but you can generate and use something more complex. The important thing is that the value you specify here must be the same value that you specify in the next step when you create your connection. The return should show Status: Successful. Set-AzureVNetGatewayKey -VNetName ClassicVNet ` -LocalNetworkSiteName RMVNetLocal -SharedKey abc123 2. Create the VPN connection by running the following commands: Set the variables.
$vnet01gateway = Get-AzureRMLocalNetworkGateway -Name ClassicVNetLocal -ResourceGroupName RG1
$vnet02gateway = Get-AzureRmVirtualNetworkGateway -Name RMGateway -ResourceGroupName RG1
Create the connection. Notice that the -ConnectionType is IPsec, not Vnet2Vnet. New-AzureRmVirtualNetworkGatewayConnection -Name RM-Classic -ResourceGroupName RG1 ` -Location "East US" -VirtualNetworkGateway1 $vnet02gateway -LocalNetworkGateway2 $vnet01gateway ` -ConnectionType IPsec -RoutingWeight 10 -SharedKey 'abc123' Section 5: Verify your connections To verify the connection from your classic VNet to your Resource Manager VNet PowerShell You can use the Get-AzureVNetConnection cmdlet to verify the connection for a classic virtual network gateway. 1. Use the following cmdlet example, configuring the values to match your own. The name of the virtual network must be in quotes if it contains spaces. Get-AzureVNetConnection "Group ClassicRG ClassicVNet" 2. After the cmdlet has finished, view the values. In the example below, the Connectivity State shows as 'Connected' and you can see ingress and egress bytes.
ConnectivityState         : Connected
EgressBytesTransferred    : 181664
IngressBytesTransferred   : 182080
LastConnectionEstablished : 1/7/2016 12:40:54 AM
LastEventID               : 24401
LastEventMessage          : The connectivity state for the local network site 'RMVNetLocal' changed from Connecting to Connected.
LastEventTimeStamp        : 1/7/2016 12:40:54 AM
LocalNetworkSiteName      : RMVNetLocal
Azure portal In the Azure portal, you can view the connection status for a classic VNet VPN Gateway by navigating to the connection. The following steps show one way to navigate to your connection and verify. 1. In the Azure portal, click All resources and navigate to your classic virtual network. 2. On the virtual network blade, click Overview to access the VPN connections section of the blade. 3. On the VPN connections graphic, click the site. 4. On the Site-to-site VPN connections blade, view the information about your site. 5. To view more information about the connection, click the name of the connection to open the Site-to-site VPN Connection blade. To verify the connection from your Resource Manager VNet to your classic VNet PowerShell You can verify that your connection succeeded by using the 'Get-AzureRmVirtualNetworkGatewayConnection' cmdlet, with or without '-Debug'. 1. Use the following cmdlet example, configuring the values to match your own. If prompted, select 'A' in order to run 'All'. In the example, '-Name' refers to the name of the connection that you created and want to test. Get-AzureRmVirtualNetworkGatewayConnection -Name MyGWConnection -ResourceGroupName MyRG 2. After the cmdlet has finished, view the values. In the example below, the connection status shows as 'Connected' and you can see ingress and egress bytes.
"connectionStatus": "Connected",
"ingressBytesTransferred": 33509044,
"egressBytesTransferred": 4142431
Azure portal In the Azure portal, you can view the connection status of a Resource Manager VPN Gateway by navigating to the connection. The following steps show one way to navigate to your connection and verify. 1. In the Azure portal, click All resources and navigate to your virtual network gateway. 2. On the blade for your virtual network gateway, click Connections. You can see the status of each connection. 3. Click the name of the connection that you want to verify to open Essentials. In Essentials, you can view more information about your connection.
The Status is 'Succeeded' and 'Connected' when you have made a successful connection. VNet-to-VNet FAQ Does Azure charge for traffic between VNets? VNet-to-VNet traffic within the same region is free for both directions. Cross region VNet-to-VNet egress traffic is charged with the outbound inter-VNet data transfer rates based on the source regions. Refer to the pricing page for details. Does VNet-to -VNet traffic travel across the Internet? No. VNet-to-VNet traffic travels across the Microsoft Azure backbone, not the Internet. Is VNet-to -VNet traffic secure? Yes, it is protected by IPsec/IKE encryption. Do I need a VPN device to connect VNets together? No. Connecting multiple Azure virtual networks together doesn't require a VPN device unless cross-premises connectivity is required. Do my VNets need to be in the same region? No. The virtual networks can be in the same or different Azure regions (locations). Can I use VNet-to -VNet along with multi-site connections? Yes. Virtual network connectivity can be used simultaneously with multi-site VPNs. How many on-premises sites and virtual networks can one virtual network connect to? See Gateway requirements table. Can I use VNet-to -VNet to connect VMs or cloud services outside of a VNet? No. VNet-to-VNet supports connecting virtual networks. It does not support connecting virtual machines or cloud services that are not in a virtual network. Can a cloud service or a load balancing endpoint span VNets? No. A cloud service or a load balancing endpoint can't span across virtual networks, even if they are connected together. Can I used a PolicyBased VPN type for VNet-to -VNet or Multi-Site connections? No. VNet-to-VNet and Multi-Site connections require Azure VPN gateways with RouteBased (previously called Dynamic Routing) VPN types. Can I connect a VNet with a RouteBased VPN Type to another VNet with a PolicyBased VPN type? No, both virtual networks MUST be using route-based (previously called Dynamic Routing) VPNs. Do VPN tunnels share bandwidth? Yes. All VPN tunnels of the virtual network share the available bandwidth on the Azure VPN gateway and the same VPN gateway uptime SLA in Azure. Are redundant tunnels supported? Redundant tunnels between a pair of virtual networks are supported when one virtual network gateway is configured as active-active. Can I have overlapping address spaces for VNet-to -VNet configurations? No. You can't have overlapping IP address ranges. Can there be overlapping address spaces among connected virtual networks and on-premises local sites? No. You can't have overlapping IP address ranges. Get started creating an Internet facing load balancer (classic) in the Azure classic portal 1/24/2017 • 3 min to read • Edit Online An Azure load balancer is a Layer-4 (TCP, UDP) load balancer. The load balancer provides high availability by distributing incoming traffic among healthy service instances in cloud services or virtual machines in a load balancer set. Azure Load Balancer can also present those services on multiple ports, multiple IP addresses, or both. You can configure a load balancer to: Load balance incoming Internet traffic to virtual machines (VMs). We refer to a load balancer in this scenario as an Internet-facing load balancer. Load balance traffic between VMs in a virtual network (VNet), between VMs in cloud services, or between onpremises computers and VMs in a cross-premises virtual network. We refer to a load balancer in this scenario as an internal load balancer (ILB). 
Forward external traffic to a specific VM instance. IMPORTANT Before you work with Azure resources, it's important to understand that Azure currently has two deployment models: Azure Resource Manager and classic. Make sure you understand deployment models and tools before you work with any Azure resource. You can view the documentation for different tools by clicking the tabs at the top of this article. This article covers the classic deployment model. You can also Learn how to create an Internet facing load balancer using Azure Resource Manager. The following tasks will be done in this scenario: Create a load balancer that receives network traffic on port 80 and send load-balanced traffic to virtual machines "web1" and "web2" Create NAT rules for remote desktop access/SSH for virtual machines behind the load balancer Create health probes Set up an Internet-facing load balancer for virtual machines In order to load balance network traffic from the Internet across the virtual machines of a cloud service, you must create a load-balanced set. This procedure assumes that you have already created the virtual machines and that they are all within the same cloud service. To configure a load-balanced set for virtual machines 1. In the Azure classic portal, click Virtual Machines, and then click the name of a virtual machine in the loadbalanced set. 2. Click Endpoints, and then click Add. 3. On the Add an endpoint to a virtual machine page, click the right arrow. 4. On the Specify the details of the endpoint page: In Name, type a name for the endpoint or select the name from the list of predefined endpoints for common protocols. In Protocol, select the protocol required by the type of endpoint, either TCP or UDP, as needed. In Public Port and Private Port, type the port numbers that you want the virtual machine to use, as needed. You can use the private port and firewall rules on the virtual machine to redirect traffic in a way that is appropriate for your application. The private port can be the same as the public port. For example, for an endpoint for web (HTTP) traffic, you could assign port 80 to both the public and private port. 5. Select Create a load-balanced set, and then click the right arrow. 6. On the Configure the load-balanced set page, type a name for the load-balanced set, and then assign the values for probe behavior of the Azure Load Balancer. The Load Balancer uses probes to determine if the virtual machines in the load-balanced set are available to receive incoming traffic. 7. Click the check mark to create the load-balanced endpoint. You will see Yes in the Load-balanced set name column of the Endpoints page for the virtual machine. 8. In the portal, click Virtual Machines, click the name of an additional virtual machine in the load-balanced set, click Endpoints, and then click Add. 9. On the Add an endpoint to a virtual machine page, click Add endpoint to an existing load-balanced set, select the name of the load-balanced set, and then click the right arrow. 10. On the Specify the details of the endpoint page, type a name for the endpoint, and then click the check mark. For the additional virtual machines in the load-balanced set, repeat steps 8-10. 
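The same load-balanced set can be built from classic Azure PowerShell by adding an endpoint with identical -LBSetName and probe settings on each VM in the cloud service. This is a minimal sketch; the cloud service, endpoint, and load-balanced set names are hypothetical, while "web1" and "web2" follow the scenario above.
# Repeat this for each VM behind the load balancer (for example, "web1" and "web2"),
# keeping -LBSetName and the probe settings identical so the VMs join the same set.
Get-AzureVM -ServiceName "MyCloudService" -Name "web1" |
    Add-AzureEndpoint -Name "HttpIn" -Protocol tcp -PublicPort 80 -LocalPort 80 `
        -LBSetName "lbweb" -ProbeProtocol http -ProbePort 80 -ProbePath "/" |
    Update-AzureVM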
Next steps Get started configuring an internal load balancer Configure a load balancer distribution mode Configure idle TCP timeout settings for your load balancer How to create NSGs (classic) in PowerShell 4/27/2017 • 5 min to read • Edit Online You can use an NSG to control traffic to one or more virtual machines (VMs), role instances, network adapters (NICs), or subnets in your virtual network. An NSG contains access control rules that allow or deny traffic based on traffic direction, protocol, source address and port, and destination address and port. The rules of an NSG can be changed at any time, and changes are applied to all associated instances. For more information about NSGs, visit what is an NSG. IMPORTANT Before you work with Azure resources, it's important to understand that Azure currently has two deployment models: Azure Resource Manager and classic. Make sure you understand deployment models and tools before you work with any Azure resource. You can view the documentation for different tools by clicking the tabs at the top of this article. This article covers the classic deployment model. You can also create NSGs in the Resource Manager deployment model. Scenario To better illustrate how to create NSGs, this document will use the scenario below. In this scenario you will create an NSG for each subnet in the TestVNet virtual network, as described below: NSG-FrontEnd. The front end NSG will be applied to the FrontEnd subnet, and contain two rules: rdp-rule. This rule will allow RDP traffic to the FrontEnd subnet. web-rule. This rule will allow HTTP traffic to the FrontEnd subnet. NSG-BackEnd. The back end NSG will be applied to the BackEnd subnet, and contain two rules: sql-rule. This rule allows SQL traffic only from the FrontEnd subnet. web-rule. This rule denies all internet bound traffic from the BackEnd subnet. The combination of these rules create a DMZ-like scenario, where the back end subnet can only receive incoming traffic for SQL from the front end subnet, and has no access to the Internet, while the front end subnet can communicate with the Internet, and receive incoming HTTP requests only. The sample PowerShell commands below expect a simple environment already created based on the scenario above. If you want to run the commands as they are displayed in this document, first build the test environment by creating a VNet. How to create the NSG for the front-end subnet To create an NSG named named NSG-FrontEnd based on the scenario above, follow the steps below: 1. If you have never used Azure PowerShell, see How to Install and Configure Azure PowerShell and follow the instructions all the way to the end to sign into Azure and select your subscription. 2. Create a network security group named NSG-FrontEnd. New-AzureNetworkSecurityGroup -Name "NSG-FrontEnd" -Location uswest ` -Label "Front end subnet NSG" Expected output: Name Location NSG-FrontEnd West US Label Front end subnet NSG 3. Create a security rule allowing access from the Internet to port 3389. 
Get-AzureNetworkSecurityGroup -Name "NSG-FrontEnd" ` | Set-AzureNetworkSecurityRule -Name rdp-rule ` -Action Allow -Protocol TCP -Type Inbound -Priority 100 ` -SourceAddressPrefix Internet -SourcePortRange '*' ` -DestinationAddressPrefix '*' -DestinationPortRange '3389' Expected output:
Name     : NSG-FrontEnd
Location : Central US
Label    : Front end subnet NSG
Rules    :
Type: Inbound
Name                               Protocol  Priority  Action  Source Prefix       Source Port  Destination Prefix  Destination Port
rdp-rule                           TCP       100       Allow   INTERNET            *            *                   3389
ALLOW VNET INBOUND                 *         65000     Allow   VIRTUAL_NETWORK     *            VIRTUAL_NETWORK     *
ALLOW AZURE LOAD BALANCER INBOUND  *         65001     Allow   AZURE_LOADBALANCER  *            *                   *
DENY ALL INBOUND                   *         65500     Deny    *                   *            *                   *
Type: Outbound
Name                     Protocol  Priority  Action  Source Prefix    Source Port  Destination Prefix  Destination Port
ALLOW VNET OUTBOUND      *         65000     Allow   VIRTUAL_NETWORK  *            VIRTUAL_NETWORK     *
ALLOW INTERNET OUTBOUND  *         65001     Allow   *                *            INTERNET            *
DENY ALL OUTBOUND        *         65500     Deny    *                *            *                   *
4. Create a security rule allowing access from the Internet to port 80. Get-AzureNetworkSecurityGroup -Name "NSG-FrontEnd" ` | Set-AzureNetworkSecurityRule -Name web-rule ` -Action Allow -Protocol TCP -Type Inbound -Priority 200 ` -SourceAddressPrefix Internet -SourcePortRange '*' ` -DestinationAddressPrefix '*' -DestinationPortRange '80' Expected output:
Name     : NSG-FrontEnd
Location : Central US
Label    : Front end subnet NSG
Rules    :
Type: Inbound
Name                               Protocol  Priority  Action  Source Prefix       Source Port  Destination Prefix  Destination Port
rdp-rule                           TCP       100       Allow   INTERNET            *            *                   3389
web-rule                           TCP       200       Allow   INTERNET            *            *                   80
ALLOW VNET INBOUND                 *         65000     Allow   VIRTUAL_NETWORK     *            VIRTUAL_NETWORK     *
ALLOW AZURE LOAD BALANCER INBOUND  *         65001     Allow   AZURE_LOADBALANCER  *            *                   *
DENY ALL INBOUND                   *         65500     Deny    *                   *            *                   *
Type: Outbound
Name                     Protocol  Priority  Action  Source Prefix    Source Port  Destination Prefix  Destination Port
ALLOW VNET OUTBOUND      *         65000     Allow   VIRTUAL_NETWORK  *            VIRTUAL_NETWORK     *
ALLOW INTERNET OUTBOUND  *         65001     Allow   *                *            INTERNET            *
DENY ALL OUTBOUND        *         65500     Deny    *                *            *                   *
How to create the NSG for the back end subnet 1. Create a network security group named NSG-BackEnd. New-AzureNetworkSecurityGroup -Name "NSG-BackEnd" -Location uswest ` -Label "Back end subnet NSG" Expected output:
Name         Location  Label
NSG-BackEnd  West US   Back end subnet NSG
2. Create a security rule allowing access from the front end subnet to port 1433 (default port used by SQL Server).
Get-AzureNetworkSecurityGroup -Name "NSG-BackEnd" ` | Set-AzureNetworkSecurityRule -Name fe-rule ` -Action Allow -Protocol TCP -Type Inbound -Priority 100 ` -SourceAddressPrefix 192.168.1.0/24 -SourcePortRange '*' ` -DestinationAddressPrefix '*' -DestinationPortRange '1433' Expected output:
Name     : NSG-BackEnd
Location : Central US
Label    : Back end subnet NSG
Rules    :
Type: Inbound
Name                               Protocol  Priority  Action  Source Prefix       Source Port  Destination Prefix  Destination Port
fe-rule                            TCP       100       Allow   192.168.1.0/24      *            *                   1433
ALLOW VNET INBOUND                 *         65000     Allow   VIRTUAL_NETWORK     *            VIRTUAL_NETWORK     *
ALLOW AZURE LOAD BALANCER INBOUND  *         65001     Allow   AZURE_LOADBALANCER  *            *                   *
DENY ALL INBOUND                   *         65500     Deny    *                   *            *                   *
Type: Outbound
Name                     Protocol  Priority  Action  Source Prefix    Source Port  Destination Prefix  Destination Port
ALLOW VNET OUTBOUND      *         65000     Allow   VIRTUAL_NETWORK  *            VIRTUAL_NETWORK     *
ALLOW INTERNET OUTBOUND  *         65001     Allow   *                *            INTERNET            *
DENY ALL OUTBOUND        *         65500     Deny    *                *            *                   *
3. Create a security rule blocking access from the subnet to the Internet. Get-AzureNetworkSecurityGroup -Name "NSG-BackEnd" ` | Set-AzureNetworkSecurityRule -Name block-internet ` -Action Deny -Protocol '*' -Type Outbound -Priority 200 ` -SourceAddressPrefix '*' -SourcePortRange '*' ` -DestinationAddressPrefix Internet -DestinationPortRange '*' Expected output:
Name     : NSG-BackEnd
Location : Central US
Label    : Back end subnet NSG
Rules    :
Type: Inbound
Name                               Protocol  Priority  Action  Source Prefix       Source Port  Destination Prefix  Destination Port
fe-rule                            TCP       100       Allow   192.168.1.0/24      *            *                   1433
ALLOW VNET INBOUND                 *         65000     Allow   VIRTUAL_NETWORK     *            VIRTUAL_NETWORK     *
ALLOW AZURE LOAD BALANCER INBOUND  *         65001     Allow   AZURE_LOADBALANCER  *            *                   *
DENY ALL INBOUND                   *         65500     Deny    *                   *            *                   *
Type: Outbound
Name                     Protocol  Priority  Action  Source Prefix    Source Port  Destination Prefix  Destination Port
block-internet           *         200       Deny    *                *            INTERNET            *
ALLOW VNET OUTBOUND      *         65000     Allow   VIRTUAL_NETWORK  *            VIRTUAL_NETWORK     *
ALLOW INTERNET OUTBOUND  *         65001     Allow   *                *            INTERNET            *
DENY ALL OUTBOUND        *         65500     Deny    *                *            *                   *
Create a custom virtual machine running Windows using the classic deployment model 3/27/2017 • 3 min to read • Edit Online IMPORTANT Azure has two different deployment models for creating and working with resources: Resource Manager and Classic. This article covers using the Classic deployment model. Microsoft recommends that most new deployments use the Resource Manager model. A custom virtual machine simply means a virtual machine that you create using a Featured app from the Marketplace because it does much of the work for you. Yet, you can still make configuration choices that include the following items: Connecting the virtual machine to a virtual network. Installing the Azure Virtual Machine Agent and Azure Virtual Machine Extensions, such as for antimalware. Adding the virtual machine to existing cloud services. Adding the virtual machine to an existing Storage account. Adding the virtual machine to an availability set. IMPORTANT If you want your virtual machine to use a virtual network, make sure that you specify the virtual network when you create the virtual machine. Two benefits of using a virtual network are connecting directly to the virtual machine and setting up cross-premises connections. A virtual machine can be configured to join a virtual network only when you create the virtual machine.
For details on virtual networks, see Azure Virtual Network overview. To create the virtual machine 1. Sign in to the Azure portal. 2. Starting in the upper left, click New > Compute > Windows Server 2016 Datacenter. 3. On the Windows Server 2016 Datacenter, select the Classic deployment model. Click Create. 1. Basics blade The Basics blade requests administrative information for the virtual machine. 1. Enter a Name for the virtual machine. In the example, HeroVM is the name of the virtual machine. The name must be 1-15 characters long and it cannot contain special characters. 2. Enter a User name and a strong Password that are used to create a local account on the VM. The local account is used to sign in to and manage the VM. In the example, azureuser is the user name. The password must be 8-123 characters long and meet three out of the four following complexity requirements: one lower case character, one upper case character, one number, and one special character. See more about username and password requirements. 3. The Subscription is optional. One common setting is "Pay-As-You-Go". 4. Select an existing Resource group or type the name for a new one. In the example, HeroVMRG is the name of the resource group. 5. Select an Azure datacenter Location where you want the VM to run. In the example, East US is the location. 6. When you are done, click Next to continue to the next blade. 2. Size blade The Size blade identifies the configuration details of the VM, and lists various choices that include OS, number of processors, disk storage type, and estimated monthly usage costs. Choose a VM size, and then click Select to continue. In this example, DS1_V2 Standard is the VM size. 3. Settings blade The Settings blade requests storage and network options. You can accept the default settings. Azure creates appropriate entries where necessary. If you selected a virtual machine size that supports it, you can try Azure Premium Storage by selecting Premium (SSD) in Disk type. When you're done making changes, click OK. 4. Summary blade The Summary blade lists the settings specified in the previous blades. Click OK when you're ready to make the image. After the virtual machine is created, the portal lists the new virtual machine under All resources, and displays a tile of the virtual machine on the dashboard. The corresponding cloud service and storage account also are created and listed. Both the virtual machine and cloud service are started automatically and their status is listed as Running. Next steps You can also create a custom virtual machine running Linux. Create a Windows virtual machine with PowerShell and the classic deployment model 4/27/2017 • 9 min to read • Edit Online IMPORTANT Azure has two different deployment models for creating and working with resources: Resource Manager and Classic. This article covers using the Classic deployment model. Microsoft recommends that most new deployments use the Resource Manager model. Learn how to perform these steps using the Resource Manager model. These steps show you how to customize a set of Azure PowerShell commands that create and preconfigure a Windows-based Azure virtual machine by using a building block approach. You can use this process to quickly create a command set for a new Windows-based virtual machine and expand an existing deployment or to create multiple command sets that quickly build out a custom dev/test or IT pro environment. These steps follow a fill-in-the-blanks approach for creating Azure PowerShell command sets. 
This approach can be useful if you are new to PowerShell or you just want to know what values to specify for successful configuration. Advanced PowerShell users can take the commands and substitute their own values for the variables (the lines beginning with "$"). If you haven't done so already, use the instructions in How to install and configure Azure PowerShell to install Azure PowerShell on your local computer. Then, open a Windows PowerShell command prompt. Step 1: Add your account 1. At the PowerShell prompt, type Add-AzureAccount and press Enter. 2. Type in the email address associated with your Azure subscription and click Continue. 3. Type in the password for your account. 4. Click Sign in. Step 2: Set your subscription and storage account Set your Azure subscription and storage account by running these commands at the Windows PowerShell command prompt. Replace everything within the quotes, including the < and > characters, with the correct names. $subscr="<subscription name>" $staccount="<storage account name>" Select-AzureSubscription -SubscriptionName $subscr -Current Set-AzureSubscription -SubscriptionName $subscr -CurrentStorageAccountName $staccount You can get the correct subscription name from the SubscriptionName property of the output of the Get-AzureSubscription command. You can get the correct storage account name from the Label property of the output of the Get-AzureStorageAccount command after you run the Select-AzureSubscription command. Step 3: Determine the ImageFamily Next, you need to determine the ImageFamily or Label value for the specific image corresponding to the Azure virtual machine you want to create. You can get the list of available ImageFamily values with this command. Get-AzureVMImage | select ImageFamily -Unique Here are some examples of ImageFamily values for Windows-based computers: Windows Server 2012 R2 Datacenter Windows Server 2008 R2 SP1 Windows Server 2016 Technical Preview 4 SQL Server 2012 SP1 Enterprise on Windows Server 2012 If you find the image you are looking for, open a fresh instance of the text editor of your choice or the PowerShell Integrated Scripting Environment (ISE). Copy the following into the new text file or the PowerShell ISE, substituting the ImageFamily value. $family="<ImageFamily value>" $image=Get-AzureVMImage | where { $_.ImageFamily -eq $family } | sort PublishedDate -Descending | select -ExpandProperty ImageName -First 1 In some cases, the image name is in the Label property instead of the ImageFamily value. If you didn't find the image that you are looking for using the ImageFamily property, list the images by their Label property with this command. Get-AzureVMImage | select Label -Unique If you find the right image with this command, open a fresh instance of the text editor of your choice or the PowerShell ISE. Copy the following into the new text file or the PowerShell ISE, substituting the Label value. $label="<Label value>" $image = Get-AzureVMImage | where { $_.Label -eq $label } | sort PublishedDate -Descending | select -ExpandProperty ImageName -First 1 Step 4: Build your command set Build the rest of your command set by copying the appropriate set of blocks below into your new text file or the ISE and then filling in the variable values and removing the < and > characters. See the two examples at the end of this article for an idea of the final result. Start your command set by choosing one of these two command blocks (required). Option 1: Specify a virtual machine name and a size.
$vmname="<machine name>" $vmsize="<Specify one: Small, Medium, Large, ExtraLarge, A5, A6, A7, A8, A9>" $vm1=New-AzureVMConfig -Name $vmname -InstanceSize $vmsize -ImageName $image Option 2: Specify a name, size, and availability set name. $vmname="<machine name>" $vmsize="<Specify one: Small, Medium, Large, ExtraLarge, A5, A6, A7, A8, A9>" $availset="<set name>" $vm1=New-AzureVMConfig -Name $vmname -InstanceSize $vmsize -ImageName $image -AvailabilitySetName $availset For the InstanceSize values for D-, DS-, or G-series virtual machines, see Virtual Machine and Cloud Service Sizes for Azure. NOTE If you have an Enterprise Agreement with Software Assurance, and intend to take advantage of the Windows Server Hybrid Use Benefit, add the -LicenseType parameter to the New-AzureVMConfig cmdlet, passing the value Windows_Server for the typical use case. Be sure you are using an image you have uploaded; you cannot use a standard image from the Gallery with the Hybrid Use Benefit. Optionally, for a standalone Windows computer, specify the local administrator account and password. $cred=Get-Credential -Message "Type the name and password of the local administrator account." $vm1 | Add-AzureProvisioningConfig -Windows -AdminUsername $cred.Username -Password $cred.GetNetworkCredential().Password Choose a strong password. To check its strength, see Password Checker: Using Strong Passwords. Optionally, to add the Windows computer to an existing Active Directory domain, specify the local administrator account and password, the domain, and the name and password of a domain account. $cred1=Get-Credential -Message "Type the name and password of the local administrator account." $cred2=Get-Credential -Message "Now type the name (not including the domain) and password of an account that has permission to add the machine to the domain." $domaindns="<FQDN of the domain that the machine is joining>" $domacctdomain="<domain of the account that has permission to add the machine to the domain>" $vm1 | Add-AzureProvisioningConfig -AdminUsername $cred1.Username -Password $cred1.GetNetworkCredential().Password -WindowsDomain -Domain $domacctdomain -DomainUserName $cred2.Username -DomainPassword $cred2.GetNetworkCredential().Password -JoinDomain $domaindns For additional pre-configuration options for Windows-based virtual machines, see the syntax for the Windows and WindowsDomain parameter sets in Add-AzureProvisioningConfig. Optionally, assign the virtual machine a specific IP address, known as a static DIP. $vm1 | Set-AzureStaticVNetIP -IPAddress <IP address> You can verify that a specific IP address is available with: Test-AzureStaticVNetIP -VNetName <VNet name> -IPAddress <IP address> Optionally, assign the virtual machine to a specific subnet in an Azure virtual network. $vm1 | Set-AzureSubnet -SubnetNames "<name of the subnet>" Optionally, add a single data disk to the virtual machine. $disksize=<size of the disk in GB> $disklabel="<the label on the disk>" $lun=<Logical Unit Number (LUN) of the disk> $hcaching="<Specify one: ReadOnly, ReadWrite, None>" $vm1 | Add-AzureDataDisk -CreateNew -DiskSizeInGB $disksize -DiskLabel $disklabel -LUN $lun -HostCaching $hcaching For an Active Directory domain controller, set $hcaching to "None". Optionally, add the virtual machine to an existing load-balanced set for external traffic. 
$protocol="<Specify one: tcp, udp>" $localport=<port number of the internal port> $pubport=<port number of the external port> $endpointname="<name of the endpoint>" $lbsetname="<name of the existing load-balanced set>" $probeprotocol="<Specify one: tcp, http>" $probeport=<TCP or HTTP port number of probe traffic> $probepath="<URL path for probe traffic>" $vm1 | Add-AzureEndpoint -Name $endpointname -Protocol $protocol -LocalPort $localport -PublicPort $pubport -LBSetName $lbsetname -ProbeProtocol $probeprotocol -ProbePort $probeport -ProbePath $probepath Finally, choose one of these required command blocks for creating the virtual machine. Option 1: Create the virtual machine in an existing cloud service. New-AzureVM -ServiceName "<short name of the cloud service>" -VMs $vm1 The short name of the cloud service is the name that appears in the list of Cloud Services in the Azure classic portal or in the list of Resource Groups in the Azure portal. Option 2: Create the virtual machine in an existing cloud service and virtual network. $svcname="<short name of the cloud service>" $vnetname="<name of the virtual network>" New-AzureVM -ServiceName $svcname -VMs $vm1 -VNetName $vnetname Step 5: Run your command set Review the Azure PowerShell command set you built in your text editor or the PowerShell ISE, which consists of multiple blocks of commands from step 4. Ensure that you have specified all the needed variables and that they have the correct values. Also make sure that you have removed all the < and > characters. If you are using a text editor, copy the command set to the clipboard and then right-click your open Windows PowerShell command prompt. This issues the command set as a series of PowerShell commands and creates your Azure virtual machine. Alternately, run the command set in the PowerShell ISE. If you will be creating this virtual machine again or a similar one, you can: Save this command set as a PowerShell script file (*.ps1). Save this command set as an Azure Automation runbook in the Automation section of the Azure classic portal. Examples Here are two examples of using the steps above to build Azure PowerShell command sets that create Windows-based Azure virtual machines. Example 1 I need a PowerShell command set to create the initial virtual machine for an Active Directory domain controller that: Uses the Windows Server 2012 R2 Datacenter image. Has the name AZDC1. Is a standalone computer. Has an additional data disk of 20 GB. Has the static IP address 192.168.244.4. Is in the BackEnd subnet of the AZDatacenter virtual network. Is in the Azure-TailspinToys cloud service. Here is the corresponding Azure PowerShell command set to create this virtual machine, with blank lines between each block for readability. $family="Windows Server 2012 R2 Datacenter" $image=Get-AzureVMImage | where { $_.ImageFamily -eq $family } | sort PublishedDate -Descending | select -ExpandProperty ImageName -First 1 $vmname="AZDC1" $vmsize="Medium" $vm1=New-AzureVMConfig -Name $vmname -InstanceSize $vmsize -ImageName $image $cred=Get-Credential -Message "Type the name and password of the local administrator account." 
$vm1 | Add-AzureProvisioningConfig -Windows -AdminUsername $cred.Username -Password $cred.GetNetworkCredential().Password $vm1 | Set-AzureSubnet -SubnetNames "BackEnd" $vm1 | Set-AzureStaticVNetIP -IPAddress 192.168.244.4 $disksize=20 $disklabel="DCData" $lun=0 $hcaching="None" $vm1 | Add-AzureDataDisk -CreateNew -DiskSizeInGB $disksize -DiskLabel $disklabel -LUN $lun -HostCaching $hcaching $svcname="Azure-TailspinToys" $vnetname="AZDatacenter" New-AzureVM -ServiceName $svcname -VMs $vm1 -VNetName $vnetname Example 2 I need a PowerShell command set to create a virtual machine for a line-of-business server that: Uses the Windows Server 2012 R2 Datacenter image. Has the name LOB1. Is a member of the corp.contoso.com domain. Has an additional data disk of 200 GB. Is in the FrontEnd subnet of the AZDatacenter virtual network. Is in the Azure-TailspinToys cloud service. Here is the corresponding Azure PowerShell command set to create this virtual machine. $family="Windows Server 2012 R2 Datacenter" $image=Get-AzureVMImage | where { $_.ImageFamily -eq $family } | sort PublishedDate -Descending | select -ExpandProperty ImageName -First 1 $vmname="LOB1" $vmsize="Large" $vm1=New-AzureVMConfig -Name $vmname -InstanceSize $vmsize -ImageName $image $cred1=Get-Credential -Message "Type the name and password of the local administrator account." $cred2=Get-Credential -Message "Now type the name (not including the domain) and password of an account that has permission to add the machine to the domain." $domaindns="corp.contoso.com" $domacctdomain="CORP" $vm1 | Add-AzureProvisioningConfig -AdminUsername $cred1.Username -Password $cred1.GetNetworkCredential().Password -WindowsDomain -Domain $domacctdomain -DomainUserName $cred2.Username -DomainPassword $cred2.GetNetworkCredential().Password -JoinDomain $domaindns $vm1 | Set-AzureSubnet -SubnetNames "FrontEnd" $disksize=200 $disklabel="LOBData" $lun=0 $hcaching="ReadWrite" $vm1 | Add-AzureDataDisk -CreateNew -DiskSizeInGB $disksize -DiskLabel $disklabel -LUN $lun -HostCaching $hcaching $svcname="Azure-TailspinToys" $vnetname="AZDatacenter" New-AzureVM -ServiceName $svcname -VMs $vm1 -VNetName $vnetname Next steps If you need an OS disk that is larger than 127 GB, you can expand the OS drive. Capture an image of an Azure Windows virtual machine created with the classic deployment model 3/27/2017 • 2 min to read • Edit Online IMPORTANT Azure has two different deployment models for creating and working with resources: Resource Manager and Classic. This article covers using the Classic deployment model. Microsoft recommends that most new deployments use the Resource Manager model. For Resource Manager model information, see Create a copy of a Windows VM running in Azure. This article shows you how to capture an Azure virtual machine running Windows so you can use it as an image to create other virtual machines. This image includes the operating system disk and any data disks that are attached to the virtual machine. It doesn't include networking configurations, so you'll need to set up network configurations when you create the other virtual machines that use the image. Azure stores the image under VM images (classic), a Compute service that is listed when you view all the Azure services. This is the same place where any images you've uploaded are stored. For details about images, see About images for virtual machines. 
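If you prefer to script the capture rather than click through the portal, the classic Azure PowerShell module also exposes the Save-AzureVMImage cmdlet. The following is a minimal sketch, assuming the VM has already been generalized with Sysprep and shut down; the cloud service, VM, and image names are placeholders that you replace with your own values.
# Placeholders: replace the cloud service, VM, and image names with your own values.
$svcname = "<cloud service name>"
$vmname = "<virtual machine name>"
Save-AzureVMImage -ServiceName $svcname -Name $vmname -ImageName "<image name>" -OSState Generalized
As with the portal capture described in this article, saving a generalized image removes the source virtual machine, so back it up first if you still need it.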
Before you begin These steps assume that you've already created an Azure virtual machine and configured the operating system, including attaching any data disks. If you haven't done this yet, see the following articles for information on creating and preparing the virtual machine: Create a virtual machine from an image How to attach a data disk to a virtual machine Make sure the server roles are supported with Sysprep. For more information, see Sysprep Support for Server Roles. WARNING This process deletes the original virtual machine after it's captured. Prior to capturing an image of an Azure virtual machine, it is recommended the target virtual machine be backed up. Azure virtual machines can be backed up using Azure Backup. For details, see Back up Azure virtual machines. Other solutions are available from certified partners. To find out what’s currently available, search the Azure Marketplace. Capture the virtual machine 1. In the Azure portal, Connect to the virtual machine. For instructions, see How to sign in to a virtual machine running Windows Server. 2. Open a Command Prompt window as an administrator. 3. Change the directory to %windir%\system32\sysprep , and then run sysprep.exe. 4. The System Preparation Tool dialog box appears. Do the following: In System Cleanup Action, select Enter System Out-of-Box Experience (OOBE) and make sure that Generalize is checked. For more information about using Sysprep, see How to Use Sysprep: An Introduction. In Shutdown Options, select Shutdown. Click OK. 5. Sysprep shuts down the virtual machine, which changes the status of the virtual machine in the Azure classic portal to Stopped. 6. In the Azure portal, click Virtual Machines (classic) and select the virtual machine you want to capture. The VM images (classic) group is listed under Compute when you view More services. 7. On the command bar, click Capture. The Capture the Virtual Machine dialog box appears. 8. In Image name, type a name for the new image. In Image label, type a label for the new image. 9. Click I've run Sysprep on the virtual machine. This checkbox refers to the actions with Sysprep in steps 3-5. An image must be generalized by running Sysprep before you add a Windows Server image to your set of custom images. 10. Once the capture completes, the new image becomes available in the Marketplace, in the Compute, VM images (classic) container. Next steps The image is ready to be used to create virtual machines. To do this, you'll create a virtual machine by selecting the More services menu item at the bottom of the services menu, then VM images (classic) in the Compute group. For instructions, see Create a virtual machine from an image. Create and upload a Windows Server VHD to Azure 4/27/2017 • 2 min to read • Edit Online This article shows you how to upload your own generalized VM image as a virtual hard disk (VHD) so you can use it to create virtual machines. For more details about disks and VHDs in Microsoft Azure, see About Disks and VHDs for Virtual Machines. IMPORTANT Azure has two different deployment models for creating and working with resources: Resource Manager and Classic. This article covers using the Classic deployment model. Microsoft recommends that most new deployments use the Resource Manager model. You can also upload a virtual machine using the Resource Manager model. Prerequisites This article assumes you have: An Azure subscription - If you don't have one, you can open an Azure account for free. 
Microsoft Azure PowerShell - You have the Microsoft Azure PowerShell module installed and configured to use your subscription. A .VHD file - A supported Windows operating system stored in a .vhd file and attached to a virtual machine. Check to see if the server roles running on the VHD are supported by Sysprep. For more information, see Sysprep Support for Server Roles. IMPORTANT The VHDX format is not supported in Microsoft Azure. You can convert the disk to VHD format using Hyper-V Manager or the Convert-VHD cmdlet. For details, see this blog post. Step 1: Prep the VHD Before you upload the VHD to Azure, it needs to be generalized by using the Sysprep tool. This prepares the VHD to be used as an image. For details about Sysprep, see How to Use Sysprep: An Introduction. Back up the VM before running Sysprep. From the virtual machine that the operating system was installed to, complete the following procedure: 1. Sign in to the operating system. 2. Open a command prompt window as an administrator. Change the directory to %windir%\system32\sysprep, and then run sysprep.exe. 3. The System Preparation Tool dialog box appears. 4. In the System Preparation Tool, select Enter System Out of Box Experience (OOBE) and make sure that Generalize is checked. 5. In Shutdown Options, select Shutdown. 6. Click OK. Step 2: Create a storage account and a container You need a storage account in Azure so you have a place to upload the .vhd file. This step shows you how to create an account, or get the info you need from an existing account. Replace the variables in < brackets > with your own information. 1. Login Add-AzureAccount 2. Set your Azure subscription. Select-AzureSubscription -SubscriptionName <SubscriptionName> 3. Create a new storage account. The name of the storage account should be unique and 3-24 characters long. The name can be any combination of letters and numbers. You also need to specify a location, like "East US". New-AzureStorageAccount -StorageAccountName <StorageAccountName> -Location <Location> 4. Set the new storage account as the default. Set-AzureSubscription -CurrentStorageAccountName <StorageAccountName> -SubscriptionName <SubscriptionName> 5. Create a new container. New-AzureStorageContainer -Name <ContainerName> -Permission Off Step 3: Upload the .vhd file Use the Add-AzureVhd cmdlet to upload the VHD. From the Azure PowerShell window you used in the previous step, type the following command and replace the variables in < brackets > with your own information. Add-AzureVhd -Destination "https://<StorageAccountName>.blob.core.windows.net/<ContainerName>/<vhdName>.vhd" -LocalFilePath <LocalPathtoVHDFile> Step 4: Add the image to your list of custom images Use the Add-AzureVMImage cmdlet to add the image to the list of your custom images. Add-AzureVMImage -ImageName <ImageName> -MediaLocation "https://<StorageAccountName>.blob.core.windows.net/<ContainerName>/<vhdName>.vhd" -OS "Windows" Next steps You can now create a custom VM using the image you uploaded. Create and manage Windows virtual machines in Visual Studio 3/24/2017 • 7 min to read • Edit Online You can create virtual machines in Azure by using Server Explorer in Visual Studio. Create an Azure virtual machine in Server Explorer While you can create a virtual machine in the Azure Management Portal, you can also create a virtual machine in Azure by using commands in Server Explorer. Virtual machines can be used, for example, to provide a front end behind a common load-balanced public endpoint. 
To create a new virtual machine 1. In Server Explorer, open the Azure node and click Virtual Machines. 2. On the context menu, click Create Virtual Machine. The Create a New Virtual Machine wizard appears. 3. On the Choose a Subscription page, select a subscription to use when creating the virtual machine and then click Next. If you aren’t signed in to Azure, click Sign In to sign in. Then, select your Azure subscription in the dropdown list box if it’s not already selected. 4. On the Select a Virtual Machine Image page, select an image type in the Image type dropdown list box, and then select a virtual machine images in the Image name list box. When you're done, click Next. You can choose the following image types. Public Images lists virtual machine images of operating systems and server software such as Windows Server and SQL Server. MSDN Images lists virtual machine images of software available to MSDN subscribers, such as Visual Studio and Microsoft Dynamics. Private Images lists specialized and generalized virtual machine images that you've created. To learn about specialized and generalized virtual machines, see VM Image. See How to Capture a Windows Virtual Machine to Use as a Template for information about how to turn a virtual machine into a template that you can use to quickly create new pre-configured virtual machines. You can click a virtual machine image name to see information about the image on the right side of the page. NOTE You can't add virtual machine images to the Public Images or MSDN Images lists because they are readonly. All virtual machines that you create are added to the Private Images list. If you're an MSDN subscriber with a Visual Studio-level subscription, you can create a pre-built Azure virtual machine that contains Visual Studio, as well as several other images. For more information, see Create a Virtual Machine in Visual Studio by Using Images Visual Studio 2013 Gallery image for MSDN subscribers and MSDN subscriptions.| 5. On the Virtual Machine Basic Settings page, enter a machine name and then add the specifications for the virtual machine, including the size, and a user name and password. When you're done, click Next. You’ll use the new name and password to log into the machine using remote desktop, so it’s a good idea to write them down in case you forget. After you create an Azure virtual machine in Visual Studio, you can change its size and other settings in the Azure Management Portal. NOTE If you choose larger sizes for the virtual machine, extra charges may apply. See Virtual Machines Pricing Details for more information. 6. Virtual machines created in Visual Studio require a cloud service. On the Cloud Service Settings page, select a cloud service for the virtual machine, or click in the dropdown list if you don’t already have a cloud service or want to use a new one. A storage account is also required, so choose a storage account (or create a new storage account) in the Storage account dropdown list box. See Introduction to Microsoft Azure Storage for more information. 7. If you want to specify a virtual network (which is optional), select it in the Virtual Network and Subnet dropdown list boxes. Virtual machines that are members of an availability set are deployed to different fault domains. See Azure Virtual Network for more information. 8. If you want your virtual machine to belong to an availability set (also optional), select the Specify an availability set check box and then choose an availability set in the dropdown list box. 
When you're done, choose the Next button. Adding your virtual machine to an availability set helps your application stay available during network failures, local disk hardware failures, and any planned downtime. You need to use the Azure Management Portal to create virtual networks, subnets, and availability sets. See Manage the Availability of Virtual Machines for more information. 9. On the Endpoints page, specify the public endpoints that you want available to users of your virtual machine. For example, you might choose to enable HTTP (Port 80) in addition to the Remote Desktop and PowerShell endpoints, which are enabled by default. To add an endpoint, choose one in the Port Name dropdown list box and then choose the Add button. To remove an endpoint, choose the red X next to the name in the endpoints list. The endpoints that are available depend on the cloud service you selected for your virtual machine. See Azure Service Endpoints for more information. NOTE Enabling public endpoints makes services on your virtual machine available to the internet. Be sure to install and properly configure the endpoints and services on your virtual machine, such as setting access control lists (ACLs) for the endpoints. See How to Set Up Endpoints to a Virtual Machine for more information. 10. After you’re done configuring the virtual machine settings, choose the Create button to create the virtual machine. As Azure creates the virtual machine, the Azure Activity Log shows the progress of the virtual machine creation operation. To view only virtual machine information, choose the Virtual Machines tab in the Azure Activity Log. If the operation completes successfully, the new virtual machine appears under the Virtual Machines node in Server Explorer. You can log into it by clicking the Connect using Remote Desktop shortcut. Manage your virtual machines On the virtual machine configuration page, in addition to shutting down, connecting, refreshing, and adding checkpoints to the selected virtual machine, you can also view or change settings for the virtual machine. You can: Change the virtual machine size. Select the availability set to use with the virtual machine. Add, remove, or change settings for public endpoints. Add, remove, or configure virtual machine extensions. View information about the disks associated with the virtual machine. View or change virtual machine settings 1. In Server Explorer, choose your virtual machine in the Azure Virtual Machines node. 2. On the shortcut menu, choose Configure to view the virtual machine configuration page. 3. View the virtual machine information or change it. Save or restore the status of your virtual machine As you configure your virtual machine and install software on it, it's a good idea to regularly save your progress by creating virtual machine checkpoints. A checkpoint is a snapshot, or image, of the current state of your virtual machine. If something goes wrong with the virtual machine, or you want to reconfigure the virtual machine, you can save time by restoring it to a previous checkpoint state rather than starting over from scratch. To create a virtual machine checkpoint 1. In Server Explorer, choose your virtual machine in the Azure Virtual Machines node. 2. On the shortcut menu, choose Configure to view the virtual machine configuration page. 3. On the configuration page, choose the Capture Image button. The Capture Virtual Machine dialog appears. 4. Provide an image label and description. 
A default label and description are provided, but you can overwrite them with your own if you like. 5. If you have already run Sysprep on this virtual machine, select the I have run Sysprep on the virtual machine box. Sysprep is a tool that, among other things, removes systems-specific data from the virtual machine’s version of Windows, making it template that others can use. See How to Capture a Windows Virtual Machine to Use as a Template for more information. Back up the VM before running Sysprep. 6. After you’re done configuring the capture settings, choose the Capture button to create the checkpoint. As Azure creates the checkpoint, the Azure Activity Log shows the progress of the operation. When the checkpoint operation completes, you’ll see it in the Azure Activity Log. To manage virtual machine checkpoints To restore a virtual machine to a previously saved state Follow the steps outlined in Step-by-Step: Perform Cloud Restores of Microsoft Azure Virtual Machines using PowerShell - Part 2. To delete a checkpoint 1. Go to the Azure Management Portal. 2. On the virtual machine configuration page, choose the Images tab at the top of the page. 3. Choose the checkpoint you want to delete, and then choose the Delete button at the bottom of the page. Shut down your virtual machine 1. In Server Explorer, choose the virtual machine you want to shut down in the Azure Virtual Machines node. 2. On the shortcut menu, either choose the Shutdown command, or choose Configure to view the virtual machine configuration page, and then choose the Shutdown button. Next steps To learn more about creating virtual machines, see Create a Virtual Machine Running Linux and Create a virtual machine running Windows in the Azure preview portal. Creating a virtual machine for a web application with Visual Studio 3/24/2017 • 2 min to read • Edit Online IMPORTANT Azure has two different deployment models for creating and working with resources: Resource Manager and Classic. This article covers using the Classic deployment model. Microsoft recommends that most new deployments use the Resource Manager model. When you create a web application project for Azure, you can provision a virtual machine in Azure. You can then configure the virtual machine with additional software, or use the virtual machine for diagnostic or debugging purposes. To create a virtual machine when you create a web application, follow these steps: 1. In Visual Studio, click File > New > Project > Web, and then choose ASP.NET Web Application (under the Visual C# or Visual Basic nodes). 2. In the New ASP.NET Project dialog box, select the type of web application you want, and in the Azure section of the dialog box (in the lower-right corner), make sure that the Host in the cloud check box is selected (this check box is labeled Create remote resources in some installations). 3. For this example, in the drop-down list under Microsoft Azure, choose Virtual Machine (v1), and then click the OK button. 4. Sign in to Azure if you're prompted. The Create Virtual Machine dialog box appears. 5. In the DNS name box, enter a name for the virtual machine. The DNS name must be unique in Azure. If the name you entered isn't available, a red exclamation point appears. 6. In the Image list, choose the image you want to base the virtual machine on. You can choose any of the standard Azure virtual machine images or your image that you've uploaded to Azure. 7. Leave the Enable IIS and Web Deploy check box selected unless you plan to install a different web server. 
You won't be able to publish from Visual Studio if you disable Web Deploy. You can add IIS and Web Deploy to any of the packaged Windows Server images, including your own custom images. 8. In the Size list, choose the size of the virtual machine. 9. Specify the sign-in credentials for this virtual machine. Make a note of them, because you'll need them to access the machine through Remote Desktop. 10. In the Location list, choose the region to host the virtual machine. 11. Click the OK button to start creating the virtual machine. You can follow the progress of the operation in the Output window. 12. When the virtual machine is provisioned, publish scripts are created in a PublishScripts node in your solution. The publish script runs and provisions a virtual machine in Azure. The Output window shows the status. The script performs the following actions to set up the virtual machine: Creates the virtual machine if it doesn't already exist. Creates a storage account with a name that begins with devtest in the specified region, but only if there isn't already such a storage account. Creates a cloud service as a container for the virtual machine, and creates a web role for the web application. Configures Web Deploy on the virtual machine. Configures IIS and ASP.NET on the virtual machine. 13. (Optional) You can connect to the new virtual machine. In Server Explorer, expand the Virtual Machines node, choose the node for the virtual machine you created, and on its shortcut menu, choose Connect with Remote Desktop. Alternatively, in Cloud Explorer you can choose Open in Portal on the shortcut menu and connect to the virtual machine there. Next steps If you want to customize the publish scripts you created, read more in-depth information at Using Windows PowerShell Scripts to Publish to Dev and Test Environments. How to run a compute-intensive task in Java on a virtual machine 4/26/2017 • 12 min to read • Edit Online IMPORTANT Azure has two different deployment models for creating and working with resources: Resource Manager and Classic. This article covers using the Classic deployment model. Microsoft recommends that most new deployments use the Resource Manager model. With Azure, you can use a virtual machine to handle compute-intensive tasks. For example, a virtual machine can handle tasks and deliver results to client machines or mobile applications. After reading this article, you will have an understanding of how to create a virtual machine that runs a compute-intensive Java application that can be monitored by another Java application. This tutorial assumes you know how to create Java console applications, can import libraries to your Java application, and can generate a Java archive (JAR). No knowledge of Microsoft Azure is assumed. You will learn: How to create a virtual machine with a Java Development Kit (JDK) already installed. How to remotely log in to your virtual machine. How to create a service bus namespace. How to create a Java application that performs a compute-intensive task. How to create a Java application that monitors the progress of the compute-intensive task. How to run the Java applications. How to stop the Java applications. This tutorial will use the Traveling Salesman Problem for the compute-intensive task. The following is an example of the Java application running the compute-intensive task. The following is an example of the Java application monitoring the compute-intensive task. NOTE To complete this tutorial, you need an Azure account. 
You can activate your MSDN subscriber benefits or sign up for a free trial. To create a virtual machine 1. Log in to the Azure classic portal. 2. Click New, click Compute, click Virtual machine, and then click From Gallery. 3. In the Virtual machine image select dialog box, select JDK 7 Windows Server 2012. Note that JDK 6 Windows Server 2012 is available in case you have legacy applications that are not yet ready to run in JDK 7. 4. Click Next. 5. In the Virtual machine configuration dialog box: a. Specify a name for the virtual machine. b. Specify the size to use for the virtual machine. c. Enter a name for the administrator in the User Name field. Remember this name and the password you will enter next, you will use them when you remotely log in to the virtual machine. d. Enter a password in the New password field, and re-enter it in the Confirm field. This is the Administrator account password. e. Click Next. 6. In the next Virtual machine configuration dialog box: a. For Cloud service, use the default Create a new cloud service. b. The value for Cloud service DNS name must be unique across cloudapp.net. If needed, modify this value so that Azure indicates it is unique. c. Specify a region, affinity group, or virtual network. For purposes of this tutorial, specify a region such as West US. d. For Storage Account, select Use an automatically generated storage account. e. For Availability Set, select (None). f. Click Next. 7. In the final Virtual machine configuration dialog box: a. Accept the default endpoint entries. b. Click Complete. To remotely log in to your virtual machine 1. 2. 3. 4. 5. Log on to the Azure classic portal. Click Virtual machines. Click the name of the virtual machine that you want to log in to. Click Connect. Respond to the prompts as needed to connect to the virtual machine. When prompted for the administrator name and password, use the values that you provided when you created the virtual machine. Note that the Azure Service Bus functionality requires the Baltimore CyberTrust Root certificate to be installed as part of your JRE's cacerts store. This certificate is automatically included in the Java Runtime Environment (JRE) used by this tutorial. If you do not have this certificate in your JRE cacerts store, see Adding a Certificate to the Java CA Certificate Store for information on adding it (as well as information on viewing the certificates in your cacerts store). How to create a service bus namespace To begin using Service Bus queues in Azure, you must first create a service namespace. A service namespace provides a scoping container for addressing Service Bus resources within your application. To create a service namespace: 1. Log on to the Azure classic portal. 2. In the lower-left navigation pane of the Azure classic portal, click Service Bus, Access Control & Caching. 3. In the upper-left pane of the Azure classic portal, click the Service Bus node, and then click the New button. 4. In the Create a new Service Namespace dialog box, enter a Namespace, and then to make sure that it is unique, click the Check Availability button. 5. After making sure the namespace name is available, choose the country or region in which your namespace should be hosted, and then click the Create Namespace button. The namespace you created will then appear in the Azure classic portal and takes a moment to activate. Wait until the status is Active before continuing with the next step. 
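If you would rather script this step than use the classic portal, the classic Azure PowerShell module includes the New-AzureSBNamespace cmdlet. The following is a minimal sketch, assuming Azure PowerShell is installed and you have signed in with Add-AzureAccount; the namespace name and location are placeholders.
# Namespace names must be globally unique; replace the placeholders with your own values.
New-AzureSBNamespace -Name "<your namespace>" -Location "East US" -NamespaceType Messaging -CreateACSNamespace $true
# List your namespaces and check that the new one reports a status of Active.
Get-AzureSBNamespace
The -CreateACSNamespace parameter matters for this tutorial because the applications authenticate with the ACS Default Issuer and Default Key described in the next section.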
Obtain the Default Management Credentials for the namespace In order to perform management operations, such as creating a queue, on the new namespace, you need to obtain the management credentials for the namespace. 1. In the left navigation pane, click the Service Bus node to display the list of available namespaces. 2. Select the namespace you just created from the list shown. 3. The right-hand Properties pane lists the properties for the new namespace. 4. The Default Key is hidden. Click the View button to display the security credentials. 5. Make a note of the Default Issuer and the Default Key as you will use this information below to perform operations with the namespace. How to create a Java application that performs a compute-intensive task 1. On your development machine (which does not have to be the virtual machine that you created), download the Azure SDK for Java. 2. Create a Java console application using the example code at the end of this section. In this tutorial, we'll use TSPSolver.java as the Java file name. Modify the your_service_bus_namespace, your_service_bus_owner, and your_service_bus_key placeholders to use your service bus namespace, Default Issuer and Default Key values, respectively. 3. After coding, export the application to a runnable Java archive (JAR), and package the required libraries into the generated JAR. In this tutorial, we'll use TSPSolver.jar as the generated JAR name. // TSPSolver.java import import import import import import import import import import com.microsoft.windowsazure.services.core.Configuration; com.microsoft.windowsazure.services.core.ServiceException; com.microsoft.windowsazure.services.serviceBus.*; com.microsoft.windowsazure.services.serviceBus.models.*; java.io.*; java.text.DateFormat; java.text.SimpleDateFormat; java.util.ArrayList; java.util.Date; java.util.List; public class TSPSolver { // Value specifying how often to provide an update to the console. private static long loopCheck = 100000000; private static long nTimes = 0, nLoops=0; private private private private private static static static static static double[][] distances; String[] cityNames; int[] bestOrder; double minDistance; ServiceBusContract service; private static void buildDistances(String fileLocation, int numCities) throws Exception{ try{ BufferedReader file = new BufferedReader(new InputStreamReader(new DataInputStream(new FileInputStream(new File(fileLocation))))); double[][] cityLocs = new double[numCities][2]; for (int i = 0; i<numCities; i++){ String[] line = file.readLine().split(", "); cityNames[i] = line[0]; cityLocs[i][0] = Double.parseDouble(line[1]); cityLocs[i][1] = Double.parseDouble(line[2]); } } for (int i = 0; i<numCities; i++){ for (int j = i; j<numCities; j++){ distances[i][j] = Math.hypot(Math.abs(cityLocs[i][0] - cityLocs[j][0]), Math.abs(cityLocs[i][1] - cityLocs[j][1])); distances[j][i] = distances[i][j]; } } } catch (Exception e){ throw e; } } private static void permutation(List<Integer> startCities, double distSoFar, List<Integer> restCities) throws Exception { try { nTimes++; if (nTimes == loopCheck) { nLoops++; nTimes = 0; DateFormat dateFormat = new SimpleDateFormat("MM/dd/yyyy HH:mm:ss"); Date date = new Date(); System.out.print("Current time is " + dateFormat.format(date) + ". 
"); System.out.println( "Completed " + nLoops + " iterations of size of " + loopCheck + "."); } if ((restCities.size() == 1) && ((minDistance == -1) || (distSoFar + distances[restCities.get(0)] [startCities.get(0)] + distances[restCities.get(0)][startCities.get(startCities.size()-1)] < minDistance))){ startCities.add(restCities.get(0)); newBestDistance(startCities, distSoFar + distances[restCities.get(0)][startCities.get(0)] + distances[restCities.get(0)][startCities.get(startCities.size()-2)]); startCities.remove(startCities.size()-1); } else{ for (int i=0; i<restCities.size(); i++){ startCities.add(restCities.get(0)); restCities.remove(0); permutation(startCities, distSoFar + distances[startCities.get(startCities.size()-1)] [startCities.get(startCities.size()-2)],restCities); restCities.add(startCities.get(startCities.size()-1)); startCities.remove(startCities.size()-1); } } } catch (Exception e) { throw e; } } private static void newBestDistance(List<Integer> cities, double distance) throws ServiceException, Exception { try { minDistance = distance; String cityList = "Shortest distance is "+minDistance+", with route: "; for (int i = 0; i<bestOrder.length; i++){ bestOrder[i] = cities.get(i); cityList += cityNames[bestOrder[i]]; if (i != bestOrder.length -1) cityList += ", "; } System.out.println(cityList); service.sendQueueMessage("TSPQueue", new BrokeredMessage(cityList)); } catch (ServiceException se) { { throw se; } catch (Exception e) { throw e; } } public static void main(String args[]){ try { Configuration config = ServiceBusConfiguration.configureWithWrapAuthentication( "your_service_bus_namespace", "your_service_bus_owner", "your_service_bus_key", ".servicebus.windows.net", "-sb.accesscontrol.windows.net/WRAPv0.9"); service = ServiceBusService.create(config); int numCities = 10; // Use as the default, if no value is specified at command line. if (args.length != 0) { if (args[0].toLowerCase().compareTo("createqueue")==0) { // No processing to occur other than creating the queue. QueueInfo queueInfo = new QueueInfo("TSPQueue"); service.createQueue(queueInfo); System.out.println("Queue named TSPQueue was created."); System.exit(0); } if (args[0].toLowerCase().compareTo("deletequeue")==0) { // No processing to occur other than deleting the queue. service.deleteQueue("TSPQueue"); System.out.println("Queue named TSPQueue was deleted."); System.exit(0); } // Neither creating or deleting a queue. // Assume the value passed in is the number of cities to solve. numCities = Integer.valueOf(args[0]); } System.out.println("Running for " + numCities + " cities."); List<Integer> startCities = new ArrayList<Integer>(); List<Integer> restCities = new ArrayList<Integer>(); startCities.add(0); for(int i = 1; i<numCities; i++) restCities.add(i); distances = new double[numCities][numCities]; cityNames = new String[numCities]; buildDistances("c:\\TSP\\cities.txt", numCities); minDistance = -1; bestOrder = new int[numCities]; permutation(startCities, 0, restCities); System.out.println("Final solution found!"); service.sendQueueMessage("TSPQueue", new BrokeredMessage("Complete")); } catch (ServiceException se) { System.out.println(se.getMessage()); System.out.println(se.getMessage()); se.printStackTrace(); System.exit(-1); } catch (Exception e) { System.out.println(e.getMessage()); e.printStackTrace(); System.exit(-1); } } } How to create a Java application that monitors the progress of the compute-intensive task 1. 
On your development machine, create a Java console application using the example code at the end of this section. In this tutorial, we'll use TSPClient.java as the Java file name. As shown earlier, modify the your_service_bus_namespace, your_service_bus_owner, and your_service_bus_key placeholders to use your service bus namespace, Default Issuer and Default Key values, respectively. 2. Export the application to a runnable JAR, and package the required libraries into the generated JAR. In this tutorial, we'll use TSPClient.jar as the generated JAR name. // TSPClient.java import import import import import import java.util.Date; java.text.DateFormat; java.text.SimpleDateFormat; com.microsoft.windowsazure.services.serviceBus.*; com.microsoft.windowsazure.services.serviceBus.models.*; com.microsoft.windowsazure.services.core.*; public class TSPClient { public static void main(String[] args) { try { DateFormat dateFormat = new SimpleDateFormat("MM/dd/yyyy HH:mm:ss"); Date date = new Date(); System.out.println("Starting at " + dateFormat.format(date) + "."); String namespace = "your_service_bus_namespace"; String issuer = "your_service_bus_owner"; String key = "your_service_bus_key"; Configuration config; config = ServiceBusConfiguration.configureWithWrapAuthentication( namespace, issuer, key, ".servicebus.windows.net", "-sb.accesscontrol.windows.net/WRAPv0.9"); ServiceBusContract service = ServiceBusService.create(config); BrokeredMessage message; int waitMinutes = 3; // Use as the default, if no value is specified at command line. if (args.length != 0) { waitMinutes = Integer.valueOf(args[0]); } } String waitString; waitString = (waitMinutes == 1) ? "minute." : waitMinutes + " minutes."; // This queue must have previously been created. service.getQueue("TSPQueue"); int numRead; String s = null; while (true) { ReceiveQueueMessageResult resultQM = service.receiveQueueMessage("TSPQueue"); message = resultQM.getValue(); if (null != message && null != message.getMessageId()) { // Display the queue message. byte[] b = new byte[200]; System.out.print("From queue: "); s = null; numRead = message.getBody().read(b); while (-1 != numRead) { s = new String(b); s = s.trim(); System.out.print(s); numRead = message.getBody().read(b); } System.out.println(); if (s.compareTo("Complete") == 0) { // No more processing to occur. date = new Date(); System.out.println("Finished at " + dateFormat.format(date) + "."); break; } } else { // The queue is empty. System.out.println("Queue is empty. Sleeping for another " + waitString); Thread.sleep(60000 * waitMinutes); } } } catch (ServiceException se) { System.out.println(se.getMessage()); se.printStackTrace(); System.exit(-1); } catch (Exception e) { System.out.println(e.getMessage()); e.printStackTrace(); System.exit(-1); } } } } How to run the Java applications Run the compute-intensive application, first to create the queue, then to solve the Traveling Saleseman Problem, which will add the current best route to the service bus queue. While the compute-intensive application is running (or afterwards), run the client to display results from the service bus queue. To run the compute -intensive application 1. Log on to your virtual machine. 2. Create a folder where you will run your application. For example, c:\TSP. 3. Copy TSPSolver.jar to c:\TSP, 4. Create a file named c:\TSP\cities.txt with the following contents. 
City_1, 1002.81, -1841.35 City_2, -953.55, -229.6 City_3, -1363.11, -1027.72 City_4, -1884.47, -1616.16 City_5, 1603.08, -1030.03 City_6, -1555.58, 218.58 City_7, 578.8, -12.87 City_8, 1350.76, 77.79 City_9, 293.36, -1820.01 City_10, 1883.14, 1637.28 City_11, -1271.41, -1670.5 City_12, 1475.99, 225.35 City_13, 1250.78, 379.98 City_14, 1305.77, 569.75 City_15, 230.77, 231.58 City_16, -822.63, -544.68 City_17, -817.54, -81.92 City_18, 303.99, -1823.43 City_19, 239.95, 1007.91 City_20, -1302.92, 150.39 City_21, -116.11, 1933.01 City_22, 382.64, 835.09 City_23, -580.28, 1040.04 City_24, 205.55, -264.23 City_25, -238.81, -576.48 City_26, -1722.9, -909.65 City_27, 445.22, 1427.28 City_28, 513.17, 1828.72 City_29, 1750.68, -1668.1 City_30, 1705.09, -309.35 City_31, -167.34, 1003.76 City_32, -1162.85, -1674.33 City_33, 1490.32, 821.04 City_34, 1208.32, 1523.3 City_35, 18.04, 1857.11 City_36, 1852.46, 1647.75 City_37, -167.44, -336.39 City_38, 115.4, 0.2 City_39, -66.96, 917.73 City_40, 915.96, 474.1 City_41, 140.03, 725.22 City_42, -1582.68, 1608.88 City_43, -567.51, 1253.83 City_44, 1956.36, 830.92 City_45, -233.38, 909.93 City_46, -1750.45, 1940.76 City_47, 405.81, 421.84 City_48, 363.68, 768.21 City_49, -120.3, -463.13 City_50, 588.51, 679.33 5. At a command prompt, change directories to c:\TSP. 6. Ensure the JRE's bin folder is in the PATH environment variable. 7. You'll need to create the service bus queue before you run the TSP solver permutations. Run the following command to create the service bus queue. java -jar TSPSolver.jar createqueue 8. Now that the queue is created, you can run the TSP solver permutations. For example, run the following command to run the solver for 8 cities. java -jar TSPSolver.jar 8 If you don't specify a number, it will run for 10 cities. As the solver finds current shortest routes, it will add them to the queue. NOTE The larger the number that you specify, the longer the solver will run. For example, running for 14 cities could take several minutes, and running for 15 cities could take several hours. Increasing to 16 or more cities could result in days of runtime (eventually weeks, months, and years). This is due to the rapid increase in the number of permutations evaluated by the solver as the number of cities increases. How to run the monitoring client application 1. Log on to your machine where you will run the client application. This does not need to be the same machine running the TSPSolver application, although it can be. 2. Create a folder where you will run your application. For example, c:\TSP. 3. Copy TSPClient.jar to c:\TSP, 4. Ensure the JRE's bin folder is in the PATH environment variable. 5. At a command prompt, change directories to c:\TSP. 6. Run the following command. java -jar TSPClient.jar Optionally, specify the number of minutes to sleep in between checking the queue, by passing in a command-line argument. The default sleep period for checking the queue is 3 minutes, which is used if no command-line argument is passed to TSPClient. If you want to use a different value for the sleep interval, for example, one minute, run the following command. java -jar TSPClient.jar 1 The client will run until it sees a queue message of "Complete". Note that if you run multiple occurrences of the solver without running the client, you may need to run the client multiple times to completely empty the queue. Alternatively, you can delete the queue and then create it again. To delete the queue, run the following TSPSolver (not TSPClient) command. 
java -jar TSPSolver.jar deletequeue The solver will run until it finishes examining all routes. How to stop the Java applications For both the solver and client applications, you can press Ctrl+C to exit if you want to end prior to normal completion. Django Hello World web application on a Windows Server VM 4/3/2017 • 3 min to read • Edit Online IMPORTANT Azure has two different deployment models for creating and working with resources: Resource Manager and Classic. This article covers using the Classic deployment model. Microsoft recommends that most new deployments use the Resource Manager model. For a Resource Manager template to deploy Django, see here. This tutorial describes how to host a Django-based website on Microsoft Azure using a Windows Server virtual machine. This tutorial assumes you have no prior experience using Azure. After completing this tutorial, you will have a Django-based application up and running in the cloud. You will learn how to: Set up an Azure virtual machine to host Django. While this tutorial explains how to accomplish this under Windows Server, the same could also be done with a Linux VM hosted in Azure. Create a new Django application from Windows. By following this tutorial, you will build a simple Hello World web application. The application will be hosted in an Azure virtual machine. A screenshot of the completed application appears next. NOTE To complete this tutorial, you need an Azure account. You can activate your MSDN subscriber benefits or sign up for a free trial. Creating and configuring an Azure virtual machine to host Django 1. Follow the instructions given here to create an Azure virtual machine of the Windows Server 2012 R2 Datacenter distribution. 2. Instruct Azure to direct port 80 traffic from the web to port 80 on the virtual machine: Navigate to your newly created virtual machine in the Azure classic portal and click the ENDPOINTS tab. Click the ADD button at the bottom of the screen. Open up the TCP protocol's PUBLIC PORT 80 as PRIVATE PORT 80. 3. From the DASHBOARD tab, click CONNECT to use Remote Desktop to remotely log into the newly created Azure virtual machine. Important Note: All instructions below assume you logged into the virtual machine correctly and are issuing commands there rather than on your local machine. Installing Python, Django, WFastCGI Note: In order to download using Internet Explorer, you may have to configure IE ESC settings (Start/Administrative Tools/Server Manager/Local Server, then click IE Enhanced Security Configuration, set to Off). 1. Install the latest Python 2.7 or 3.4 from python.org. 2. Install the wfastcgi and django packages using pip. For Python 2.7, use the following commands. c:\python27\scripts\pip install wfastcgi c:\python27\scripts\pip install django For Python 3.4, use the following commands. c:\python34\scripts\pip install wfastcgi c:\python34\scripts\pip install django Installing IIS with FastCGI 1. Install IIS with FastCGI support. This may take several minutes to execute. start /wait %windir%\System32\PkgMgr.exe /iu:IIS-WebServerRole;IIS-WebServer;IIS-CommonHttpFeatures;IIS-StaticContent;IIS-DefaultDocument;IIS-DirectoryBrowsing;IIS-HttpErrors;IIS-HealthAndDiagnostics;IIS-HttpLogging;IIS-LoggingLibraries;IIS-RequestMonitor;IIS-Security;IIS-RequestFiltering;IIS-HttpCompressionStatic;IIS-WebServerManagementTools;IIS-ManagementConsole;WAS-WindowsActivationService;WAS-ProcessModel;WAS-NetFxEnvironment;WAS-ConfigurationAPI;IIS-CGI Creating a new Django application 1. 
From C:\inetpub\wwwroot, enter the following command to create a new Django project: For Python 2.7, use the following command. C:\Python27\Scripts\django-admin.exe startproject helloworld For Python 3.4, use the following command. C:\Python34\Scripts\django-admin.exe startproject helloworld 2. The django-admin command generates a basic structure for Django-based websites: helloworld\manage.py helps you to start hosting and stop hosting your Django-based website helloworld\helloworld\settings.py contains Django settings for your application. helloworld\helloworld\urls.py contains the mapping code between each url and its view. 3. Create a new file named views.py in the C:\inetpub\wwwroot\helloworld\helloworld directory. This will contain the view that renders the "hello world" page. Start your editor and enter the following: from django.http import HttpResponse def home(request): html = "<html><body>Hello World!</body></html>" return HttpResponse(html) 4. Replace the contents of the urls.py file with the following. from django.conf.urls import patterns, url urlpatterns = patterns('', url(r'^$', 'helloworld.views.home', name='home'), ) Configuring IIS 1. Unlock the handlers section in the global applicationhost.config. This will enable the use of the python handler in your web.config. %windir%\system32\inetsrv\appcmd unlock config -section:system.webServer/handlers 2. Enable WFastCGI. This will add an application to the global applicationhost.config that refers to your Python interpreter executable and the wfastcgi.py script. Python 2.7: c:\python27\scripts\wfastcgi-enable Python 3.4: c:\python34\scripts\wfastcgi-enable 3. Create a web.config file in C:\inetpub\wwwroot\helloworld. The value of the scriptProcessor attribute should match the output of the previous step. See the page for wfastcgi on pypi for more on wfastcgi settings. Python 2.7: <configuration> <appSettings> <add key="WSGI_HANDLER" value="django.core.handlers.wsgi.WSGIHandler()" /> <add key="PYTHONPATH" value="C:\inetpub\wwwroot\helloworld" /> <add key="DJANGO_SETTINGS_MODULE" value="helloworld.settings" /> </appSettings> <system.webServer> <handlers> <add name="Python FastCGI" path="*" verb="*" modules="FastCgiModule" scriptProcessor="C:\Python27\python.exe|C:\Python27\Lib\site-packages\wfastcgi.pyc" resourceType="Unspecified" /> </handlers> </system.webServer> </configuration> Python 3.4: <configuration> <appSettings> <add key="WSGI_HANDLER" value="django.core.handlers.wsgi.WSGIHandler()" /> <add key="PYTHONPATH" value="C:\inetpub\wwwroot\helloworld" /> <add key="DJANGO_SETTINGS_MODULE" value="helloworld.settings" /> </appSettings> <system.webServer> <handlers> <add name="Python FastCGI" path="*" verb="*" modules="FastCgiModule" scriptProcessor="C:\Python34\python.exe|C:\Python34\Lib\site-packages\wfastcgi.py" resourceType="Unspecified" /> </handlers> </system.webServer> </configuration> 4. Update the location of the IIS Default Web Site to point to the django project folder. %windir%\system32\inetsrv\appcmd set vdir "Default Web Site/" physicalPath:"C:\inetpub\wwwroot\helloworld" 5. Finally, load the web page in your browser. Shutting down your Azure virtual machine When you're done with this tutorial, shut down and/or remove your newly created Azure virtual machine to free up resources for other tutorials and avoid incurring Azure usage charges. 
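Because this virtual machine was created with the classic deployment model, you can also stop or remove it from Azure PowerShell instead of the portal. A minimal sketch, assuming the classic Azure PowerShell module is installed and you are signed in; the cloud service and VM names are placeholders.
# Stop and deallocate the VM so it stops accruing compute charges.
Stop-AzureVM -ServiceName "<cloud service name>" -Name "<virtual machine name>" -Force
# Or delete the VM entirely; -DeleteVHD also removes its underlying disks from storage.
Remove-AzureVM -ServiceName "<cloud service name>" -Name "<virtual machine name>" -DeleteVHD
The -Force switch suppresses the confirmation that appears when you stop the last VM in a cloud service, since deallocating it also releases the deployment's public IP address.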
How to install and configure Symantec Endpoint Protection on a Windows VM 3/24/2017 • 2 min to read • Edit Online IMPORTANT Azure has two different deployment models for creating and working with resources: Resource Manager and Classic. This article covers using the Classic deployment model. Microsoft recommends that most new deployments use the Resource Manager model. This article shows you how to install and configure the Symantec Endpoint Protection client on an existing virtual machine (VM) running Windows Server. This full client includes services such as virus and spyware protection, firewall, and intrusion prevention. The client is installed as a security extension by using the VM Agent. If you have an existing subscription from Symantec for an on-premises solution, you can use it to protect your Azure virtual machines. If you're not a customer yet, you can sign up for a trial subscription. For more information about this solution, see Symantec Endpoint Protection on Microsoft's Azure platform. This page also has links to licensing information and instructions for installing the client if you're already a Symantec customer. Install Symantec Endpoint Protection on an existing VM Before you begin, you need the following: The Azure PowerShell module, version 0.8.2 or later, on your work computer. You can check the version of Azure PowerShell that you have installed with the Get-Module azure | format-table version command. For instructions and a link to the latest version, see How to Install and Configure Azure PowerShell. Log in to your Azure subscription using Add-AzureAccount. The VM Agent running on the Azure Virtual Machine. First, verify that the VM Agent is already installed on the virtual machine. Fill in the cloud service name and virtual machine name, and then run the following commands at an administrator-level Azure PowerShell command prompt. Replace everything within the quotes, including the < and > characters. TIP If you don't know the cloud service and virtual machine names, run Get-AzureVM to list the names for all virtual machines in your current subscription. $CSName = "<cloud service name>" $VMName = "<virtual machine name>" $vm = Get-AzureVM -ServiceName $CSName -Name $VMName write-host $vm.VM.ProvisionGuestAgent If the write-host command displays True, the VM Agent is installed. If it displays False, see the instructions and a link to the download in the Azure blog post VM Agent and Extensions - Part 2. If the VM Agent is installed, run these commands to install the Symantec Endpoint Protection agent. $Agent = Get-AzureVMAvailableExtension -Publisher Symantec -ExtensionName SymantecEndpointProtection Set-AzureVMExtension -Publisher Symantec -Version $Agent.Version -ExtensionName SymantecEndpointProtection -VM $vm | Update-AzureVM To verify that the Symantec security extension has been installed and is up-to-date: 1. Log on to the virtual machine. For instructions, see How to Log on to a Virtual Machine Running Windows Server. 2. For Windows Server 2008 R2, click Start > Symantec Endpoint Protection. For Windows Server 2012 or Windows Server 2012 R2, from the start screen, type Symantec, and then click Symantec Endpoint Protection. 3. From the Status tab of the Status-Symantec Endpoint Protection window, apply updates or restart if needed. 
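As an alternative to signing in to the VM, you can also check which resource extensions are configured from Azure PowerShell. A minimal sketch, reusing the $vm object returned by the Get-AzureVM command above:
# $vm comes from the Get-AzureVM command above; this lists the resource extensions configured on it.
Get-AzureVMExtension -VM $vm
This reports the extension configuration that Azure holds for the VM; checking the client's definition updates still happens inside the guest, as described in step 3.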
Additional resources How to Log on to a Virtual Machine Running Windows Server Azure VM Extensions and Features How to install and configure Trend Micro Deep Security as a Service on a Windows VM 4/27/2017 • 3 min to read • Edit Online IMPORTANT Azure has two different deployment models for creating and working with resources: Resource Manager and Classic. This article covers using the Classic deployment model. Microsoft recommends that most new deployments use the Resource Manager model. This article shows you how to install and configure Trend Micro Deep Security as a Service on a new or existing virtual machine (VM) running Windows Server. Deep Security as a Service includes anti-malware protection, a firewall, an intrusion prevention system, and integrity monitoring. The client is installed as a security extension via the VM Agent. On a new virtual machine, you install the Deep Security Agent, as the VM Agent is created automatically by the Azure portal. An existing VM created using the classic portal, the Azure CLI, or PowerShell might not have a VM agent. For an existing virtual machine that doesn't have the VM Agent, you need to download and install it first. This article covers both situations. If you have a current subscription from Trend Micro for an on-premises solution, you can use it to help protect your Azure virtual machines. If you're not a customer yet, you can sign up for a trial subscription. For more information about this solution, see the Trend Micro blog post Microsoft Azure VM Agent Extension For Deep Security. Install the Deep Security Agent on a new VM The Azure portal lets you install the Trend Micro security extension when you use an image from the Marketplace to create the virtual machine. If you're creating a single virtual machine, using the portal is an easy way to add protection from Trend Micro. Using an entry from the Marketplace opens a wizard that helps you set up the virtual machine. You use the Settings blade, the third panel of the wizard, to install the Trend Micro security extension. For general instructions, see Create a virtual machine running Windows in the Azure portal. When you get to the Settings blade of the wizard, do the following steps: 1. Click Extensions, then click Add extension in the next pane. 2. Select Deep Security Agent in the New resource pane. In the Deep Security Agent pane, click Create. 3. Enter the Tenant Identifier and Tenant Activation Password for the extension. Optionally, you can enter a Security Policy Identifier. Then, click OK to add the client. Install the Deep Security Agent on an existing VM To install the agent on an existing VM, you need the following items: The Azure PowerShell module, version 0.8.2 or newer, installed on your local computer. You can check the version of Azure PowerShell that you have installed by using the Get-Module azure | format-table version command. For instructions and a link to the latest version, see How to install and configure Azure PowerShell. Log in to your Azure subscription using Add-AzureAccount . The VM Agent installed on the target virtual machine. First, verify that the VM Agent is already installed. Fill in the cloud service name and virtual machine name, and then run the following commands at an administrator-level Azure PowerShell command prompt. Replace everything within the quotes, including the < and > characters. 
$CSName = "<cloud service name>"
$VMName = "<virtual machine name>"
$vm = Get-AzureVM -ServiceName $CSName -Name $VMName
write-host $vm.VM.ProvisionGuestAgent
If you don't know the cloud service and virtual machine name, run Get-AzureVM to display that information for all the virtual machines in your current subscription.
If the write-host command returns True, the VM Agent is installed. If it returns False, see the instructions and a link to the download in the Azure blog post VM Agent and Extensions - Part 2.
If the VM Agent is installed, run these commands.
$Agent = Get-AzureVMAvailableExtension -Publisher TrendMicro.DeepSecurity -ExtensionName TrendMicroDSA
Set-AzureVMExtension -Publisher TrendMicro.DeepSecurity -Version $Agent.Version -ExtensionName TrendMicroDSA -VM $vm | Update-AzureVM
Next steps
It takes a few minutes for the agent to start running when it is installed. After that, you need to activate Deep Security on the virtual machine so it can be managed by a Deep Security Manager. See the following articles for additional instructions:
Trend's article about this solution, Instant-On Cloud Security for Microsoft Azure
A sample Windows PowerShell script to configure the virtual machine
Instructions for the sample
Additional resources
How to log on to a virtual machine running Windows Server
Azure VM Extensions and features
How to configure an availability set for Windows virtual machines in the classic deployment model 3/30/2017 • 3 min to read • Edit Online
IMPORTANT Azure has two different deployment models for creating and working with resources: Resource Manager and Classic. This article covers using the Classic deployment model. Microsoft recommends that most new deployments use the Resource Manager model. You can also configure availability sets in Resource Manager deployments.
An availability set helps keep your virtual machines available during downtime, such as during maintenance. Placing two or more similarly configured virtual machines in an availability set creates the redundancy needed to maintain availability of the applications or services that your virtual machine runs. For details about how this works, see Manage the availability of virtual machines.
It's a best practice to use both availability sets and load-balancing endpoints to help ensure that your application is always available and running efficiently. For details about load-balanced endpoints, see Load balancing for Azure infrastructure services.
You can add classic virtual machines into an availability set by using one of two options:
Option 1: Create a virtual machine and an availability set at the same time. Then, add new virtual machines to the set when you create those virtual machines.
Option 2: Add an existing virtual machine to an availability set.
NOTE In the classic model, virtual machines that you want to put in the same availability set must belong to the same cloud service.
Option 1: Create a virtual machine and an availability set at the same time
You can use either the Azure portal or Azure PowerShell commands to do this.
To use the Azure portal:
1. If you haven't already done so, sign in to the Azure portal.
2. On the hub menu, click + New, and then click Virtual Machine.
3. Select the Marketplace virtual machine image you wish to use. You can choose to create a Linux or Windows virtual machine.
4. For the selected virtual machine, verify that the deployment model is set to Classic and then click Create.
5. Enter a virtual machine name, user name and password (for Windows machines) or SSH public key (for Linux machines).
6. Choose the VM size and then click Select to continue.
7. Choose Optional Configuration > Availability set, and select the availability set you wish to add the virtual machine to.
8. Review your configuration settings. When you're done, click Create.
9. While Azure creates your virtual machine, you can track the progress under Virtual Machines in the hub menu.
To use Azure PowerShell commands to create an Azure virtual machine and add it to a new or existing availability set, see Use Azure PowerShell to create and preconfigure Windows-based virtual machines.
Option 2: Add an existing virtual machine to an availability set
In the Azure portal, you can add existing classic virtual machines to an existing availability set or create a new one for them. (Keep in mind that the virtual machines in the same availability set must belong to the same cloud service.) The steps are almost the same. With Azure PowerShell, you can add the virtual machine to an existing availability set.
1. If you have not already done so, sign in to the Azure portal.
2. On the Hub menu, click Virtual Machines (classic).
3. From the list of virtual machines, select the name of the virtual machine that you want to add to the set.
4. Choose Availability set from the virtual machine Settings.
5. Select the availability set you wish to add the virtual machine to. The virtual machine must belong to the same cloud service as the availability set.
6. Click Save.
To use Azure PowerShell commands, open an administrator-level Azure PowerShell session and run the following command. For the placeholders (such as <VmCloudServiceName>), replace everything within the quotes, including the < and > characters, with the correct names.
Get-AzureVM -ServiceName "<VmCloudServiceName>" -Name "<VmName>" | Set-AzureAvailabilitySet -AvailabilitySetName "<AvSetName>" | Update-AzureVM
NOTE The virtual machine might have to be restarted to finish adding it to the availability set.
Next steps
For additional articles about classic deployments, see Technical articles for Windows VMs in the classic deployment model.
Resize a Windows VM created in the classic deployment model 3/30/2017 • 2 min to read • Edit Online
This article shows you how to resize a Windows VM created in the classic deployment model by using Azure PowerShell.
When considering the ability to resize a VM, two concepts control the range of sizes available to the virtual machine. The first is the region in which your VM is deployed. The list of VM sizes available in a region is under the Services tab of the Azure Regions web page. The second is the physical hardware currently hosting your VM. The physical servers hosting VMs are grouped together in clusters of common physical hardware. The method of changing a VM size differs depending on whether the desired new VM size is supported by the hardware cluster currently hosting the VM.
IMPORTANT Azure has two different deployment models for creating and working with resources: Resource Manager and Classic. This article covers using the Classic deployment model. Microsoft recommends that most new deployments use the Resource Manager model. You can also resize a VM created in the Resource Manager deployment model.
Add your account
You must configure Azure PowerShell to work with classic Azure resources. Follow these steps to configure Azure PowerShell to manage classic resources.
1. At the PowerShell prompt, type Add-AzureAccount and press Enter.
2. Type the email address associated with your Azure subscription and click Continue.
3. Type the password for your account.
4. Click Sign in.
Resize in the same hardware cluster
To resize a VM to a size available in the hardware cluster hosting the VM, perform the following steps.
1. Run the following PowerShell command to list the VM sizes available in the hardware cluster hosting the cloud service which contains the VM.
Get-AzureService | where {$_.ServiceName -eq "<cloudServiceName>"}
2. Run the following commands to resize the VM.
Get-AzureVM -ServiceName <cloudServiceName> -Name <vmName> | Set-AzureVMSize -InstanceSize <newVMSize> | Update-AzureVM
Resize on a new hardware cluster
To resize a VM to a size not available in the hardware cluster hosting the VM, the cloud service and all VMs in the cloud service must be recreated. Each cloud service is hosted on a single hardware cluster, so all VMs in the cloud service must be a size that is supported on that hardware cluster. The following steps describe how to resize a VM by deleting and then recreating the cloud service.
1. Run the following PowerShell command to list the VM sizes available in the region.
Get-AzureLocation | where {$_.Name -eq "<locationName>"}
2. Make note of all configuration settings for each VM in the cloud service which contains the VM to be resized.
3. Delete all VMs in the cloud service, selecting the option to retain the disks for each VM.
4. Recreate the VM to be resized using the desired VM size.
5. Recreate all other VMs which were in the cloud service using a VM size available in the hardware cluster now hosting the cloud service.
A sample script for deleting and recreating a cloud service using a new VM size can be found here.
Next steps
Resize a VM created in the Resource Manager deployment model.
Manage your virtual machines by using Azure PowerShell 4/27/2017 • 2 min to read • Edit Online
IMPORTANT Azure has two different deployment models for creating and working with resources: Resource Manager and Classic. This article covers using the Classic deployment model. Microsoft recommends that most new deployments use the Resource Manager model. For common PowerShell commands using the Resource Manager model, see here.
Many tasks you do each day to manage your VMs can be automated by using Azure PowerShell cmdlets. This article gives you example commands for simpler tasks, and links to articles that show the commands for more complex tasks.
NOTE If you haven't installed and configured Azure PowerShell yet, you can get instructions in the article How to install and configure Azure PowerShell.
How to use the example commands
You'll need to replace some of the text in the commands with text that's appropriate for your environment. The < and > symbols indicate text you need to replace. When you replace the text, remove the symbols but leave the quote marks in place.
Get a VM
This is a basic task you'll use often. Use it to get information about a VM, perform tasks on a VM, or get output to store in a variable.
To get information about the VM, run this command, replacing everything in the quotes, including the < and > characters:
Get-AzureVM -ServiceName "<cloud service name>" -Name "<virtual machine name>"
To store the output in a $vm variable, run:
$vm = Get-AzureVM -ServiceName "<cloud service name>" -Name "<virtual machine name>"
Log on to a Windows-based VM
Run these commands:
NOTE You can get the virtual machine and cloud service name from the display of the Get-AzureVM command.
$svcName = ""
$vmName = ""
$localPath = ""
$localFile = $localPath + "\" + $vmName + ".rdp"
Get-AzureRemoteDesktopFile -ServiceName $svcName -Name $vmName -LocalPath $localFile -Launch
Stop a VM
Run this command:
Stop-AzureVM -ServiceName "<cloud service name>" -Name "<virtual machine name>"
IMPORTANT Use the StayProvisioned parameter to keep the virtual IP (VIP) of the cloud service in case it's the last VM in that cloud service. If you use the StayProvisioned parameter, you'll still be billed for the VM.
Start a VM
Run this command:
Start-AzureVM -ServiceName "<cloud service name>" -Name "<virtual machine name>"
Attach a data disk
This task requires a few steps. First, you use the Add-AzureDataDisk cmdlet to add the disk to the $vm object. Then, you use the Update-AzureVM cmdlet to update the configuration of the VM. You'll also need to decide whether to attach a new disk or one that contains data. For a new disk, the command creates the .vhd file and attaches it.
To attach a new disk, run this command:
Add-AzureDataDisk -CreateNew -DiskSizeInGB 128 -DiskLabel "<main>" -LUN <0> -VM $vm | Update-AzureVM
To attach an existing data disk, run this command:
Add-AzureDataDisk -Import -DiskName "<MyExistingDisk>" -LUN <0> -VM $vm | Update-AzureVM
To attach data disks from an existing .vhd file in blob storage, run this command:
Add-AzureDataDisk -ImportFrom -MediaLocation ` "<https://mystorage.blob.core.windows.net/mycontainer/MyExistingDisk.vhd>" ` -DiskLabel "<main>" -LUN <0> -VM $vm | Update-AzureVM
Create a Windows-based VM
To create a new Windows-based virtual machine in Azure, use the instructions in Use Azure PowerShell to create and preconfigure Windows-based virtual machines. This topic steps you through the creation of an Azure PowerShell command set that creates a Windows-based VM that can be preconfigured:
With Active Directory domain membership.
With additional disks.
As a member of an existing load-balanced set.
With a static IP address.
About the virtual machine agent and extensions for Windows VMs 3/30/2017 • 3 min to read • Edit Online
IMPORTANT Azure has two different deployment models for creating and working with resources: Resource Manager and Classic. This article covers using the Classic deployment model. Microsoft recommends that most new deployments use the Resource Manager model. For information about VM agents and extensions using Resource Manager, see here.
VM extensions can help you:
Modify security and identity features, such as resetting account values and using antimalware
Start, stop, or configure monitoring and diagnostics
Reset or install connectivity features, such as RDP and SSH
Diagnose, monitor, and manage your VMs
There are many other features as well. New VM Extension features are released regularly. This article describes the Azure VM Agents for Windows and Linux, and how they support VM Extension functionality. For a listing of VM Extensions by feature category, see Azure VM Extensions and Features.
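As a companion to the commands above, the following sketch lists the extensions currently configured on a classic VM. It assumes the classic Get-AzureVMExtension cmdlet and the ExtensionName, Publisher, and Version properties on its output; verify both against your Azure PowerShell version, and replace the placeholder names.
# Sketch: show which extensions are configured on an existing classic VM.
$vm = Get-AzureVM -ServiceName "<cloud service name>" -Name "<virtual machine name>"
# List each configured extension with its publisher and version (property names assumed).
Get-AzureVMExtension -VM $vm | Select-Object ExtensionName, Publisher, Version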
Azure VM Agents for Windows and Linux
The Azure Virtual Machines Agent (VM Agent) is a secured, light-weight process that installs, configures, and removes VM extensions on instances of Azure Virtual Machines. The VM Agent acts as the secure local control service for your Azure VM. The extensions that the agent loads provide specific features to increase your productivity using the instance.
Two Azure VM Agents exist, one for Windows VMs and one for Linux VMs. If you want a virtual machine instance to use one or more VM extensions, the instance must have an installed VM Agent.
A virtual machine image created by using the Azure portal and an image from the Marketplace automatically installs a VM Agent in the creation process. If a virtual machine instance lacks a VM Agent, you can install the VM Agent after the virtual machine instance is created. Or, you can install the agent in a custom VM image that you then upload.
IMPORTANT These VM Agents are very lightweight services that enable secured administration of virtual machine instances. There might be cases in which you do not want the VM Agent. If so, be sure to create VMs that do not have the VM Agent installed using the Azure CLI or PowerShell. Although the VM Agent can be removed physically, the behavior of VM Extensions on the instance is undefined. As a result, removing an installed VM Agent is not supported.
The VM Agent is enabled in the following situations:
When you create an instance of a VM by using the Azure portal and selecting an image from the Marketplace.
When you create an instance of a VM by using the New-AzureVM or the New-AzureQuickVM cmdlet. You can create a VM without a VM Agent by adding the -DisableGuestAgent parameter to the Add-AzureProvisioningConfig cmdlet.
When you manually download and install the VM Agent on an existing VM instance, and set the ProvisionGuestAgent value to true. You can use this technique for Windows and Linux agents, by using a PowerShell command or a REST call. (If you do not set the ProvisionGuestAgent value after manually installing the VM Agent, the addition of the VM Agent is not detected properly.) The following code example shows how to do this using PowerShell where the $svc and $name arguments have already been determined:
$vm = Get-AzureVM -ServiceName $svc -Name $name
$vm.VM.ProvisionGuestAgent = $TRUE
Update-AzureVM -Name $name -VM $vm.VM -ServiceName $svc
When you create a VM image that includes an installed VM Agent. Once the image with the VM Agent exists, you can upload that image to Azure. For a Windows VM, download the Windows VM Agent .msi file and install the VM Agent. For a Linux VM, install the VM Agent from the GitHub repository located at https://github.com/Azure/WALinuxAgent. For more information on how to install the VM Agent on Linux, see the Azure Linux VM Agent User Guide.
NOTE In PaaS, the VM Agent is called WindowsAzureGuestAgent, and is always available on Web and Worker Role VMs. (For more information, see Azure Role Architecture.) The VM Agent for Role VMs can now add extensions to the cloud service VMs in the same way that it does for persistent Virtual Machines. The biggest difference between VM Extensions on role VMs and persistent VMs is when the VM extensions are added. With role VMs, extensions are added first to the cloud service, then to the deployments within that cloud service.
Use the Get-AzureServiceAvailableExtension cmdlet to list all available role VM extensions.
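Building on the ProvisionGuestAgent check shown earlier, the following sketch reports whether the VM Agent is provisioned for every VM in a cloud service. It only reuses Get-AzureVM and the ProvisionGuestAgent property from this article; the cloud service name is a placeholder.
# Sketch: report the VM Agent status for each VM in a cloud service.
$svc = "<cloud service name>"
Get-AzureVM -ServiceName $svc | ForEach-Object {
    # ProvisionGuestAgent is True when the VM Agent is installed and registered for the VM.
    "{0}: VM Agent provisioned = {1}" -f $_.Name, $_.VM.ProvisionGuestAgent
}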
Find, Add, Update, and Remove VM Extensions
For details on these tasks, see Add, Find, Update, and Remove Azure VM Extensions.
Manage virtual machine extensions 3/30/2017 • 4 min to read • Edit Online
Describes how to find, add, modify, or remove VM Extensions with either Windows or Linux Virtual Machines on Azure.
IMPORTANT Azure has two different deployment models for creating and working with resources: Resource Manager and Classic. This article covers using the Classic deployment model. Microsoft recommends that most new deployments use the Resource Manager model. For information about VM extensions using the Resource Manager model, see here.
Using VM Extensions
Azure VM Extensions implement behaviors or features that either help other programs work on Azure VMs (for example, the WebDeployForVSDevTest extension allows Visual Studio to Web Deploy solutions on your Azure VM) or provide the ability for you to interact with the VM to support some other behavior (for example, you can use the VM Access extensions from PowerShell, the Azure CLI, and REST clients to reset or modify remote access values on your Azure VM).
IMPORTANT For a complete list of extensions by the features they support, see Azure VM Extensions and Features. Because each VM extension supports a specific feature, exactly what you can and cannot do with an extension depends on the extension. Therefore, before modifying your VM, make sure you have read the documentation for the VM Extension you want to use. Removing some VM Extensions is not supported; others have properties that can be set that change VM behavior radically.
The most common tasks are:
1. Finding Available Extensions
2. Updating Loaded Extensions
3. Adding Extensions
4. Removing Extensions
Find Available Extensions
You can locate extension and extended information using:
PowerShell
Azure Cross-Platform Command Line Interface (Azure CLI)
Service Management REST API
Azure PowerShell
Some extensions have PowerShell cmdlets that are specific to them, which may make their configuration from PowerShell easier; but the following cmdlets work for all VM extensions. You can use the following cmdlets to obtain information about available extensions:
For instances of web roles or worker roles, you can use the Get-AzureServiceAvailableExtension cmdlet.
For instances of Virtual Machines, you can use the Get-AzureVMAvailableExtension cmdlet.
For example, the following code example shows how to list the information for the IaaSDiagnostics extension using PowerShell.
PS C:\> Get-AzureVMAvailableExtension -ExtensionName IaaSDiagnostics
Publisher                   : Microsoft.Azure.Diagnostics
ExtensionName               : IaaSDiagnostics
Version                     : 1.2
Label                       : Microsoft Monitoring Agent Diagnostics
Description                 : Microsoft Monitoring Agent Extension
PublicConfigurationSchema   :
PrivateConfigurationSchema  :
IsInternalExtension         : False
SampleConfig                :
ReplicationCompleted        : True
Eula                        :
PrivacyUri                  :
HomepageUri                 :
IsJsonExtension             : True
DisallowMajorVersionUpgrade : False
SupportedOS                 :
PublishedDate               :
CompanyName                 :
Azure Command Line Interface (Azure CLI)
Some extensions have Azure CLI commands that are specific to them (the Docker VM Extension is one example), which may make their configuration easier; but the following commands work for all VM extensions. You can use the azure vm extension list command to obtain information about available extensions, and use the --json option to display all available information about one or more extensions.
If you do not use an extension name, the command returns a JSON description of all available extensions. For example, the following code example shows how to list the information for the IaaSDiagnostics extension using the Azure CLI azure vm extension list command and uses the –-json option to return complete information. $ azure vm extension list -n IaaSDiagnostics --json [ { "publisher": "Microsoft.Azure.Diagnostics", "name": "IaaSDiagnostics", "version": "1.2", "label": "Microsoft Monitoring Agent Diagnostics", "description": "Microsoft Monitoring Agent Extension", "replicationCompleted": true, "isJsonExtension": true } ] Service Management REST APIs You can use the following REST APIs to obtain information about available extensions: For instances of web roles or worker roles, you can use the List Available Extensions operation. To list the versions of available extensions, you can use List Extension Versions. For instances of Virtual Machines, you can use the List Resource Extensions operation. To list the versions of available extensions, you can use List Resource Extension Versions. Add, Update, or Disable Extensions Extensions can be added when an instance is created or they can be added to a running instance. Extensions can be updated, disabled, or removed. You can perform these actions by using Azure PowerShell cmdlets or by using the Service Management REST API operations. Parameters are required to install and set up some extensions. Public and private parameters are supported for extensions. Azure PowerShell Using Azure PowerShell cmdlets is the easiest way to add and update extensions. When you use the extension cmdlets, most of the configuration of the extension is done for you. At times, you may need to programmatically add an extension. When you need to do this, you must provide the configuration of the extension. You can use the following cmdlets to know whether an extension requires a configuration of public and private parameters: For instances of web roles or worker roles, you can use the Get-AzureServiceAvailableExtension cmdlet. For instances of Virtual Machines, you can use the Get-AzureVMAvailableExtension cmdlet. Service Management REST APIs When you retrieve a listing of available extensions by using the REST APIs, you receive information about how the extension is to be configured. The information that is returned might show parameter information represented by a public schema and private schema. Public parameter values are returned in queries about the instances. Private parameter values are not returned. You can use the following REST APIs to know whether an extension requires a configuration of public and private parameters: For instances of web roles or worker roles, the PublicConfigurationSchema and PrivateConfigurationSchema elements contain the information in the response from the List Available Extensions operation. For instances of Virtual Machines, the PublicConfigurationSchema and PrivateConfigurationSchema elements contain the information in the response from the List Resource Extensions operation. NOTE Extensions can also use configurations that are defined with JSON. When these types of extensions are used, only the SampleConfig element is used. Custom Script Extension for Windows using the classic deployment model 3/30/2017 • 2 min to read • Edit Online IMPORTANT Azure has two different deployment models for creating and working with resources: Resource Manager and Classic. This article covers using the Classic deployment model. 
Microsoft recommends that most new deployments use the Resource Manager model. Learn how to perform these steps using the Resource Manager model. The Custom Script Extension downloads and executes scripts on Azure virtual machines. This extension is useful for post deployment configuration, software installation, or any other configuration / management task. Scripts can be downloaded from Azure storage or GitHub, or provided to the Azure portal at extension run time. The Custom Script extension integrates with Azure Resource Manager templates, and can also be run using the Azure CLI, PowerShell, Azure portal, or the Azure Virtual Machine REST API. This document details how to use the Custom Script Extension using the Azure PowerShell module, Azure Resource Manager templates, and details troubleshooting steps on Windows systems. Prerequisites Operating System The Custom Script Extension for Windows can be run against Windows Server 2008 R2, 2012, 2012 R2, and 2016 releases. Script Location The script needs to be stored in Azure storage, or any other location accessible through a valid URL. Internet Connectivity The Custom Script Extension for Windows requires that the target virtual machine is connected to the internet. Extension schema The following JSON shows the schema for the Custom Script Extension. The extension requires a script location (Azure Storage or other location with valid URL), and a command to execute. If using Azure Storage as the script source, an Azure storage account name and account key is required. These items should be treated as sensitive data and specified in the extensions protected setting configuration. Azure VM extension protected setting data is encrypted, and only decrypted on the target virtual machine. { "name": "config-app", "type": "Microsoft.ClassicCompute/virtualMachines/extensions", "location": "[resourceGroup().location]", "apiVersion": "2015-06-01", "properties": { "publisher": "Microsoft.Compute", "extension": "CustomScriptExtension", "version": "1.8", "parameters": { "public": { "fileUris": "[myScriptLocation]" }, "private": { "commandToExecute": "[myExecutionString]" } } } } Property values NAME VALUE / EXAMPLE apiVersion 2015-06-15 publisher Microsoft.Compute extension CustomScriptExtension typeHandlerVersion 1.8 fileUris (e.g) https://raw.githubusercontent.com/Microsoft/dotnet-coresample-templates/master/dotnet-core-musicwindows/scripts/configure-music-app.ps1 commandToExecute (e.g) powershell -ExecutionPolicy Unrestricted -File configuremusic-app.ps1 Template deployment Azure VM extensions can be deployed with Azure Resource Manager templates. The JSON schema detailed in the previous section can be used in an Azure Resource Manager template to run the Custom Script Extension during an Azure Resource Manager template deployment. A sample template that includes the Custom Script Extension can be found here, GitHub. PowerShell deployment The Set-AzureVMCustomScriptExtension command can be used to add the Custom Script extension to an existing virtual machine. For more information, see Set-AzureRmVMCustomScriptExtension . # create vm object $vm = Get-AzureVM -Name 2016clas -ServiceName 2016clas1313 # set extension Set-AzureVMCustomScriptExtension -VM $vm -FileUri myFileUri -Run 'create-file.ps1' # update vm $vm | Update-AzureVM Troubleshoot and support Troubleshoot Data about the state of extension deployments can be retrieved from the Azure portal, and by using the Azure PowerShell module. 
To see the deployment state of extensions for a given VM, run the following command. Get-AzureVMExtension -ResourceGroupName myResourceGroup -VMName myVM -Name myExtensionName Extension execution output us logged to files found in the following directory on the target virtual machine. C:\WindowsAzure\Logs\Plugins\Microsoft.Compute.CustomScriptExtension The script itself is downloaded into the following directory on the target virtual machine. C:\Packages\Plugins\Microsoft.Compute.CustomScriptExtension\1.*\Downloads Support If you need more help at any point in this article, you can contact the Azure experts on the MSDN Azure and Stack Overflow forums. Alternatively, you can file an Azure support incident. Go to the Azure support site and select Get support. For information about using Azure Support, read the Microsoft Azure support FAQ. Injecting custom data into an Azure virtual machine 3/30/2017 • 2 min to read • Edit Online IMPORTANT Azure has two different deployment models for creating and working with resources: Resource Manager and Classic. This article covers using the Classic deployment model. Microsoft recommends that most new deployments use the Resource Manager model. For information about using the Custom Script Extension with the Resource Manager model, see here. This topic describes how to: Inject data into an Azure virtual machine (VM) when it is being provisioned. Retrieve it for both Windows and Linux. Use special tools available on some systems to detect and handle custom data automatically. NOTE This article describes how custom data can be injected by using a VM created with the Azure Service Management API. To see how to use the Azure Resource Management API, see the example template. Injecting custom data into your Azure virtual machine This feature is currently supported only in the Azure Command-Line Interface. Here we create a custom-data.txt file that contains our data, then inject that in to the VM during provisioning. Although you may use any of the options for the azure vm create command, the following demonstrates one very basic approach: azure vm create <vmname> <vmimage> <username> <password> \ --location "West US" --ssh 22 \ --custom-data ./custom-data.txt Using custom data in the virtual machine If your Azure VM is a Windows-based VM, then the custom data file is saved to %SYSTEMDRIVE%\AzureData\CustomData.bin . Although it was base64-encoded to transfer from the local computer to the new VM, it is automatically decoded and can be opened or used immediately. NOTE If the file exists, it is overwritten. The security on the directory is set to System:Full Control and Administrators:Full Control. If your Azure VM is a Linux-based VM, then the custom data file will be located in one of the following places depending on your distro. The data may be base64-encoded, so you may need to decode the data first: /var/lib/waagent/ovf-env.xml /var/lib/waagent/CustomData /var/lib/cloud/instance/user-data.txt Cloud-init on Azure If your Azure VM is from an Ubuntu or CoreOS image, then you can use CustomData to send a cloud-config to cloud-init. Or if your custom data file is a script, then cloud-init can simply execute it. Ubuntu Cloud Images In most Azure Linux images, you would edit "/etc/waagent.conf" to configure the temporary resource disk and swap file. See Azure Linux Agent user guide for more information. However, on the Ubuntu Cloud Images, you must use cloud-init to configure the resource disk (that is, the "ephemeral" disk) and swap partition. 
See the following page on the Ubuntu wiki for more details: AzureSwapPartitions. Next steps: Using cloud-init For further information, see the cloud-init documentation for Ubuntu. Add Role Service Management REST API Reference Azure Command-line Interface About images for Windows virtual machines 3/27/2017 • 3 min to read • Edit Online IMPORTANT Azure has two different deployment models for creating and working with resources: Resource Manager and Classic. This article covers using the Classic deployment model. Microsoft recommends that most new deployments use the Resource Manager model. For information about finding and using images in the Resource Manager model, see here. Images are used in Azure to provide a new virtual machine with an operating system. An image might also have one or more data disks. Images are available from several sources: Azure offers images in the Marketplace. There are recent versions of Windows Server and distributions of the Linux operating system. Some images also contain applications, such as SQL Server. MSDN Benefit and MSDN Pay-as-You-Go subscribers have access to additional images. The open source community offers images through VM Depot. You also can store and use your own images in Azure, by either capturing an existing Azure virtual machine for use as an image or uploading an image. About VM images and OS images Two types of images can be used in Azure: VM image and OS image. A VM image includes an operating system and all disks attached to a virtual machine when the image is created. A VM image is the newer type of image. Before VM images were introduced, an image in Azure could have only a generalized operating system and no additional disks. A VM image that contains only a generalized operating system is basically the same as the original type of image, the OS image. You can create your own images, based on a virtual machine in Azure, or a virtual machine running elsewhere that you copy and upload. If you want to use an image to create more than one virtual machine, you need to prepare it for use as an image by generalizing it. To create a Windows Server image, run the Sysprep command on the server to generalize it before you upload the .vhd file. For details about Sysprep, see How to Use Sysprep: An Introduction and Sysprep Support for Server Roles. Back up the VM before running Sysprep. Creating a Linux image varies by distribution. Typically, you need to run a set of commands that are specific to the distribution, and run the Azure Linux Agent. Working with images You can use the Azure PowerShell module and the Azure portal to manage the images available to your Azure subscription. The Azure PowerShell module provides more command options, so you can pinpoint exactly what you want to see or do. The Azure portal provides a GUI for many of the everyday administrative tasks. Here are some examples that use the Azure PowerShell module. Get all images: Get-AzureVMImage returns a list of all the images available in your current subscription: your images and those provided by Azure or partners. The resulting list could be large. The next examples show how to get a shorter list. Get image families: Get-AzureVMImage | select ImageFamily gets a list of image families by showing strings ImageFamily property. 
Get all images in a specific family: Get-AzureVMImage | Where-Object {$_.ImageFamily -eq $family}
Find VM Images: Get-AzureVMImage | where {(gm -InputObject $_ -Name DataDiskConfigurations) -ne $null} | Select -Property Label, ImageName
This cmdlet works by filtering the DataDiskConfiguration property, which only applies to VM Images. This example also filters the output to only the label and image name.
Save a generalized image: Save-AzureVMImage -ServiceName "myServiceName" -Name "MyVMtoCapture" -OSState "Generalized" -ImageName "MyVmImage" -ImageLabel "This is my generalized image"
Save a specialized image: Save-AzureVMImage -ServiceName "mySvc2" -Name "MyVMToCapture2" -ImageName "myFirstVMImageSP" -OSState "Specialized" -Verbose
TIP The OSState parameter is required to create a VM image, which includes the operating system disk and attached data disks. If you don't use the parameter, the cmdlet creates an OS image. The value of the parameter indicates whether the image is generalized or specialized, based on whether the operating system disk has been prepared for reuse.
Delete an image: Remove-AzureVMImage -ImageName "MyOldVmImage"
Next Steps
You can also create a Windows machine using the Azure portal.
Sizes for Windows virtual machines in Azure 4/3/2017 • 1 min to read • Edit Online
This article describes the available sizes and options for the Azure virtual machines you can use to run your Windows apps and workloads. It also provides deployment considerations to be aware of when you're planning to use these resources. This article is also available for Linux virtual machines.
IMPORTANT For information about pricing of the various sizes, see Virtual Machines Pricing. To see general limits on Azure VMs, see Azure subscription and service limits, quotas, and constraints. Storage costs are calculated separately based on used pages in the storage account. For details, see Azure Storage Pricing.
Learn more about how Azure compute units (ACU) can help you compare compute performance across Azure SKUs.
General purpose (sizes DSv2, Dv2, DS, D, Av2, A0-7): Balanced CPU-to-memory ratio. Ideal for testing and development, small to medium databases, and low to medium traffic web servers.
Compute optimized (sizes Fs, F): High CPU-to-memory ratio. Good for medium traffic web servers, network appliances, batch processes, and application servers.
Memory optimized (sizes GS, G, DSv2, DS): High memory-to-core ratio. Great for relational database servers, medium to large caches, and in-memory analytics.
Storage optimized (sizes Ls): High disk throughput and IO. Ideal for Big Data, SQL, and NoSQL databases.
GPU (sizes NV, NC): Specialized virtual machines targeted for heavy graphic rendering and video editing. Available with single or multiple GPUs.
High performance compute (sizes H, A8-11): Our fastest and most powerful CPU virtual machines with optional high-throughput network interfaces (RDMA).
Learn more about how Azure compute units (ACU) can help you compare compute performance across Azure SKUs.
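If you want to query these sizes from the classic Azure PowerShell module rather than read them from the list above, one possible sketch follows. It assumes the Get-AzureRoleSize cmdlet and its InstanceSize, Cores, MemoryInMb, and SupportedByVirtualMachines properties, which may differ by module version.
# Sketch: list VM-capable sizes with their core and memory counts (property names assumed).
Get-AzureRoleSize |
    Where-Object { $_.SupportedByVirtualMachines } |
    Sort-Object Cores -Descending |
    Select-Object InstanceSize, Cores, MemoryInMb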
Learn more about the different VM sizes that are available:
General purpose
Compute optimized
Memory optimized
Storage optimized
GPU optimized
High performance compute
Automatically grow and shrink the HPC Pack cluster resources in Azure according to the cluster workload 3/24/2017 • 11 min to read • Edit Online
If you deploy Azure "burst" nodes in your HPC Pack cluster, or you create an HPC Pack cluster in Azure VMs, you may want a way to automatically grow or shrink the cluster resources such as nodes or cores according to the workload on the cluster. Scaling the cluster resources in this way allows you to use your Azure resources more efficiently and control their costs.
This article shows you two ways that HPC Pack provides to autoscale compute resources:
The HPC Pack cluster property AutoGrowShrink
The AzureAutoGrowShrink.ps1 HPC PowerShell script
NOTE Azure has two different deployment models for creating and working with resources: Resource Manager and classic. This article covers using both models, but Microsoft recommends that most new deployments use the Resource Manager model. Currently you can only automatically grow and shrink HPC Pack compute nodes that are running a Windows Server operating system.
Set the AutoGrowShrink cluster property
Prerequisites
HPC Pack 2012 R2 Update 2 or later cluster - The cluster head node can be deployed either on-premises or in an Azure VM. See Set up a hybrid cluster with HPC Pack to get started with an on-premises head node and Azure "burst" nodes. See the HPC Pack IaaS deployment script to quickly deploy an HPC Pack cluster in Azure VMs.
For a cluster with a head node in Azure (Resource Manager deployment model) - Starting in HPC Pack 2016, certificate authentication in an Azure Active Directory application is used for automatically growing and shrinking cluster VMs deployed using Azure Resource Manager. Configure a certificate as follows:
1. After cluster deployment, connect by Remote Desktop to one head node.
2. Upload the certificate (PFX format with private key) to each head node and install to Cert:\LocalMachine\My and Cert:\LocalMachine\Root.
3. Start Azure PowerShell as an administrator and run the following commands on one head node:
cd $env:CCP_HOME\bin
Login-AzureRmAccount
If your account is in more than one Azure Active Directory tenant or Azure subscription, you can run the following command to select the correct tenant and subscription:
Login-AzureRMAccount -TenantId <TenantId> -SubscriptionId <subscriptionId>
Run the following command to view the currently selected tenant and subscription:
Get-AzureRMContext
4. Run the following script:
.\ConfigARMAutoGrowShrinkCert.ps1 -DisplayName "YourHpcPackAppName" -HomePage "https://YourHpcPackAppHomePage" -IdentifierUri "https://YourHpcPackAppUri" -CertificateThumbprint "XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX" -TenantId xxxxxxxx-xxxxx-xxxxx-xxxxxxxxxxxxxxxxx
where
DisplayName - Azure Active Directory application display name. If the application does not exist, it is created in Azure Active Directory.
HomePage - The home page of the application. You can configure a dummy URL, as in the preceding example.
IdentifierUri - Identifier of the application. You can configure a dummy URL, as in the preceding example.
CertificateThumbprint - Thumbprint of the certificate you installed on the head node in Step 2.
TenantId - Tenant ID of your Azure Active Directory.
You can get the Tenant ID from the Azure Active Directory portal Properties page. For more details about ConfigARMAutoGrowShrinkCert.ps1, run Get-Help .\ConfigARMAutoGrowShrinkCert.ps1 -Detailed.
For a cluster with a head node in Azure (classic deployment model) - If you use the HPC Pack IaaS deployment script to create the cluster in the classic deployment model, enable the AutoGrowShrink cluster property by setting the AutoGrowShrink option in the cluster configuration file. For details, see the documentation accompanying the script download. Alternatively, enable the AutoGrowShrink cluster property after you deploy the cluster by using HPC PowerShell commands described in the following section. To prepare for this, first complete the following steps:
1. Configure an Azure management certificate on the head node and in the Azure subscription. For a test deployment, you can use the Default Microsoft HPC Azure self-signed certificate that HPC Pack installs on the head node, and then upload that certificate to your Azure subscription. For options and steps, see the TechNet Library guidance.
2. Run regedit on the head node, go to HKLM\SOFTWARE\Microsoft\HPC\IaasInfo, and add a string value. Set the Value name to "ThumbPrint", and Value data to the thumbprint of the certificate in Step 1.
HPC PowerShell commands to set the AutoGrowShrink property
Following are sample HPC PowerShell commands to set AutoGrowShrink and to tune its behavior with additional parameters. See AutoGrowShrink parameters later in this article for the complete list of settings. To run these commands, start HPC PowerShell on the cluster head node as an administrator.
To enable the AutoGrowShrink property
Set-HpcClusterProperty -EnableGrowShrink 1
To disable the AutoGrowShrink property
Set-HpcClusterProperty -EnableGrowShrink 0
To change the grow interval in minutes
Set-HpcClusterProperty -GrowInterval <interval>
To change the shrink interval in minutes
Set-HpcClusterProperty -ShrinkInterval <interval>
To view the current configuration of AutoGrowShrink
Get-HpcClusterProperty -AutoGrowShrink
To exclude node groups from AutoGrowShrink
Set-HpcClusterProperty -ExcludeNodeGroups <group1,group2,group3>
NOTE This parameter is supported starting in HPC Pack 2016.
AutoGrowShrink parameters
The following are AutoGrowShrink parameters that you can modify by using the Set-HpcClusterProperty command.
EnableGrowShrink - Switch to enable or disable the AutoGrowShrink property.
ParamSweepTasksPerCore - Number of parametric sweep tasks to grow one core. The default is to grow one core per task. NOTE HPC Pack QFE KB3134307 changes ParamSweepTasksPerCore to TasksPerResourceUnit. It is based on the job resource type and can be node, socket, or core.
GrowThreshold - Threshold of queued tasks to trigger automatic growth. The default is 1, which means that if there are one or more tasks in the queued state, automatically grow nodes.
GrowInterval - Interval in minutes to trigger automatic growth. The default interval is 5 minutes.
ShrinkInterval - Interval in minutes to trigger automatic shrinking. The default interval is 5 minutes.
ShrinkIdleTimes - Number of continuous checks to shrink to indicate the nodes are idle. The default is 3 times. For example, if the ShrinkInterval is 5 minutes, HPC Pack checks whether the node is idle every 5 minutes. If the nodes are in the idle state after 3 continuous checks (15 minutes), then HPC Pack shrinks that node.
ExtraNodesGrowRatio - Additional percentage of nodes to grow for Message Passing Interface (MPI) jobs. The default value is 1, which means that HPC Pack grows nodes 1% for MPI jobs. GrowByMin - Switch to indicate whether the autogrow policy is based on the minimum resources required for the job. The default is false, which means that HPC Pack grows nodes for jobs based on the maximum resources required for the jobs. SoaJobGrowThreshold - Threshold of incoming SOA requests to trigger the automatic grow process. The default value is 50000. NOTE This parameter is supported starting in HPC Pack 2012 R2 Update 3. SoaRequestsPerCore -Number of incoming SOA requests to grow one core. The default value is 20000. NOTE This parameter is supported starting in HPC Pack 2012 R2 Update 3. ExcludeNodeGroups – Nodes in the specified node groups do not automatically grow and shrink. NOTE This parameter is supported starting in HPC Pack 2016. MPI example By default HPC Pack grows 1% extra nodes for MPI jobs (ExtraNodesGrowRatio is set to 1). The reason is that MPI may require multiple nodes, and the job can only run when all nodes are ready. When Azure starts nodes, occasionally one node might need more time to start than others, causing other nodes to be idle while waiting for that node to get ready. By growing extra nodes, HPC Pack reduces this resource waiting time, and potentially saves costs. To increase the percentage of extra nodes for MPI jobs (for example, to 10%), run a command similar to Set-HpcClusterProperty -ExtraNodesGrowRatio 10 SOA example By default, SoaJobGrowThreshold is set to 50000 and SoaRequestsPerCore is set to 200000. If you submit one SOA job with 70000 requests, there is one queued task and incoming requests are 70000. In this case HPC Pack grows 1 core for the queued task, and for incoming requests, grows (70000 - 50000)/20000 = 1 core, so in total grows 2 cores for this SOA job. Run the AzureAutoGrowShrink.ps1 script Prerequisites HPC Pack 2012 R2 Update 1 or later cluster - The AzureAutoGrowShrink.ps1 script is installed in the %CCP_HOME%bin folder. The cluster head node can be deployed either on-premises or in an Azure VM. See Set up a hybrid cluster with HPC Pack to get started with an on-premises head node and Azure "burst" nodes. See the HPC Pack IaaS deployment script to quickly deploy an HPC Pack cluster in Azure VMs, or use an Azure quickstart template. Azure PowerShell 1.4.0 - The script currently depends on this specific version of Azure PowerShell. For a cluster with Azure burst nodes - Run the script on a client computer where HPC Pack is installed, or on the head node. If running on a client computer, ensure that you set the variable $env:CCP_SCHEDULER to point to the head node. The Azure “burst” nodes must be added to the cluster, but they may be in the Not-Deployed state. For a cluster deployed in Azure VMs (Resource Manager deployment model) - For a cluster of Azure VMs deployed in the Resource Manager deployment model, the script supports two methods for Azure authentication: sign in to your Azure account to run the script every time (by running Login-AzureRmAccount , or configure a service principal to authenticate with a certificate. HPC Pack provides the script ConfigARMAutoGrowShrinkCert.ps to create a service principal with certificate. The script creates an Azure Active Directory (Azure AD) application and a service principal, and assigns the Contributor role to the service principal. 
To run the script, start Azure PowerShell as administrator and run the following commands: cd $env:CCP_HOME\bin Login-AzureRmAccount .\ConfigARMAutoGrowShrinkCert.ps1 -DisplayName “YourHpcPackAppName” -HomePage "https://YourHpcPackAppHomePage" -IdentifierUri "https://YourHpcPackAppUri" -PfxFile "d:\yourcertificate.pfx" For more details about ConfigARMAutoGrowShrinkCert.ps1, run Get-Help .\ConfigARMAutoGrowShrinkCert.ps1 -Detailed , For a cluster deployed in Azure VMs (classic deployment model) - Run the script on the head node VM, because it depends on the Start-HpcIaaSNode.ps1 and Stop-HpcIaaSNode.ps1 scripts that are installed there. Those scripts additionally require an Azure management certificate or publish settings file (see Manage compute nodes in an HPC Pack cluster in Azure). Make sure all the compute node VMs you need are already added to the cluster. They may be in the Stopped state. Syntax AzureAutoGrowShrink.ps1 [-NodeTemplates <String[]>] [-JobTemplates <String[]>] [-NodeType <String>] -NumOfActiveQueuedTasksPerNodeToGrow <Single> [-NumOfActiveQueuedTasksToGrowThreshold <Int32>] [-NumOfInitialNodesToGrow <Int32>] [-GrowCheckIntervalMins <Int32>] [-ShrinkCheckIntervalMins <Int32>] [-ShrinkCheckIdleTimes <Int32>] [-ExtraNodesGrowRatio <Int32>] [-ArgFile <String>] [-LogFilePrefix <String>] [<CommonParameters>] AzureAutoGrowShrink.ps1 [-NodeTemplates <String[]>] [-JobTemplates <String[]>] [-NodeType <String>] -NumOfQueuedJobsPerNodeToGrow <Single> [-NumOfQueuedJobsToGrowThreshold <Int32>] [-NumOfInitialNodesToGrow <Int32>] [-GrowCheckIntervalMins <Int32>] [-ShrinkCheckIntervalMins <Int32>] [-ShrinkCheckIdleTimes <Int32>] [-ExtraNodesGrowRatio <Int32>] [-ArgFile <String>] [-LogFilePrefix <String>] [<CommonParameters>] AzureAutoGrowShrink.ps1 -UseLastConfigurations [-ArgFile <String>] [-LogFilePrefix <String>] [<CommonParameters>] Parameters NodeTemplates - Names of the node templates to define the scope for the nodes to grow and shrink. If not specified (the default value is @()), all nodes in the AzureNodes node group are in scope when NodeType has a value of AzureNodes, and all nodes in the ComputeNodes node group are in scope when NodeType has a value of ComputeNodes. JobTemplates - Names of the job templates to define the scope for the nodes to grow. NodeType - The type of node to grow and shrink. Supported values are: AzureNodes – for Azure PaaS (burst) nodes in an on-premises or Azure IaaS cluster. ComputeNodes - for compute node VMs only in an Azure IaaS cluster. NumOfQueuedJobsPerNodeToGrow - Number of queued jobs required to grow one node. NumOfQueuedJobsToGrowThreshold - The threshold number of queued jobs to start the grow process. NumOfActiveQueuedTasksPerNodeToGrow - The number of active queued tasks required to grow one node. If NumOfQueuedJobsPerNodeToGrow is specified with a value greater than 0, this parameter is ignored. NumOfActiveQueuedTasksToGrowThreshold - The threshold number of active queued tasks to start the grow process. NumOfInitialNodesToGrow - The initial minimum number of nodes to grow if all the nodes in scope are NotDeployed or Stopped (Deallocated). GrowCheckIntervalMins - The interval in minutes between checks to grow. ShrinkCheckIntervalMins - The interval in minutes between checks to shrink. ShrinkCheckIdleTimes - The number of continuous shrink checks (separated by ShrinkCheckIntervalMins) to indicate the nodes are idle. UseLastConfigurations - The previous configurations saved in the argument file. 
ArgFile- The name of the argument file used to save and update the configurations to run the script. LogFilePrefix - The prefix name of the log file. You can specify a path. By default the log is written to the current working directory. Example 1 The following example configures the Azure burst nodes deployed with the Default AzureNode Template to grow and shrink automatically. If all the nodes are initially in the Not-Deployed state, at least 3 nodes are started. If the number of queued jobs exceeds 8, the script starts nodes until their number exceeds the ratio of queued jobs to NumOfQueuedJobsPerNodeToGrow. If a node is found to be idle in 3 consecutive idle times, it is stopped. .\AzureAutoGrowShrink.ps1 -NodeTemplates @('Default AzureNode Template') -NodeType AzureNodes -NumOfQueuedJobsPerNodeToGrow 5 -NumOfQueuedJobsToGrowThreshold 8 -NumOfInitialNodesToGrow 3 -GrowCheckIntervalMins 1 -ShrinkCheckIntervalMins 1 -ShrinkCheckIdleTimes 3 Example 2 The following example configures the Azure compute node VMs deployed with the Default ComputeNode Template to grow and shrink automatically. The jobs configured by the Default job template define the scope of the workload on the cluster. If all the nodes are initially stopped, at least 5 nodes are started. If the number of active queued tasks exceeds 15, the script starts nodes until their number exceeds the ratio of active queued tasks to NumOfActiveQueuedTasksPerNodeToGrow. If a node is found to be idle in 10 consecutive idle times, it is stopped. .\AzureAutoGrowShrink.ps1 -NodeTemplates 'Default ComputeNode Template' -JobTemplates 'Default' -NodeType ComputeNodes -NumOfActiveQueuedTasksPerNodeToGrow 10 -NumOfActiveQueuedTasksToGrowThreshold 15 NumOfInitialNodesToGrow 5 -GrowCheckIntervalMins 1 -ShrinkCheckIntervalMins 1 -ShrinkCheckIdleTimes 10 -ArgFile 'IaaSVMComputeNodes_Arg.xml' -LogFilePrefix 'IaaSVMComputeNodes_log' Manage the number and availability of compute nodes in an HPC Pack cluster in Azure 3/27/2017 • 4 min to read • Edit Online If you created an HPC Pack 2012 R2 cluster in Azure VMs, you might want ways to easily add, remove, start (provision), or stop (deprovision) some compute node VMs in the cluster. To do these tasks, run Azure PowerShell scripts that are installed on the head node VM. These scripts help you control the number and availability of your HPC Pack cluster resources so you can control costs. IMPORTANT This article applies only to HPC Pack 2012 R2 clusters in Azure created using the classic deployment model. Microsoft recommends that most new deployments use the Resource Manager model. In addition, the PowerShell scripts described in this article are not available in HPC Pack 2016. Prerequisites HPC Pack 2012 R2 cluster in Azure VMs: Create an HPC Pack 2012 R2 cluster in the classic deployment model. For example, you can automate the deployment by using the HPC Pack 2012 R2 VM image in the Azure Marketplace and an Azure PowerShell script. For information and prerequisites, see Create an HPC Cluster with the HPC Pack IaaS deployment script. After deployment, find the node management scripts in the %CCP_HOME%bin folder on the head node. Run each of the scripts as an administrator. Azure publish settings file or management certificate: You need to do one of the following on the head node: Import the Azure publish settings file. 
To do this, run the following Azure PowerShell cmdlets on the head node: Get-AzurePublishSettingsFile Import-AzurePublishSettingsFile -PublishSettingsFile <publish settings file> Configure the Azure management certificate on the head node. If you have the .cer file, import it in the CurrentUser\My certificate store and then run the following Azure PowerShell cmdlet for your Azure environment (either AzureCloud or AzureChinaCloud): Set-AzureSubscription -SubscriptionName <Sub Name> -SubscriptionId <Sub ID> -Certificate (Get-Item Cert:\CurrentUser\My\<Cert Thumbprint>) -Environment <AzureCloud | AzureChinaCloud> Add compute node VMs Add compute nodes with the Add-HpcIaaSNode.ps1 script. Syntax Add-HPCIaaSNode.ps1 [-ServiceName] <String> [-ImageName] <String> [-Quantity] <Int32> [-InstanceSize] <String> [-DomainUserName] <String> [[-DomainUserPassword] <String>] [[-NodeNameSeries] <String>] [<CommonParameters>] Parameters ServiceName: Name of the cloud service that new compute node VMs are added to. ImageName: Azure VM image name, which can be obtained through the Azure classic portal or the Azure PowerShell cmdlet Get-AzureVMImage. The image must meet the following requirements: 1. A Windows operating system must be installed. 2. HPC Pack must be installed in the compute node role. 3. The image must be a private image in the User category, not a public Azure VM image. Quantity: Number of compute node VMs to be added. InstanceSize: Size of the compute node VMs. DomainUserName: Domain user name, which is used to join the new VMs to the domain. DomainUserPassword: Password of the domain user. NodeNameSeries (optional): Naming pattern for the compute nodes. The format must be <Root_Name>%<Start_Number>%. For example, MyCN%10% means a series of compute node names starting from MyCN11. If not specified, the script uses the configured node naming series in the HPC cluster. Example The following example adds 20 size Large compute node VMs in the cloud service hpcservice1, based on the VM image hpccnimage1. Add-HPCIaaSNode.ps1 -ServiceName hpcservice1 -ImageName hpccnimage1 -Quantity 20 -InstanceSize Large -DomainUserName <username> -DomainUserPassword <password> Remove compute node VMs Remove compute nodes with the Remove-HpcIaaSNode.ps1 script. Syntax Remove-HPCIaaSNode.ps1 -Name <String[]> [-DeleteVHD] [-Force] [-WhatIf] [-Confirm] [<CommonParameters>] Remove-HPCIaaSNode.ps1 -Node <Object> [-DeleteVHD] [-Force] [-Confirm] [<CommonParameters>] Parameters Name: Names of cluster nodes to be removed. Wildcards are supported. The parameter set name is Name. You can't specify both the Name and Node parameters. Node: The HpcNode object for the nodes to be removed, which can be obtained through the HPC PowerShell cmdlet Get-HpcNode. The parameter set name is Node. You can't specify both the Name and Node parameters. DeleteVHD (optional): Setting to delete the associated disks for the VMs that are removed. Force (optional): Setting to force HPC nodes offline before removing them. Confirm (optional): Prompt for confirmation before executing the command. WhatIf: Setting to describe what would happen if you executed the command without actually executing the command. Example The following example forces offline the nodes with names beginning with HPCNodeCN- and then removes the nodes and their associated disks. Remove-HPCIaaSNode.ps1 -Name HPCNodeCN-* -DeleteVHD -Force Start compute node VMs Start compute nodes with the Start-HpcIaaSNode.ps1 script.
Syntax Start-HPCIaaSNode.ps1 -Name <String[]> [<CommonParameters>] Start-HPCIaaSNode.ps1 -Node <Object> [<CommonParameters>] Parameters Name: Names of the cluster nodes to be started. Wildcards are supported. The parameter set name is Name. You cannot specify both the Name and Node parameters. Node: The HpcNode object for the nodes to be started, which can be obtained through the HPC PowerShell cmdlet Get-HpcNode. The parameter set name is Node. You cannot specify both the Name and Node parameters. Example The following example starts nodes with names beginning with HPCNodeCN-. Start-HPCIaaSNode.ps1 -Name HPCNodeCN-* Stop compute node VMs Stop compute nodes with the Stop-HpcIaaSNode.ps1 script. Syntax Stop-HPCIaaSNode.ps1 -Name <String[]> [-Force] [<CommonParameters>] Stop-HPCIaaSNode.ps1 -Node <Object> [-Force] [<CommonParameters>] Parameters Name: Names of the cluster nodes to be stopped. Wildcards are supported. The parameter set name is Name. You cannot specify both the Name and Node parameters. Node: The HpcNode object for the nodes to be stopped, which can be obtained through the HPC PowerShell cmdlet Get-HpcNode. The parameter set name is Node. You cannot specify both the Name and Node parameters. Force (optional): Setting to force HPC nodes offline before stopping them. Example The following example forces offline nodes with names beginning with HPCNodeCN- and then stops the nodes. Stop-HPCIaaSNode.ps1 -Name HPCNodeCN-* -Force Next steps To automatically grow or shrink the cluster nodes according to the current workload of jobs and tasks on the cluster, see Automatically grow and shrink the HPC Pack cluster resources in Azure according to the cluster workload. Create a Windows high-performance computing (HPC) cluster with the HPC Pack IaaS deployment script 3/27/2017 • 9 min to read • Edit Online Run the HPC Pack IaaS deployment PowerShell script to deploy a complete HPC Pack 2012 R2 cluster for Windows workloads in Azure virtual machines. The cluster consists of an Active Directory-joined head node running Windows Server and Microsoft HPC Pack, and additional Windows compute resources you specify. If you want to deploy an HPC Pack cluster in Azure for Linux workloads, see Create a Linux HPC cluster with the HPC Pack IaaS deployment script. You can also use an Azure Resource Manager template to deploy an HPC Pack cluster. For examples, see Create an HPC cluster and Create an HPC cluster with a custom compute node image. IMPORTANT The PowerShell script described in this article creates a Microsoft HPC Pack 2012 R2 cluster in Azure using the classic deployment model. Microsoft recommends that most new deployments use the Resource Manager model. In addition, the script described in this article does not support HPC Pack 2016. Depending on your environment and choices, the script can create all the cluster infrastructure, including the Azure virtual network, storage accounts, cloud services, domain controller, remote or local SQL databases, head node, and additional cluster nodes. Alternatively, the script can use pre-existing Azure infrastructure and create only the HPC cluster nodes. For background information about planning an HPC Pack cluster, see the Product Evaluation and Planning and Getting Started content in the HPC Pack 2012 R2 TechNet Library. Prerequisites Azure subscription: You can use a subscription in either the Azure Global or Azure China service. Your subscription limits affect the number and type of cluster nodes you can deploy.
For information, see Azure subscription and service limits, quotas, and constraints. Windows client computer with Azure PowerShell 0.8.10 or later installed and configured: See Get started with Azure PowerShell for installation instructions and steps to connect to your Azure subscription. HPC Pack IaaS deployment script: Download and unpack the latest version of the script from the Microsoft Download Center. Check the version of the script by running New-HPCIaaSCluster.ps1 -Version. This article is based on version 4.5.2 of the script. Script configuration file: Create an XML file that the script uses to configure the HPC cluster. For information and examples, see sections later in this article and the file Manual.rtf that accompanies the deployment script. Syntax New-HPCIaaSCluster.ps1 [-ConfigFile] <String> [-AdminUserName] <String> [[-AdminPassword] <String>] [[-HPCImageName] <String>] [[-LogFile] <String>] [-Force] [-NoCleanOnFailure] [-PSSessionSkipCACheck] [<CommonParameters>] NOTE Run the script as an administrator. Parameters ConfigFile: Specifies the file path of the configuration file to describe the HPC cluster. See more about the configuration file in this topic, or in the file Manual.rtf in the folder containing the script. AdminUserName: Specifies the user name. If the domain forest is created by the script, this becomes the local administrator user name for all VMs and the domain administrator name. If the domain forest already exists, this specifies the domain user as the local administrator user name to install HPC Pack. AdminPassword: Specifies the administrator's password. If not specified in the command line, the script prompts you to input the password. HPCImageName (optional): Specifies the HPC Pack VM image name used to deploy the HPC cluster. It must be a Microsoft-provided HPC Pack image from the Azure Marketplace. If not specified (usually recommended), the script chooses the latest published HPC Pack 2012 R2 image. The latest image is based on Windows Server 2012 R2 Datacenter with HPC Pack 2012 R2 Update 3 installed. NOTE Deployment fails if you don't specify a valid HPC Pack image. LogFile (optional): Specifies the deployment log file path. If not specified, the script creates a log file in the temp directory of the computer running the script. Force (optional): Suppresses all the confirmation prompts. NoCleanOnFailure (optional): Specifies that the Azure VMs that are not successfully deployed are not removed. Remove these VMs manually before rerunning the script to continue the deployment, or the deployment may fail. PSSessionSkipCACheck (optional): For every cloud service with VMs deployed by this script, a self-signed certificate is automatically generated by Azure, and all the VMs in the cloud service use this certificate as the default Windows Remote Management (WinRM) certificate. To deploy HPC features in these Azure VMs, the script by default temporarily installs these certificates in the Local Computer\Trusted Root Certification Authorities store of the client computer to suppress the "not trusted CA" security error during script execution. The certificates are removed when the script finishes. If this parameter is specified, the certificates are not installed in the client computer, and the security warning is suppressed. IMPORTANT This parameter is not recommended for production deployments.
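If a deployment is interrupted before the script can clean up, you may want to review what it temporarily added to the client computer's certificate store. The following is a minimal sketch, not part of the deployment script: it assumes the auto-generated WinRM certificates carry the cloud service's cloudapp.net name in their subject, so verify what you see before removing anything.

# Review Trusted Root certificates that look like classic cloud service WinRM
# certificates (assumption: the subject contains a *.cloudapp.net name).
Get-ChildItem Cert:\LocalMachine\Root |
    Where-Object { $_.Subject -like "*cloudapp.net*" } |
    Select-Object Subject, Thumbprint, NotAfter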
Example The following example creates an HPC Pack cluster using the configuration file MyConfigFile.xml, and specifies administrator credentials for installing the cluster. .\New-HPCIaaSCluster.ps1 -ConfigFile MyConfigFile.xml -AdminUserName <username> -AdminPassword <password> Additional considerations The script can optionally enable job submission through the HPC Pack web portal or the HPC Pack REST API. The script can optionally run custom pre- and post-configuration scripts on the head node if you want to install additional software or configure other settings. Configuration file The configuration file for the deployment script is an XML file. The schema file HPCIaaSClusterConfig.xsd is in the HPC Pack IaaS deployment script folder. IaaSClusterConfig is the root element of the configuration file, which contains the child elements described in detail in the file Manual.rtf in the deployment script folder. Example configuration files In the following examples, substitute your own values for your subscription Id or name and the account and service names. Example 1 The following configuration file deploys an HPC Pack cluster that has a head node with local databases and five compute nodes running the Windows Server 2012 R2 operating system. All the cloud services are created directly in the West US location. The head node acts as the domain controller of the domain forest. <?xml version="1.0" encoding="utf-8" ?> <IaaSClusterConfig> <Subscription> <SubscriptionId>08701940-C02E-452F-B0B1-39D50119F267</SubscriptionId> <StorageAccount>mystorageaccount</StorageAccount> </Subscription> <Location>West US</Location> <VNet> <VNetName>MyVNet</VNetName> <SubnetName>Subnet-1</SubnetName> </VNet> <Domain> <DCOption>HeadNodeAsDC</DCOption> <DomainFQDN>hpc.local</DomainFQDN> </Domain> <Database> <DBOption>LocalDB</DBOption> </Database> <HeadNode> <VMName>MyHeadNode</VMName> <ServiceName>MyHPCService</ServiceName> <VMSize>ExtraLarge</VMSize> </HeadNode> <ComputeNodes> <VMNamePattern>MyHPCCN-%1000%</VMNamePattern> <ServiceName>MyHPCCNService</ServiceName> <VMSize>Medium</VMSize> <NodeCount>5</NodeCount> <OSVersion>WindowsServer2012R2</OSVersion> </ComputeNodes> </IaaSClusterConfig> Example 2 The following configuration file deploys an HPC Pack cluster in an existing domain forest. The cluster has 1 head node with local databases and 12 compute nodes with the BGInfo VM extension applied. Automatic installation of Windows updates is disabled for all the VMs in the domain forest. All the cloud services are created directly in the East Asia location. The compute nodes are created in three cloud services and three storage accounts: MyHPCCN-0001 to MyHPCCN-0005 in MyHPCCNService01 and mycnstorage01; MyHPCCN-0006 to MyHPCCN-0010 in MyHPCCNService02 and mycnstorage02; and MyHPCCN-0011 to MyHPCCN-0012 in MyHPCCNService03 and mycnstorage03. The compute nodes are created from an existing private image captured from a compute node. The auto grow and shrink service is enabled with default grow and shrink intervals.
<?xml version="1.0" encoding="utf-8" ?> <IaaSClusterConfig> <Subscription> <SubscriptionName>Subscription-1</SubscriptionName> <StorageAccount>mystorageaccount</StorageAccount> </Subscription> <Location>East Asia</Location> <VNet> <VNetName>MyVNet</VNetName> <SubnetName>Subnet-1</SubnetName> </VNet> <Domain> <DCOption>NewDC</DCOption> <DomainFQDN>hpc.local</DomainFQDN> <DomainController> <VMName>MyDCServer</VMName> <ServiceName>MyHPCService</ServiceName> <VMSize>Large</VMSize> </DomainController> <NoWindowsAutoUpdate /> </Domain> <Database> <DBOption>LocalDB</DBOption> </Database> <HeadNode> <VMName>MyHeadNode</VMName> <ServiceName>MyHPCService</ServiceName> <VMSize>ExtraLarge</VMSize> </HeadNode> <Certificates> <Certificate> <Id>1</Id> <PfxFile>d:\mytestcert1.pfx</PfxFile> <Password>MyPsw!!2</Password> </Certificate> </Certificates> <ComputeNodes> <VMNamePattern>MyHPCCN-%0001%</VMNamePattern> <ServiceNamePattern>MyHPCCNService%01%</ServiceNamePattern> <MaxNodeCountPerService>5</MaxNodeCountPerService> <StorageAccountNamePattern>mycnstorage%01%</StorageAccountNamePattern> <VMSize>Medium</VMSize> <NodeCount>12</NodeCount> <ImageName HPCPackInstalled=”true”>MyHPCComputeNodeImage</ImageName> <VMExtensions> <VMExtension> <ExtensionName>BGInfo</ExtensionName> <Publisher>Microsoft.Compute</Publisher> <Version>1.*</Version> </VMExtension> </VMExtensions> </ComputeNodes> <AutoGrowShrink> <CertificateId>1</CertificateId> </AutoGrowShrink> </IaaSClusterConfig> Example 3 The following configuration file deploys an HPC Pack cluster in an existing domain forest. The cluster contains one head node, one database server with a 500 GB data disk, two broker nodes running the Windows Server 2012 R2 operating system, and five compute nodes running the Windows Server 2012 R2 operating system. The cloud service MyHPCCNService is created in the affinity group MyIBAffinityGroup, and the other cloud services are created in the affinity group MyAffinityGroup. The HPC Job Scheduler REST API and HPC web portal are enabled on the head node. <?xml version="1.0" encoding="utf-8" ?> <IaaSClusterConfig> <Subscription> <SubscriptionName>Subscription-1</SubscriptionName> <StorageAccount>mystorageaccount</StorageAccount> </Subscription> <AffinityGroup>MyAffinityGroup</AffinityGroup> <Location>East Asia</Location> <VNet> <VNetName>MyVNet</VNetName> <SubnetName>Subnet-1</SubnetName> </VNet> <Domain> <DCOption>ExistingDC</DCOption> <DomainFQDN>hpc.local</DomainFQDN> </Domain> <Database> <DBOption>NewRemoteDB</DBOption> <DBVersion>SQLServer2014_Enterprise</DBVersion> <DBServer> <VMName>MyDBServer</VMName> <ServiceName>MyHPCService</ServiceName> <VMSize>ExtraLarge</VMSize> <DataDiskSizeInGB>500</DataDiskSizeInGB> </DBServer> </Database> <HeadNode> <VMName>MyHeadNode</VMName> <ServiceName>MyHPCService</ServiceName> <VMSize>ExtraLarge</VMSize> <EnableRESTAPI /> <EnableWebPortal /> </HeadNode> <ComputeNodes> <VMNamePattern>MyHPCCN-%0000%</VMNamePattern> <ServiceName>MyHPCCNService</ServiceName> <VMSize>A8</VMSize> <NodeCount>5</NodeCount> <AffinityGroup>MyIBAffinityGroup</AffinityGroup> </ComputeNodes> <BrokerNodes> <VMNamePattern>MyHPCBN-%0000%</VMNamePattern> <ServiceName>MyHPCBNService</ServiceName> <VMSize>Medium</VMSize> <NodeCount>2</NodeCount> </BrokerNodes> </IaaSClusterConfig> Example 4 The following configuration file deploys an HPC Pack cluster in an existing domain forest. 
The cluster has two head node with local databases, two Azure node templates are created, and three size Medium Azure nodes are created for Azure node template AzureTemplate1. A script file runs on the head node after the head node is configured. <?xml version="1.0" encoding="utf-8" ?> <IaaSClusterConfig> <Subscription> <SubscriptionName>Subscription-1</SubscriptionName> <StorageAccount>mystorageaccount</StorageAccount> </Subscription> <AffinityGroup>MyAffinityGroup</AffinityGroup> <Location>East Asia</Location> <VNet> <VNetName>MyVNet</VNetName> <SubnetName>Subnet-1</SubnetName> </VNet> <Domain> <DCOption>ExistingDC</DCOption> <DomainFQDN>hpc.local</DomainFQDN> </Domain> <Database> <DBOption>LocalDB</DBOption> </Database> <HeadNode> <VMName>MyHeadNode</VMName> <ServiceName>MyHPCService</ServiceName> <VMSize>ExtraLarge</VMSize> <PostConfigScript>c:\MyHNPostActions.ps1</PostConfigScript> </HeadNode> <Certificates> <Certificate> <Id>1</Id> <PfxFile>d:\mytestcert1.pfx</PfxFile> <Password>MyPsw!!2</Password> </Certificate> <Certificate> <Id>2</Id> <PfxFile>d:\mytestcert2.pfx</PfxFile> </Certificate> </Certificates> <AzureBurst> <AzureNodeTemplate> <TemplateName>AzureTemplate1</TemplateName> <SubscriptionId>bb9252ba-831f-4c9d-ae14-9a38e6da8ee4</SubscriptionId> <CertificateId>1</CertificateId> <ServiceName>mytestsvc1</ServiceName> <StorageAccount>myteststorage1</StorageAccount> <NodeCount>3</NodeCount> <RoleSize>Medium</RoleSize> </AzureNodeTemplate> <AzureNodeTemplate> <TemplateName>AzureTemplate2</TemplateName> <SubscriptionId>ad4b9f9f-05f2-4c74-a83f-f2eb73000e0b</SubscriptionId> <CertificateId>1</CertificateId> <ServiceName>mytestsvc2</ServiceName> <StorageAccount>myteststorage2</StorageAccount> <Proxy> <UsesStaticProxyCount>false</UsesStaticProxyCount> <ProxyRatio>100</ProxyRatio> <ProxyRatioBase>400</ProxyRatioBase> </Proxy> <OSVersion>WindowsServer2012</OSVersion> </AzureNodeTemplate> </AzureBurst> </IaaSClusterConfig> Troubleshooting “VNet doesn’t exist” error - If you run the script to deploy multiple clusters in Azure concurrently under one subscription, one or more deployments may fail with the error “VNet VNet_Name doesn't exist”. If this error occurs, run the script again for the failed deployment. Problem accessing the Internet from the Azure virtual network - If you create a cluster with a new domain controller by using the deployment script, or you manually promote a head node VM to domain controller, you may experience problems connecting the VMs to the Internet. This problem can occur if a forwarder DNS server is automatically configured on the domain controller, and this forwarder DNS server doesn’t resolve properly. To work around this problem, log on to the domain controller and either remove the forwarder configuration setting or configure a valid forwarder DNS server. To configure this setting, in Server Manager click Tools > DNS to open DNS Manager, and then double-click Forwarders. Problem accessing RDMA network from compute-intensive VMs - If you add Windows Server compute or broker node VMs using an RDMA-capable size such as A8 or A9, you may experience problems connecting those VMs to the RDMA application network. One reason this problem occurs is if the HpcVmDrivers extension is not properly installed when the VMs are added to the cluster. For example, the extension might be stuck in the installing state. To work around this problem, first check the state of the extension in the VMs. 
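One way to perform that check is with the classic Azure PowerShell cmdlets from a management computer. The following is a minimal sketch only; the cloud service and VM names are placeholders, and the exact handler name reported for the extension can vary, so adjust the filter to what your deployment shows.

# Inspect the extension status reported for a compute node VM (classic model).
# "MyHPCCNService" and "MyHPCCN-0001" are placeholder names.
$vm = Get-AzureVM -ServiceName "MyHPCCNService" -Name "MyHPCCN-0001"
$vm.ResourceExtensionStatusList |
    Where-Object { $_.HandlerName -like "*HpcVmDrivers*" }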
If the extension is not properly installed, try removing the nodes from the HPC cluster and then add the nodes again. For example, you can add compute node VMs by running the Add-HpcIaaSNode.ps1 script on the head node. Next steps Try running a test workload on the cluster. For an example, see the HPC Pack getting started guide. For a tutorial to script the cluster deployment and run an HPC workload, see Get started with an HPC Pack cluster in Azure to run Excel and SOA workloads. Try HPC Pack's tools to start, stop, add, and remove compute nodes from a cluster you create. See Manage compute nodes in an HPC Pack cluster in Azure. To get set up to submit jobs to the cluster from a local computer, see Submit HPC jobs from an on-premises computer to an HPC Pack cluster in Azure. Set up a Windows RDMA cluster with HPC Pack to run MPI applications 4/3/2017 • 11 min to read • Edit Online Set up a Windows RDMA cluster in Azure with Microsoft HPC Pack and H-series or compute-intensive A-series instances to run parallel Message Passing Interface (MPI) applications. When you set up RDMA-capable, Windows Server-based nodes in an HPC Pack cluster, MPI applications communicate efficiently over a low-latency, high-throughput network in Azure that is based on remote direct memory access (RDMA) technology. If you want to run MPI workloads on Linux VMs that access the Azure RDMA network, see Set up a Linux RDMA cluster to run MPI applications. HPC Pack cluster deployment options Microsoft HPC Pack is a tool provided at no additional cost to create HPC clusters on-premises or in Azure to run Windows or Linux HPC applications. HPC Pack includes a runtime environment for the Microsoft implementation of the Message Passing Interface for Windows (MS-MPI). When used with RDMA-capable instances running a supported Windows Server operating system, HPC Pack provides an efficient option to run Windows MPI applications that access the Azure RDMA network. This article introduces two scenarios and links to detailed guidance to set up a Windows RDMA cluster with Microsoft HPC Pack. Scenario 1. Deploy compute-intensive worker role instances (PaaS) Scenario 2. Deploy compute nodes in compute-intensive VMs (IaaS) For general prerequisites to use compute-intensive instances with Windows, see About H-series and compute-intensive A-series VMs. Scenario 1: Deploy compute-intensive worker role instances (PaaS) From an existing HPC Pack cluster, add extra compute resources in Azure worker role instances (Azure nodes) running in a cloud service (PaaS). This feature, also called "burst to Azure" from HPC Pack, supports a range of sizes for the worker role instances. When adding the Azure nodes, specify one of the RDMA-capable sizes. Following are considerations and steps to burst to RDMA-capable Azure instances from an existing (typically on-premises) cluster. Use similar procedures to add worker role instances to an HPC Pack head node that is deployed in an Azure VM. NOTE For a tutorial to burst to Azure with HPC Pack, see Set up a hybrid cluster with HPC Pack. Note the considerations in the following steps that apply specifically to RDMA-capable Azure nodes. Steps 1. Deploy and configure an HPC Pack 2012 R2 head node Download the latest HPC Pack installation package from the Microsoft Download Center. For requirements and instructions to prepare for an Azure burst deployment, see Burst to Azure Worker Instances with Microsoft HPC Pack. 2.
Configure a management certificate in the Azure subscription Configure a certificate to secure the connection between the head node and Azure. For options and procedures, see Scenarios to Configure the Azure Management Certificate for HPC Pack. For test deployments, HPC Pack installs a Default Microsoft HPC Azure Management Certificate you can quickly upload to your Azure subscription. 3. Create a new cloud service and a storage account Use the Azure classic portal to create a cloud service and a storage account for the deployment in a region where the RDMA-capable instances are available. 4. Create an Azure node template Use the Create Node Template Wizard in HPC Cluster Manager. For steps, see Create an Azure node template in “Steps to Deploy Azure Nodes with Microsoft HPC Pack”. For initial tests, we suggest configuring a manual availability policy in the template. 5. Add nodes to the cluster Use the Add Node Wizard in HPC Cluster Manager. For more information, see Add Azure Nodes to the Windows HPC Cluster. When specifying the size of the nodes, select one of the RDMA-capable instance sizes. NOTE In each burst to Azure deployment with the compute-intensive instances, HPC Pack automatically deploys a minimum of two RDMA-capable instances (such as A8) as proxy nodes, in addition to the Azure worker role instances you specify. The proxy nodes use cores that are allocated to the subscription and incur charges along with the Azure worker role instances. 6. Start (provision) the nodes and bring them online to run jobs Select the nodes and use the Start action in HPC Cluster Manager. When provisioning is complete, select the nodes and use the Bring Online action in HPC Cluster Manager. The nodes are ready to run jobs. 7. Submit jobs to the cluster Use HPC Pack job submission tools to run cluster jobs. See Microsoft HPC Pack: Job Management. 8. Stop (deprovision) the nodes When you are done running jobs, take the nodes offline and use the Stop action in HPC Cluster Manager. Scenario 2: Deploy compute nodes in compute-intensive VMs (IaaS) In this scenario, you deploy the HPC Pack head node and cluster compute nodes on VMs in an Azure virtual network. HPC Pack provides several deployment options in Azure VMs, including automated deployment scripts and Azure quickstart templates. As an example, the following considerations and steps guide you to use the HPC Pack IaaS deployment script to automate the deployment of an HPC Pack 2012 R2 cluster in Azure. Steps 1. Create a cluster head node and compute node VMs by running the HPC Pack IaaS deployment script on a client computer Download the HPC Pack IaaS Deployment Script package from the Microsoft Download Center. To prepare the client computer, create the script configuration file, and run the script, see Create an HPC Cluster with the HPC Pack IaaS deployment script. To deploy RDMA-capable compute nodes, note the following additional considerations: Virtual network: Specify a new virtual network in a region in which the RDMA-capable instance size you want to use is available. Windows Server operating system: To support RDMA connectivity, specify a Windows Server 2012 R2 or Windows Server 2012 operating system for the compute node VMs. Cloud services: We recommend deploying your head node in one cloud service and your compute nodes in a different cloud service. Head node size: For this scenario, consider a size of at least A4 (Extra Large) for the head node. 
HpcVmDrivers extension: The deployment script installs the Azure VM Agent and the HpcVmDrivers extension automatically when you deploy size A8 or A9 compute nodes with a Windows Server operating system. HpcVmDrivers installs drivers on the compute node VMs so they can connect to the RDMA network. On RDMA-capable H-series VMs, you must manually install the HpcVmDrivers extension. See About H-series and compute-intensive A-series VMs. Cluster network configuration: The deployment script automatically sets up the HPC Pack cluster in Topology 5 (all nodes on the Enterprise network). This topology is required for all HPC Pack cluster deployments in VMs. Do not change the cluster network topology later. 2. Bring the compute nodes online to run jobs Select the nodes and use the Bring Online action in HPC Cluster Manager. The nodes are ready to run jobs. 3. Submit jobs to the cluster Connect to the head node to submit jobs, or set up an on-premises computer to do this. For information, see Submit Jobs to an HPC cluster in Azure. 4. Take the nodes offline and stop (deallocate) them When you are done running jobs, take the nodes offline in HPC Cluster Manager. Then, use Azure management tools to shut them down. Run MPI applications on the cluster Example: Run mpipingpong on an HPC Pack cluster To verify an HPC Pack deployment of the RDMA-capable instances, run the HPC Pack mpipingpong command on the cluster. mpipingpong sends packets of data between paired nodes repeatedly to calculate latency and throughput measurements and statistics for the RDMA-enabled application network. This example shows a typical pattern for running an MPI job (in this case, mpipingpong) by using the cluster mpiexec command. This example assumes you added Azure nodes in a "burst to Azure" configuration (Scenario 1). If you deployed HPC Pack on a cluster of Azure VMs, you'll need to modify the command syntax to specify a different node group and set additional environment variables to direct network traffic to the RDMA network. To run mpipingpong on the cluster: 1. On the head node or on a properly configured client computer, open a Command Prompt. 2. To estimate latency between pairs of nodes in an Azure burst deployment of four nodes, type the following command to submit a job to run mpipingpong with a small packet size and many iterations: job submit /nodegroup:azurenodes /numnodes:4 mpiexec -c 1 -affinity mpipingpong -p 1:100000 -op -s nul The command returns the ID of the job that is submitted. If you deployed the HPC Pack cluster on Azure VMs, specify a node group that contains compute node VMs deployed in a single cloud service, and modify the mpiexec command as follows: job submit /nodegroup:vmcomputenodes /numnodes:4 mpiexec -c 1 -affinity -env MSMPI_DISABLE_SOCK 1 -env MSMPI_PRECONNECT all -env MPICH_NETMASK 172.16.0.0/255.255.0.0 mpipingpong -p 1:100000 -op -s nul 3. When the job completes, to view the output (in this case, the output of task 1 of the job), type the following: task view <JobID>.1, where <JobID> is the ID of the job that was submitted. The output includes latency results similar to the following. 4. To estimate throughput between pairs of Azure burst nodes, type the following command to submit a job to run mpipingpong with a large packet size and a few iterations: job submit /nodegroup:azurenodes /numnodes:4 mpiexec -c 1 -affinity mpipingpong -p 4000000:1000 -op -s nul The command returns the ID of the job that is submitted.
On an HPC Pack cluster deployed on Azure VMs, modify the command as noted in step 2. 5. When the job completes, to view the output (in this case, the output of task 1 of the job), type the following: task view <JobID>.1 The output includes throughput results similar to the following. MPI application considerations Following are considerations for running MPI applications with HPC Pack in Azure. Some apply only to deployments of Azure nodes (worker role instances added in a “burst to Azure” configuration). Worker role instances in a cloud service are periodically reprovisioned without notice by Azure (for example, for system maintenance, or in case an instance fails). If an instance is reprovisioned while it is running an MPI job, the instance loses its data and returns to the state when it was first deployed, which can cause the MPI job to fail. The more nodes that you use for a single MPI job, and the longer the job runs, the more likely that one of the instances is reprovisioned while a job is running. Also consider this if you designate a single node in the deployment as a file server. To run MPI jobs in Azure, you don't have to use the RDMA-capable instances. You can use any instance size that is supported by HPC Pack. However, the RDMA-capable instances are recommended for running relatively largescale MPI jobs that are sensitive to the latency and the bandwidth of the network that connects the nodes. If you use other sizes to run latency- and bandwidth-sensitive MPI jobs, we recommend running small jobs, in which a single task runs on only a few nodes. Applications deployed to Azure instances are subject to the licensing terms associated with the application. Check with the vendor of any commercial application for licensing or other restrictions for running in the cloud. Not all vendors offer pay-as-you-go licensing. Azure instances need further setup to access on-premises nodes, shares, and license servers. For example, to enable the Azure nodes to access an on-premises license server, you can configure a site-to-site Azure virtual network. To run MPI applications on Azure instances, register each MPI application with Windows Firewall on the instances by running the hpcfwutil command. This allows MPI communications to take place on a port that is assigned dynamically by the firewall. NOTE For burst to Azure deployments, you can also configure a firewall exception command to run automatically on all new Azure nodes that are added to your cluster. After you run the hpcfwutil command and verify that your application works, add the command to a startup script for your Azure nodes. For more information, see Use a Startup Script for Azure Nodes. HPC Pack uses the CCP_MPI_NETMASK cluster environment variable to specify a range of acceptable addresses for MPI communication. Starting in HPC Pack 2012 R2, the CCP_MPI_NETMASK cluster environment variable only affects MPI communication between domain-joined cluster compute nodes (either on-premises or in Azure VMs). The variable is ignored by nodes added in a burst to Azure configuration. MPI jobs can't run across Azure instances that are deployed in different cloud services (for example, in burst to Azure deployments with different node templates, or Azure VM compute nodes deployed in multiple cloud services). If you have multiple Azure node deployments that are started with different node templates, the MPI job must run on only one set of Azure nodes. 
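As a concrete illustration of the firewall registration described in the considerations above, the following sketch runs hpcfwutil on every node in the AzureNodes group by using clusrun from the head node. The application name and path are placeholders, and you should confirm the hpcfwutil syntax in the HPC Pack documentation for your cluster version before relying on it.

# Register a hypothetical MPI executable with Windows Firewall on all Azure
# burst nodes (run from an elevated prompt on the head node).
clusrun /nodegroup:AzureNodes hpcfwutil register MyMpiApp.exe C:\apps\MyMpiApp.exe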
When you add Azure nodes to your cluster and bring them online, the HPC Job Scheduler Service immediately tries to start jobs on the nodes. If only a portion of your workload can run on Azure, ensure that you update or create job templates to define what job types can run on Azure. For example, to ensure that jobs submitted with a job template only run on Azure nodes, add the Node Groups property to the job template and select AzureNodes as the required value. To create custom groups for your Azure nodes, use the Add-HpcGroup HPC PowerShell cmdlet. Next steps As an alternative to using HPC Pack, develop with the Azure Batch service to run MPI applications on managed pools of compute nodes in Azure. See Use multi-instance tasks to run Message Passing Interface (MPI) applications in Azure Batch. If you want to run Linux MPI applications that access the Azure RDMA network, see Set up a Linux RDMA cluster to run MPI applications. Install MongoDB on a Windows VM in Azure 3/27/2017 • 8 min to read • Edit Online IMPORTANT Azure has two different deployment models for creating and working with resources: Resource Manager and classic. This article covers using the classic deployment model. Microsoft recommends that most new deployments use the Resource Manager model. To install and configure MongoDB using the Resource Manager deployment model, see this article. MongoDB is a popular open-source, high-performance NoSQL database. This article guides you through creating a Windows Server virtual machine (VM) using the Azure portal. You then create and attach a data disk to the VM before installing and configuring MongoDB. If you have an existing VM in Azure that you would like to use, you can jump straight to installing and configuring MongoDB. Create a virtual machine running Windows Server Follow these instructions to create a virtual machine. 1. Sign in to the Azure portal. 2. Starting in the upper left, click New > Compute > Windows Server 2016 Datacenter. 3. On the Windows Server 2016 Datacenter, select the Classic deployment model. Click Create. 1. Basics blade The Basics blade requests administrative information for the virtual machine. 1. Enter a Name for the virtual machine. In the example, HeroVM is the name of the virtual machine. The name must be 1-15 characters long and it cannot contain special characters. 2. Enter a User name and a strong Password that are used to create a local account on the VM. The local account is used to sign in to and manage the VM. In the example, azureuser is the user name. The password must be 8-123 characters long and meet three out of the four following complexity requirements: one lower case character, one upper case character, one number, and one special character. See more about username and password requirements. 3. The Subscription is optional. One common setting is "Pay-As-You-Go". 4. Select an existing Resource group or type the name for a new one. In the example, HeroVMRG is the name of the resource group. 5. Select an Azure datacenter Location where you want the VM to run. In the example, East US is the location. 6. When you are done, click Next to continue to the next blade. 2. Size blade The Size blade identifies the configuration details of the VM, and lists various choices that include OS, number of processors, disk storage type, and estimated monthly usage costs. Choose a VM size, and then click Select to continue.
In this example, DS1_V2 Standard is the VM size. 3. Settings blade The Settings blade requests storage and network options. You can accept the default settings. Azure creates appropriate entries where necessary. If you selected a virtual machine size that supports it, you can try Azure Premium Storage by selecting Premium (SSD) in Disk type. When you're done making changes, click OK. 4. Summary blade The Summary blade lists the settings specified in the previous blades. Click OK when you're ready to make the image. After the virtual machine is created, the portal lists the new virtual machine under All resources, and displays a tile of the virtual machine on the dashboard. The corresponding cloud service and storage account also are created and listed. Both the virtual machine and cloud service are started automatically and their status is listed as Running. NOTE You can add an endpoint for MongoDB while creating the virtual machine, and configure it as follows: name it as Mongo, use TCP as the protocol, and set both the public and private ports to 27017. Attach a data disk To provide storage for the virtual machine, attach a data disk and then initialize it so that Windows can use it. If you already have a data disk, you can attach that existing disk, or you can attach an empty disk. Attach an empty disk Attaching an empty disk is a simple way to add a data disk, because Azure creates the .vhd file for you and stores it in the storage account. 1. Click Virtual Machines (classic), and then select the appropriate VM. 2. In the Settings menu, click Disks. 3. On the command bar, click Attach new. The Attach new disk dialog box appears. Fill in the following information: In File Name, accept the default name or type another one for the .vhd file. The data disk uses an automatically generated name, even if you type another name for the .vhd file. Select the Type of the data disk. All virtual machines support standard disks. Many virtual machines also support premium disks. Select the Size (GB) of the data disk. For Host caching, choose none or Read Only. Click OK to finish. 4. After the data disk is created and attached, it's listed in the disks section of the VM. NOTE After you add a data disk, you need to log on to the VM and initialize the disk so that it can be used. How to: Attach an existing disk Attaching an existing disk requires that you have a .vhd available in a storage account. Use the Add-AzureVhd cmdlet to upload the .vhd file to the storage account. After you've created and uploaded the .vhd file, you can attach it to a VM. 1. Click Virtual Machines (classic), and then select the appropriate virtual machine. 2. In the Settings menu, click Disks. 3. On the command bar, click Attach existing. 4. Click Location. The available storage accounts display. Next, select an appropriate storage account from those listed. 5. A Storage account holds one or more containers that contain disk drives (vhds). Select the appropriate container from those listed. 6. The vhds panel lists the disk drives held in the container. Click one of the disks, and then click Select. 7. The Attach existing disk panel displays again, with the location containing the storage account, container, and selected hard disk (vhd) to add to the virtual machine. Set Host caching to none or Read only, then click OK. For instructions on initializing the disk, see "How to: Initialize a new data disk in Windows Server" in How to attach a data disk to a Windows virtual machine. 
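If you prefer to script the upload and attach steps instead of using the portal, a minimal classic Azure PowerShell sketch follows. The storage account URL, local file path, and cloud service name are placeholders rather than values from this tutorial; only the VM name HeroVM comes from the earlier example.

# Upload a local .vhd to a storage account (placeholder URL and local path).
Add-AzureVhd -Destination "https://mystorageaccount.blob.core.windows.net/vhds/mongodata.vhd" `
    -LocalFilePath "D:\vhds\mongodata.vhd"

# Attach the uploaded .vhd to the HeroVM virtual machine as a data disk.
# "HeroVMService" is a placeholder for the VM's cloud service name.
Get-AzureVM -ServiceName "HeroVMService" -Name "HeroVM" |
    Add-AzureDataDisk -ImportFrom -MediaLocation "https://mystorageaccount.blob.core.windows.net/vhds/mongodata.vhd" `
        -DiskLabel "MongoData" -LUN 0 -HostCaching None |
    Update-AzureVM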
Install and run MongoDB on the virtual machine Follow these steps to install and run MongoDB on a virtual machine running Windows Server. IMPORTANT MongoDB security features, such as authentication and IP address binding, are not enabled by default. Security features should be enabled before deploying MongoDB to a production environment. For more information, see Security and Authentication. 1. After you've connected to the virtual machine using Remote Desktop, open Internet Explorer from the Start menu on the virtual machine. 2. Select the Tools button in the upper right corner. In Internet Options, select the Security tab, and then select the Trusted Sites icon, and finally click the Sites button. Add https://\.mongodb.org* to the list of trusted sites. 3. Go to Downloads - MongoDB. 4. Find the Current Stable Release of Community Server, select the latest 64-bit version in the Windows column. Download, then run the MSI installer. 5. MongoDB is typically installed in C:\Program Files\MongoDB. Search for Environment Variables on the desktop and add the MongoDB binaries path to the PATH variable. For example, you might find the binaries at C:\Program Files\MongoDB\Server\3.4\bin on your machine. 6. Create MongoDB data and log directories in the data disk (such as drive F:) you created in the preceding steps. From Start, select Command Prompt to open a command prompt window. Type: C:\> F: F:\> mkdir \MongoData F:\> mkdir \MongoLogs 7. To run the database, run: F:\> C: C:\> mongod --dbpath F:\MongoData\ --logpath F:\MongoLogs\mongolog.log All log messages are directed to the F:\MongoLogs\mongolog.log file as mongod.exe server starts and preallocates journal files. It may take several minutes for MongoDB to preallocate the journal files and start listening for connections. The command prompt stays focused on this task while your MongoDB instance is running. 8. To start the MongoDB administrative shell, open another command window from Start and type the following commands: C:\> cd \my_mongo_dir\bin C:\my_mongo_dir\bin> mongo >db test > db.foo.insert( { a : 1 } ) > db.foo.find() { _id : ..., a : 1 } > show dbs ... > show collections ... > help The database is created by the insert. 9. Alternatively, you can install mongod.exe as a service: C:\> mongod --dbpath F:\MongoData\ --logpath F:\MongoLogs\mongolog.log --logappend --install A service is installed named MongoDB with a description of "Mongo DB". The --logpath option must be used to specify a log file, since the running service does not have a command window to display output. The --logappend option specifies that a restart of the service causes output to append to the existing log file. The --dbpath option specifies the location of the data directory. For more service-related command-line options, see Service-related command-line options. To start the service, run this command: C:\> net start MongoDB 10. Now that MongoDB is installed and running, you need to open a port in Windows Firewall so you can remotely connect to MongoDB. From the Start menu, select Administrative Tools and then Windows Firewall with Advanced Security. 11. a) In the left pane, select Inbound Rules. In the Actions pane on the right, select New Rule.... b) In the New Inbound Rule Wizard, select Port and then click Next. c) Select TCP and then Specific local ports. Specify a port of "27017" (the default port MongoDB listens on) and click Next. d) Select Allow the connection and click Next. e) Click Next again. 
f) Specify a name for the rule, such as "MongoPort", and click Finish. 12. If you didn't configure an endpoint for MongoDB when you created the virtual machine, you can do it now. You need both the firewall rule and the endpoint to be able to connect to MongoDB remotely. In the Azure portal, click Virtual Machines (classic), click the name of your new virtual machine, and then click Endpoints. 13. Click Add. 14. Add an endpoint with name "Mongo", protocol TCP, and both Public and Private ports set to "27017". Opening this port allows MongoDB to be accessed remotely. NOTE The port 27017 is the default port used by MongoDB. You can change this default port by specifying the --port parameter when starting the mongod.exe server. Make sure to give the same port number in the firewall and the "Mongo" endpoint in the preceding instructions. Summary In this tutorial, you learned how to create a virtual machine running Windows Server, remotely connect to it, and attach a data disk. You also learned how to install and configure MongoDB on the Windows-based virtual machine. You can now access MongoDB on the Windows-based virtual machine by following the advanced topics in the MongoDB documentation. Install MySQL on a virtual machine created with the classic deployment model running Windows Server 2016 4/18/2017 • 5 min to read • Edit Online MySQL is a popular, open-source SQL database. This tutorial shows you how to install and run the community version of MySQL 5.7.18 as a MySQL Server on a virtual machine running Windows Server 2016. Your experience might be slightly different for other versions of MySQL or Windows Server. For instructions on installing MySQL on Linux, refer to: How to install MySQL on Azure. IMPORTANT Azure has two different deployment models for creating and working with resources: Resource Manager and Classic. This article covers using the Classic deployment model. Microsoft recommends that most new deployments use the Resource Manager model. Create a virtual machine running Windows Server 2016 If you don't already have a VM running Windows Server 2016, you can use this tutorial to create the virtual machine. Attach a data disk After the virtual machine is created, you can optionally attach a data disk. Adding a data disk is recommended for production workloads and to avoid running out of space on the OS drive (C:), which includes the operating system. See How to attach a data disk to a Windows virtual machine and follow the instructions for attaching an empty disk. Set the host cache setting to None or Read-only. Log on to the virtual machine Next, you'll log on to the virtual machine so you can install MySQL. Install and run MySQL Community Server on the virtual machine Follow these steps to install, configure, and run the Community version of MySQL Server: NOTE When downloading items using Internet Explorer, you can set the IE Enhanced Security Configuration to Off to simplify the downloading process. From the Start menu, click Administrative Tools/Server Manager/Local Server, then click IE Enhanced Security Configuration and set the configuration to Off. 1. After you've connected to the virtual machine using Remote Desktop, click Internet Explorer from the start screen. 2. Select the Tools button in the upper-right corner (the cogged wheel icon), and then click Internet Options. Click the Security tab, click the Trusted Sites icon, and then click the Sites button. Add http://*.mysql.com to the list of trusted sites.
3. Click Close, and then click OK. 4. In the address bar of Internet Explorer, type https://dev.mysql.com/downloads/mysql/. 5. Use the MySQL site to locate and download the latest version of the MySQL Installer for Windows. When choosing the MySQL Installer, download the version that has the complete file set (for example, the mysql-installer-community-5.7.18.0.msi with a file size of 352.8 MB), and save the installer. 6. When the installer has finished downloading, click Run to launch setup. 7. On the License Agreement page, accept the license agreement and click Next. 8. On the Choosing a Setup Type page, click the setup type that you want, and then click Next. The following steps assume the selection of the Server only setup type. 9. If the Check Requirements page displays, click Execute to let the installer install any missing components. Follow any instructions that display, such as the C++ Redistributable runtime. On the Installation page, click Execute. When installation is complete, click Next. 10. On the Product Configuration page, click Next. 11. On the Type and Networking page, specify your desired configuration type and connectivity options, including the TCP port if needed. Select Show Advanced Options, and then click Next. 12. On the Accounts and Roles page, specify a strong MySQL root password. Add additional MySQL user accounts as needed, and then click Next. 13. On the Windows Service page, specify changes to the default settings for running the MySQL Server as a Windows service as needed, and then click Next. 14. The choices on the Plugins and Extensions page are optional. Click Next to continue. 15. On the Advanced Options page, specify changes to logging options as needed, and then click Next. 16. On the Apply Server Configuration page, click Execute. When the configuration steps are complete, click Finish. 17. On the Product Configuration page, click Next. 18. On the Installation Complete page, click Copy Log to Clipboard if you want to examine it later, and then click Finish. 19. From the start screen, type mysql, and then click MySQL 5.7 Command-Line Client. 20. Enter the root password that you specified in step 12 and you are presented with a prompt where you can issue commands to configure MySQL. For the details of commands and syntax, see the MySQL Reference Manuals. 21. You can also configure server configuration default settings, such as the base and data directories and drives. For more information, see 6.1.2 Server Configuration Defaults. Configure endpoints For the MySQL service to be available to client computers on the Internet, you must configure an endpoint for the TCP port and create a Windows Firewall rule. The default port value on which the MySQL Server service listens for MySQL clients is 3306. You can specify another port, as long as the port is consistent with the value supplied on the Type and Networking page (step 11 of the previous procedure). NOTE For production use, consider the security implications of making the MySQL Server service available to all computers on the Internet. You can define the set of source IP addresses that are allowed to use the endpoint with an Access Control List (ACL). For more information, see How to Set Up Endpoints to a Virtual Machine. To configure an endpoint for the MySQL Server service: 1. In the Azure portal, click Virtual Machines (classic), click the name of your MySQL virtual machine, and then click Endpoints. 2. In the command bar, click Add. 3. On the Add endpoint page, type a unique name for Name.
4. Select TCP as the protocol. 5. Type the port number, such as 3306, in both Public Port and Private Port, and then click OK. Add a Windows Firewall rule to allow MySQL traffic To add a Windows Firewall rule that allows MySQL traffic from the Internet, run the following command at an elevated Windows PowerShell command prompt on the MySQL server virtual machine. New-NetFirewallRule -DisplayName "MySQL57" -Direction Inbound -Protocol TCP -LocalPort 3306 -Action Allow -Profile Public Test your remote connection To test your remote connection to the Azure VM running the MySQL Server service, you must provide the DNS name of the cloud service containing the VM. 1. In the Azure portal, click Virtual Machines (classic), click the name of your MySQL server virtual machine, and then click Overview. 2. From the virtual machine dashboard, note the DNS Name value. Here is an example: 3. From a local computer running MySQL or the MySQL client, run the following command to log in as a MySQL user: mysql -u <user name> -p -h <DNS name> For example, using the MySQL user name dbadmin3 and the testmysql.cloudapp.net DNS name for the virtual machine, you could start MySQL using the following command: mysql -u dbadmin3 -p -h testmysql.cloudapp.net Next steps To learn more about running MySQL, see the MySQL Documentation. Configuring Oracle Data Guard for Azure 3/27/2017 • 18 min to read • Edit Online This tutorial demonstrates how to set up and implement Oracle Data Guard in an Azure Virtual Machines environment for high availability and disaster recovery. The tutorial focuses on one-way replication for non-RAC Oracle databases. Oracle Data Guard supports data protection and disaster recovery for Oracle Database. It is a simple, high-performance, drop-in solution for disaster recovery, data protection, and high availability for the entire Oracle database. This tutorial assumes that you already have theoretical and practical knowledge of Oracle Database High Availability and Disaster Recovery concepts. For information, see the Oracle web site and also the Oracle Data Guard Concepts and Administration Guide. In addition, the tutorial assumes that you have already implemented the following prerequisites: You've already reviewed the High Availability and Disaster Recovery Considerations section in the Oracle Virtual Machine images - Miscellaneous Considerations topic. Azure supports standalone Oracle Database instances but not Oracle Real Application Clusters (Oracle RAC) currently. You have created two Virtual Machines (VMs) in Azure using the same platform-provided Oracle Enterprise Edition image. Make sure the Virtual Machines are in the same cloud service and in the same Virtual Network to ensure they can access each other over the persistent private IP address. Additionally, it is recommended to place the VMs in the same availability set to allow Azure to place them into separate fault domains and upgrade domains. Oracle Data Guard is only available with Oracle Database Enterprise Edition. Each machine must have at least 2 GB of memory and 5 GB of disk space. For the most up-to-date information on the platform-provided VM sizes, see Virtual Machine Sizes for Azure. If you need additional disk volume for your VMs, you can attach additional disks. For information, see How to Attach a Data Disk to a Virtual Machine. You've set the Virtual Machine names as "Machine1" for the primary VM and "Machine2" for the standby VM in the Azure classic portal.
You've set the ORACLE_HOME environment variable to point to the same oracle root installation path in the primary and standby Virtual Machines, such as C:\OracleDatabase\product\11.2.0\dbhome_1\database. You log on to your Windows server as a member of the Administrators group or a member of the ORA_DBA group.
In this tutorial, you will:
Implement the physical standby database environment
1. Create a primary database
2. Prepare the primary database for standby database creation
   a. Enable forced logging
   b. Create a password file
   c. Configure a standby redo log
   d. Enable Archiving
   e. Set primary database initialization parameters
Create a physical standby database
1. Prepare an initialization parameter file for standby database
2. Configure the listener and tnsnames to support the database on primary and standby machines
   a. Configure listener.ora on both servers to hold entries for both databases
   b. To hold entries for both primary and standby databases, configure tnsnames.ora on the primary and standby Virtual Machines
   c. Start the listener and check tnsping on both Virtual Machines to both services
3. Start up the standby instance in nomount state
4. Use RMAN to clone the database and to create a standby database
5. Start the physical standby database in managed recovery mode
6. Verify the physical standby database
IMPORTANT This tutorial has been set up and tested against the following hardware and software configuration:
                    PRIMARY DATABASE                            STANDBY DATABASE
Oracle Release      Oracle11g Enterprise Release (11.2.0.4.0)   Oracle11g Enterprise Release (11.2.0.4.0)
Machine Name        Machine1                                    Machine2
Operating System    Windows 2008 R2                             Windows 2008 R2
Oracle SID          TEST                                        TEST_STBY
Memory              Min 2 GB                                    Min 2 GB
Disk Space          Min 5 GB                                    Min 5 GB
For subsequent releases of Oracle Database and Oracle Data Guard, there might be some additional changes that you need to implement. For the most up-to-date version-specific information, see Data Guard and Oracle Database documentation at the Oracle web site. Implement the physical standby database environment 1. Create a primary database Create a primary database "TEST" in the primary Virtual Machine. For information, see Creating and Configuring an Oracle Database. To see the name of your database, connect to your database as the SYS user with SYSDBA role in the SQL*Plus command prompt and run the following statement: SQL> select name from v$database; The result will display like the following: NAME --------- TEST Next, query the names of the database files from the dba_data_files system view: SQL> select file_name from dba_data_files; FILE_NAME ------------------------------------------------------------------------------ C:\<YourLocalFolder>\TEST\USERS01.DBF C:\<YourLocalFolder>\TEST\UNDOTBS01.DBF C:\<YourLocalFolder>\TEST\SYSAUX01.DBF C:\<YourLocalFolder>\TEST\SYSTEM01.DBF C:\<YourLocalFolder>\TEST\EXAMPLE01.DBF 2. Prepare the primary database for standby database creation Before creating a standby database, it's recommended that you ensure the primary database is configured properly. The following is a list of steps that you need to perform:
1. Enable forced logging
2. Create a password file
3. Configure a standby redo log
4. Enable Archiving
5. Set primary database initialization parameters
Enable forced logging To implement a Standby Database, we need to enable 'Forced Logging' in the primary database. This option ensures that even if a 'nologging' operation is done, force logging takes precedence and all operations are logged into the redo logs.
Therefore, we make sure that everything in the primary database is logged and replication to the standby includes all operations in the primary database.
To enable force logging, run the following ALTER DATABASE statement:
SQL> ALTER DATABASE FORCE LOGGING;
Database altered.
Create a password file
To be able to ship and apply archived logs from the primary server to the standby server, the SYS password must be identical on both primary and standby servers. That’s why you create a password file on the primary database and copy it to the standby server.
IMPORTANT
When using Oracle Database 12c, there is a new user, SYSDG, which you can use to administer Oracle Data Guard. For more information, see Changes in Oracle Database 12c Release.
In addition, make sure that the ORACLE_HOME environment variable is already defined in Machine1. If not, define it as an environment variable using the Environment Variables dialog box. To access this dialog box, start the System utility by double-clicking the System icon in the Control Panel; then click the Advanced tab and choose Environment Variables. To set the environment variables, click the New button under System Variables.
After setting up the environment variables, close the existing Windows command prompt and open up a new one. Run the following statement to switch to the ORACLE_HOME database directory, such as C:\OracleDatabase\product\11.2.0\dbhome_1\database:
cd %ORACLE_HOME%\database
Then, create a password file using the password file creation utility, ORAPWD. In the same Windows command prompt in Machine1, run the following command, setting the password value to the password of SYS:
ORAPWD FILE=PWDTEST.ora PASSWORD=password FORCE=y
This command creates a password file named PWDTEST.ora in the ORACLE_HOME\database directory. You should copy this file to the %ORACLE_HOME%\database directory in Machine2 manually.
Configure a standby redo log
Then, you need to configure a standby redo log so that the primary can correctly receive the redo when it becomes a standby. Pre-creating them here also allows the standby redo logs to be automatically created on the standby. It is important to configure the standby redo logs (SRL) with the same size as the online redo logs. The size of the current standby redo log files must exactly match the size of the current primary database online redo log files.
Run the following statement in the SQL*Plus command prompt in Machine1. The v$logfile system view contains information about redo log files.
SQL> select * from v$logfile;
GROUP# STATUS  TYPE    MEMBER                                   IS_
------ ------- ------- ---------------------------------------- ---
     3         ONLINE  C:\<YourLocalFolder>\TEST\REDO03.LOG     NO
     2         ONLINE  C:\<YourLocalFolder>\TEST\REDO02.LOG     NO
     1         ONLINE  C:\<YourLocalFolder>\TEST\REDO01.LOG     NO
Next, query the v$log system view, which displays log file information from the control file:
SQL> select bytes from v$log;
BYTES
----------
52428800
52428800
52428800
Note that 52428800 is 50 megabytes.
Then, in the SQL*Plus window, run the following statements to add new standby redo log file groups and specify a number that identifies each group using the GROUP clause. Using group numbers can make administering standby redo log file groups easier:
SQL> ALTER DATABASE ADD STANDBY LOGFILE GROUP 4 'C:\<YourLocalFolder>\TEST\REDO04.LOG' SIZE 50M;
Database altered.
SQL> ALTER DATABASE ADD STANDBY LOGFILE GROUP 5 'C:\<YourLocalFolder>\TEST\REDO05.LOG' SIZE 50M;
Database altered.
SQL> ALTER DATABASE ADD STANDBY LOGFILE GROUP 6 'C:\<YourLocalFolder>\TEST\REDO06.LOG' SIZE 50M;
Database altered.
Next, query the v$logfile system view again to list information about the redo log files. This also verifies that the standby redo log file groups were created:
SQL> select * from v$logfile;
GROUP# STATUS  TYPE     MEMBER                                   IS_
------ ------- -------- ---------------------------------------- ---
     3         ONLINE   C:\<YourLocalFolder>\TEST\REDO03.LOG     NO
     2         ONLINE   C:\<YourLocalFolder>\TEST\REDO02.LOG     NO
     1         ONLINE   C:\<YourLocalFolder>\TEST\REDO01.LOG     NO
     4         STANDBY  C:\<YourLocalFolder>\TEST\REDO04.LOG     NO
     5         STANDBY  C:\<YourLocalFolder>\TEST\REDO05.LOG     NO
     6         STANDBY  C:\<YourLocalFolder>\TEST\REDO06.LOG     NO
6 rows selected.
Enable Archiving
Then, enable archiving by running the following statements to put the primary database in ARCHIVELOG mode and enable automatic archiving. You can enable archive log mode by mounting the database and then executing the archivelog command.
First, log in as SYSDBA. In the Windows command prompt, run:
sqlplus /nolog
connect / as sysdba
Then, shut down the database in the SQL*Plus command prompt:
SQL> shutdown immediate;
Database closed.
Database dismounted.
ORACLE instance shut down.
Then, execute the startup mount command to mount the database. This ensures that Oracle associates the instance with the specified database.
SQL> startup mount;
ORACLE instance started.
Total System Global Area 1503199232 bytes
Fixed Size                  2281416 bytes
Variable Size             922746936 bytes
Database Buffers          570425344 bytes
Redo Buffers                7745536 bytes
Database mounted.
Then, run:
SQL> alter database archivelog;
Database altered.
Then, run the ALTER DATABASE statement with the OPEN clause to make the database available for normal use:
SQL> alter database open;
Database altered.
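As an optional sanity check (not part of the original steps; the output values are illustrative), you can confirm from the same SQL*Plus session that the primary database is now in ARCHIVELOG mode and that force logging is still enabled before you configure the standby parameters. Both columns come from the standard v$database view:
-- Optional check: confirm archiving and force logging on the primary database.
SQL> select log_mode, force_logging from v$database;

LOG_MODE     FORCE_LOGGING
------------ -------------
ARCHIVELOG   YES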
Set primary database initialization parameters
To configure Data Guard, you first create and configure the standby parameters on a regular pfile (text initialization parameter file). When the pfile is ready, you convert it to a server parameter file (SPFILE). You can control the Data Guard environment using the parameters in the INIT.ORA file. When following this tutorial, you need to update the primary database INIT.ORA so that it can hold both roles: primary or standby.
SQL> create pfile from spfile;
File created.
Next, you need to edit the pfile to add the standby parameters. To do this, open the INITTEST.ORA file in the %ORACLE_HOME%\database folder. The naming convention for your INIT.ORA file is INIT<YourDatabaseName>.ORA. Append the following statements to the INITTEST.ora file:
db_name='TEST'
db_unique_name='TEST'
LOG_ARCHIVE_CONFIG='DG_CONFIG=(TEST,TEST_STBY)'
LOG_ARCHIVE_DEST_1='LOCATION=C:\OracleDatabase\archive VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=TEST'
LOG_ARCHIVE_DEST_2='SERVICE=TEST_STBY LGWR ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=TEST_STBY'
LOG_ARCHIVE_DEST_STATE_1=ENABLE
LOG_ARCHIVE_DEST_STATE_2=ENABLE
REMOTE_LOGIN_PASSWORDFILE=EXCLUSIVE
LOG_ARCHIVE_FORMAT=%t_%s_%r.arc
LOG_ARCHIVE_MAX_PROCESSES=30
# Standby role parameters --------------------------------------------------------------------
fal_server=TEST_STBY
fal_client=TEST
standby_file_management=auto
db_file_name_convert='TEST_STBY','TEST'
log_file_name_convert='TEST_STBY','TEST'
# ---------------------------------------------------------------------------------------------
The previous statement block includes three important setup items:
LOG_ARCHIVE_CONFIG: You define the unique database IDs using this statement.
LOG_ARCHIVE_DEST_1: You define the local archive folder location using this statement. We recommend that you create a new directory for your database’s archiving needs and specify the local archive location explicitly rather than using Oracle’s default folder %ORACLE_HOME%\database\archive.
LOG_ARCHIVE_DEST_2 ... LGWR ASYNC: You define an asynchronous log writer process (LGWR) to collect transaction redo data and transmit it to standby destinations. Here, DB_UNIQUE_NAME specifies a unique name for the database at the destination standby server.
Once the new parameter file is ready, you need to create the spfile from it. First, shut down the database:
SQL> shutdown immediate;
Database closed.
Database dismounted.
ORACLE instance shut down.
Next, run the startup nomount command as follows:
SQL> startup nomount pfile='c:\OracleDatabase\product\11.2.0\dbhome_1\database\initTEST.ora';
ORACLE instance started.
Total System Global Area 1503199232 bytes
Fixed Size                  2281416 bytes
Variable Size             922746936 bytes
Database Buffers          570425344 bytes
Redo Buffers                7745536 bytes
Now, create an spfile:
SQL> create spfile from pfile='c:\OracleDatabase\product\11.2.0\dbhome_1\database\initTEST.ora';
File created.
Then, shut down the database:
SQL> shutdown immediate;
ORA-01507: database not mounted
Then, use the startup command to start the instance:
SQL> startup;
ORACLE instance started.
Total System Global Area 1503199232 bytes
Fixed Size                  2281416 bytes
Variable Size             922746936 bytes
Database Buffers          570425344 bytes
Redo Buffers                7745536 bytes
Database mounted.
Database opened.
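Optionally, before you move to the standby server, you can verify that the instance came back up using the new SPFILE and that the Data Guard parameters are in effect. This check is not part of the original procedure; it only uses the standard SQL*Plus SHOW PARAMETER command, and the exact values reported depend on your own paths:
-- Optional check: confirm the instance is using an SPFILE and the Data Guard settings took effect.
SQL> show parameter spfile
SQL> show parameter log_archive_config
SQL> show parameter log_archive_dest_2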
Create a physical standby database
This section focuses on the steps that you must perform in Machine2 to prepare the physical standby database.
First, you need to remote desktop to Machine2 via the Azure classic portal.
Then, on the standby server (Machine2), create all the necessary folders for the standby database, such as C:\<YourLocalFolder>\TEST. While following this tutorial, make sure that the folder structure matches the folder structure on Machine1 to keep all the necessary files, such as the controlfile, datafiles, redo log files, udump, bdump, and cdump files.
In addition, make sure that the ORACLE_HOME and ORACLE_BASE environment variables are defined in Machine2. If not, define them using the Environment Variables dialog box. To access this dialog box, start the System utility by double-clicking the System icon in the Control Panel; then click the Advanced tab and choose Environment Variables. To set the environment variables, click the New button under System Variables. After setting up the environment variables, you need to close the existing Windows command prompt and open up a new one to see the changes.
Next, follow these steps:
1. Prepare an initialization parameter file for the standby database
2. Configure the listener and tnsnames to support the database on primary and standby machines
   a. Configure listener.ora on both servers to hold entries for both databases
   b. Configure tnsnames.ora on the primary and standby Virtual Machines to hold entries for both primary and standby databases
   c. Start the listener and check tnsping on both Virtual Machines to both services
3. Start up the standby instance in nomount state
4. Use RMAN to clone the database and to create a standby database
5. Start the physical standby database in managed recovery mode
6. Verify the physical standby database
1. Prepare an initialization parameter file for standby database
This section demonstrates how to prepare an initialization parameter file for the standby database. To do this, first copy the INITTEST.ORA file from Machine1 to Machine2 manually. You should be able to see the INITTEST.ORA file in the %ORACLE_HOME%\database folder on both machines.
Then, modify the INITTEST.ora file in Machine2 to set it up for the standby role as specified below:
db_name='TEST'
db_unique_name='TEST_STBY'
db_create_file_dest='c:\OracleDatabase\oradata\test_stby'
db_file_name_convert='TEST','TEST_STBY'
log_file_name_convert='TEST','TEST_STBY'
job_queue_processes=10
LOG_ARCHIVE_CONFIG='DG_CONFIG=(TEST,TEST_STBY)'
LOG_ARCHIVE_DEST_1='LOCATION=c:\OracleDatabase\TEST_STBY\archives VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=TEST'
LOG_ARCHIVE_DEST_2='SERVICE=TEST LGWR ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE)'
LOG_ARCHIVE_DEST_STATE_1='ENABLE'
LOG_ARCHIVE_DEST_STATE_2='ENABLE'
LOG_ARCHIVE_FORMAT='%t_%s_%r.arc'
LOG_ARCHIVE_MAX_PROCESSES=30
The previous statement block includes two important setup items:
*.LOG_ARCHIVE_DEST_1: You need to create the c:\OracleDatabase\TEST_STBY\archives folder in Machine2 manually.
*.LOG_ARCHIVE_DEST_2: This is an optional step. You set this because it might be needed when the primary machine is in maintenance and the standby machine becomes the primary database.
Then, you need to start the standby instance. On the standby database server, enter the following command at a Windows command prompt to create an Oracle instance by creating a Windows service:
oradim -NEW -SID TEST_STBY -STARTMODE MANUAL
The oradim command creates an Oracle instance but does not start it. You can find it in the C:\OracleDatabase\product\11.2.0\dbhome_1\BIN directory.
Configure the listener and tnsnames to support the database on primary and standby machines
Before you create a standby database, you need to make sure that the primary and standby databases in your configuration can talk to each other. To do this, you need to configure both the listener and tnsnames either manually or by using the network configuration utility NETCA. This is a mandatory task when you use the Recovery Manager utility (RMAN).
Configure listener.ora on both servers to hold entries for both databases
Remote desktop to Machine1 and edit the listener.ora file as specified below. When you edit the listener.ora file, always make sure that the opening and closing parentheses line up in the same column. You can find the listener.ora file in the following folder: c:\OracleDatabase\product\11.2.0\dbhome_1\NETWORK\ADMIN\.
# listener.ora Network Configuration File: C:\OracleDatabase\product\11.2.0\dbhome_1\network\admin\listener.ora
# Generated by Oracle configuration tools.
SID_LIST_LISTENER =
  (SID_LIST =
    (SID_DESC =
      (SID_NAME = test)
      (ORACLE_HOME = C:\OracleDatabase\product\11.2.0\dbhome_1)
      (PROGRAM = extproc)
      (ENVS = "EXTPROC_DLLS=ONLY:C:\OracleDatabase\product\11.2.0\dbhome_1\bin\oraclr11.dll")
    )
  )
LISTENER =
  (DESCRIPTION_LIST =
    (DESCRIPTION =
      (ADDRESS = (PROTOCOL = TCP)(HOST = MACHINE1)(PORT = 1521))
      (ADDRESS = (PROTOCOL = IPC)(KEY = EXTPROC1521))
    )
  )
Next, remote desktop to Machine2 and edit the listener.ora file as follows:
# listener.ora Network Configuration File: C:\OracleDatabase\product\11.2.0\dbhome_1\network\admin\listener.ora
# Generated by Oracle configuration tools.
SID_LIST_LISTENER =
  (SID_LIST =
    (SID_DESC =
      (SID_NAME = test_stby)
      (ORACLE_HOME = C:\OracleDatabase\product\11.2.0\dbhome_1)
      (PROGRAM = extproc)
      (ENVS = "EXTPROC_DLLS=ONLY:C:\OracleDatabase\product\11.2.0\dbhome_1\bin\oraclr11.dll")
    )
  )
LISTENER =
  (DESCRIPTION_LIST =
    (DESCRIPTION =
      (ADDRESS = (PROTOCOL = TCP)(HOST = MACHINE2)(PORT = 1521))
      (ADDRESS = (PROTOCOL = IPC)(KEY = EXTPROC1521))
    )
  )
Configure tnsnames.ora on the primary and standby Virtual Machines to hold entries for both primary and standby databases
Remote desktop to Machine1 and edit the tnsnames.ora file as specified below. You can find the tnsnames.ora file in the following folder: c:\OracleDatabase\product\11.2.0\dbhome_1\NETWORK\ADMIN\.
TEST =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = TCP)(HOST = MACHINE1)(PORT = 1521))
    )
    (CONNECT_DATA =
      (SERVICE_NAME = test)
    )
  )
TEST_STBY =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = TCP)(HOST = MACHINE2)(PORT = 1521))
    )
    (CONNECT_DATA =
      (SERVICE_NAME = test_stby)
    )
  )
Remote desktop to Machine2 and edit the tnsnames.ora file as follows:
TEST =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = TCP)(HOST = MACHINE1)(PORT = 1521))
    )
    (CONNECT_DATA =
      (SERVICE_NAME = test)
    )
  )
TEST_STBY =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = TCP)(HOST = MACHINE2)(PORT = 1521))
    )
    (CONNECT_DATA =
      (SERVICE_NAME = test_stby)
    )
  )
Start the listener and check tnsping on both Virtual Machines to both services
Open up a new Windows command prompt on both the primary and standby Virtual Machines and run the following statements:
C:\Users\DBAdmin>tnsping test
TNS Ping Utility for 64-bit Windows: Version 11.2.0.1.0 - Production on 14-NOV-2013 06:29:08
Copyright (c) 1997, 2010, Oracle. All rights reserved.
Used parameter files: C:\OracleDatabase\product\11.2.0\dbhome_1\network\admin\sqlnet.ora
Used TNSNAMES adapter to resolve the alias
Attempting to contact (DESCRIPTION = (ADDRESS_LIST = (ADDRESS = (PROTOCOL = TCP)(HOST = MACHINE1)(PORT = 1521))) (CONNECT_DATA = (SERVICE_NAME = test)))
OK (0 msec)
C:\Users\DBAdmin>tnsping test_stby
TNS Ping Utility for 64-bit Windows: Version 11.2.0.1.0 - Production on 14-NOV-2013 06:29:16
Copyright (c) 1997, 2010, Oracle. All rights reserved.
Used parameter files: C:\OracleDatabase\product\11.2.0\dbhome_1\network\admin\sqlnet.ora
Used TNSNAMES adapter to resolve the alias
Attempting to contact (DESCRIPTION = (ADDRESS_LIST = (ADDRESS = (PROTOCOL = TCP)(HOST = MACHINE2)(PORT = 1521))) (CONNECT_DATA = (SERVICE_NAME = test_stby)))
OK (260 msec)
Start up the standby instance in nomount state
This step sets up the environment to support the standby database on the standby Virtual Machine (MACHINE2).
First, copy the password file from the primary machine (Machine1) to the standby machine (Machine2) manually. This is necessary because the SYS password must be identical on both machines.
Then, open the Windows command prompt in Machine2 and set up the environment variables to point to the standby database as follows:
SET ORACLE_HOME=C:\OracleDatabase\product\11.2.0\dbhome_1
SET ORACLE_SID=TEST_STBY
Next, start the standby database in nomount state and then generate an spfile. Start the database:
SQL> shutdown immediate;
SQL> startup nomount
ORACLE instance started.
Total System Global Area  747417600 bytes
Fixed Size                  2179496 bytes
Variable Size             473960024 bytes
Database Buffers          264241152 bytes
Redo Buffers                7036928 bytes
Use RMAN to clone the database and to create a standby database
You can use the Recovery Manager utility (RMAN) to take any backup copy of the primary database to create the physical standby database.
Remote desktop to the standby Virtual Machine (MACHINE2) and run the RMAN utility by specifying a full connection string for both the TARGET (primary database, Machine1) and AUXILIARY (standby database, Machine2) instances.
IMPORTANT
Do not use operating system authentication, as there is no database on the standby server machine yet.
C:\> RMAN TARGET sys/password@test AUXILIARY sys/password@test_STBY
RMAN> DUPLICATE TARGET DATABASE FOR STANDBY FROM ACTIVE DATABASE DORECOVER NOFILENAMECHECK;
Start the physical standby database in managed recovery mode
This tutorial demonstrates how to create a physical standby database. For information on creating a logical standby database, see the Oracle documentation.
Open up a SQL*Plus command prompt and enable Data Guard on the standby Virtual Machine or server (MACHINE2) as follows:
SHUTDOWN IMMEDIATE;
STARTUP MOUNT;
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM SESSION;
When you open the standby database in MOUNT mode, archive log shipping continues and the managed recovery process keeps applying logs on the standby database. This ensures that the standby database remains up-to-date with the primary database. Note that the standby database is not accessible for reporting purposes during this time.
When you open the standby database in READ ONLY mode, archive log shipping continues, but the managed recovery process stops. This causes the standby database to become increasingly out of date until the managed recovery process is resumed. You can access the standby database for reporting purposes during this time, but data may not reflect the latest changes.
In general, we recommend that you keep the standby database in MOUNT mode to keep the data in the standby database up-to-date if there is a failure of the primary database. However, you can keep the standby database in READ ONLY mode for reporting purposes depending on your application’s requirements. The following steps demonstrate how to enable Data Guard in read-only mode using SQL*Plus:
SHUTDOWN IMMEDIATE;
STARTUP MOUNT;
ALTER DATABASE OPEN READ ONLY;
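Before you move on to the verification steps, you can optionally confirm from a SQL*Plus session on the standby (MACHINE2) that the managed recovery process (MRP) is running and applying redo. This quick check is not part of the original walkthrough; it queries the standard v$managed_standby view, and the process list and sequence numbers shown are illustrative:
-- Optional check: confirm the managed recovery process is running on the standby.
SQL> select process, status, sequence# from v$managed_standby;

PROCESS   STATUS        SEQUENCE#
--------- ------------ ----------
ARCH      CONNECTED             0
RFS       IDLE                 48
MRP0      APPLYING_LOG         48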
Verify the physical standby database
This section demonstrates how to verify the high availability configuration as an administrator.
Open up a SQL*Plus command prompt window and check the archived redo logs on the standby Virtual Machine (Machine2):
SQL> show parameters db_unique_name;
NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
db_unique_name                       string      TEST_STBY
SQL> SELECT NAME FROM V$DATABASE;
SQL> SELECT SEQUENCE#, FIRST_TIME, NEXT_TIME, APPLIED FROM V$ARCHIVED_LOG ORDER BY SEQUENCE#;
SEQUENCE# FIRST_TIM NEXT_TIM  APPLIED
--------- --------- --------- ---------
       45 23-FEB-14 23-FEB-14 YES
       45 23-FEB-14 23-FEB-14 NO
       46 23-FEB-14 23-FEB-14 NO
       46 23-FEB-14 23-FEB-14 YES
       47 23-FEB-14 23-FEB-14 NO
       47 23-FEB-14 23-FEB-14 NO
Open up a SQL*Plus command prompt window and switch log files on the primary machine (Machine1):
SQL> alter system switch logfile;
System altered.
SQL> archive log list
Database log mode              Archive Mode
Automatic archival             Enabled
Archive destination            C:\OracleDatabase\archive
Oldest online log sequence     69
Next log sequence to archive   71
Current log sequence           71
Check the archived redo logs on the standby Virtual Machine (Machine2):
SQL> SELECT SEQUENCE#, FIRST_TIME, NEXT_TIME, APPLIED FROM V$ARCHIVED_LOG ORDER BY SEQUENCE#;
SEQUENCE# FIRST_TIM NEXT_TIM  APPLIED
--------- --------- --------- ---------
       45 23-FEB-14 23-FEB-14 YES
       46 23-FEB-14 23-FEB-14 YES
       47 23-FEB-14 23-FEB-14 YES
       48 23-FEB-14 23-FEB-14 YES
       49 23-FEB-14 23-FEB-14 YES
       50 23-FEB-14 23-FEB-14 IN-MEMORY
Check for any gap on the standby Virtual Machine (Machine2):
SQL> SELECT * FROM V$ARCHIVE_GAP;
no rows selected.
Another verification method is to fail over to the standby database and then test whether it is possible to fail back to the primary database. To activate the standby database as a primary database, use the following statements:
SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE FINISH;
SQL> ALTER DATABASE ACTIVATE STANDBY DATABASE;
If you have not enabled flashback on the original primary database, it’s recommended that you drop the original primary database and re-create it as a standby database. We recommend that you enable flashback database on both the primary and the standby databases. When a failover happens, the primary database can be flashed back to the time before the failover and quickly converted to a standby database.
Additional Resources
Oracle Virtual Machine images for Azure
Configuring Oracle GoldenGate for Azure
3/27/2017 • 17 min to read • Edit Online
This tutorial demonstrates how to set up Oracle GoldenGate in an Azure Virtual Machines environment for high availability and disaster recovery. The tutorial focuses on bi-directional replication for non-RAC Oracle databases and requires that both sites are active.
Oracle GoldenGate supports data distribution and data integration. It enables you to set up a data distribution and data synchronization solution through the Oracle-Oracle replication configuration, and it provides a flexible high availability solution. Oracle GoldenGate supplements Oracle Data Guard with its replication capabilities to enable enterprise-wide information distribution and zero-downtime upgrades and migrations. For detailed information, see Using Oracle GoldenGate with Oracle Data Guard.
Oracle GoldenGate contains the following main components: Extract, Data pump, Replicat, Trails or extract files, Checkpoints, Manager, and Collector. To have bi-directional replication between two sites, you need to create and start all components on both sites. For detailed information on the Oracle GoldenGate architecture, see the Oracle GoldenGate Guide.
This tutorial assumes that you already have theoretical and practical knowledge of Oracle Database high availability and disaster recovery concepts as well as Oracle GoldenGate. For more information, see the Oracle web site.
In addition, the tutorial assumes that you have already implemented the following prerequisites:
You’ve already reviewed the High Availability and Disaster Recovery Considerations section in the Oracle Virtual Machine images - Miscellaneous Considerations topic. Note that Azure supports standalone Oracle Database instances but not Oracle Real Application Clusters (Oracle RAC) currently.
You’ve downloaded the Oracle GoldenGate software from the Oracle Downloads web site. You’ve selected the Product Pack Oracle Fusion Middleware – Data Integration. Then, you’ve selected Oracle GoldenGate on Oracle v11.2.1 Media Pack for Microsoft Windows x64 (64-bit) for an Oracle 11g database. Next, you’ve downloaded Oracle GoldenGate V11.2.1.0.3 for Oracle 11g 64bit on Windows 2008 (64bit).
You have created two Virtual Machines (VMs) in Azure using Oracle Enterprise Edition on Windows Server. Make sure that the Virtual Machines are in the same cloud service and in the same Virtual Network to ensure they can access each other over the persistent private IP address.
You’ve set the Virtual Machine names as “MachineGG1” for Site A and “MachineGG2” for Site B at the Azure classic portal.
You’ve created test databases “TestGG1” on Site A and “TestGG2” on Site B.
You log on to your Windows server as a member of the Administrators group or a member of the ORA_DBA group.
In this tutorial, you will:
1. Setup database on Site A and Site B
   a. Perform initial data load
2. Prepare Site A and Site B for database replication
3. Create all necessary objects to support DDL Replication
4. Configure GoldenGate Manager on Site A and Site B
5. Create Extract Group and Data Pump processes on Site A and Site B
   a. Create Extract and Data Pump processes on Site A
   b. Create a GoldenGate checkpoint table on Site B
   c. Add REPLICAT on Site B
   d. Create Extract and Data Pump processes on Site B
   e. Create a GoldenGate checkpoint table on Site A
   f. Add REPLICAT on Site A
   g. Add trandata on Site A and Site B
   h. Start Extract and Data Pump processes on Site A
   i. Start Extract and Data Pump processes on Site B
   j. Start REPLICAT process on Site A
   k. Start REPLICAT process on Site B
6. Verify the bi-directional replication process
IMPORTANT
This tutorial has been set up and tested against the following software configuration:
                      SITE A DATABASE                    SITE B DATABASE
Oracle Release        Oracle11g Release 2 – (11.2.0.1)   Oracle11g Release 2 – (11.2.0.1)
Machine Name          MachineGG1                         MachineGG2
Operating System      Windows 2008 R2                    Windows 2008 R2
Oracle SID            TESTGG1                            TESTGG2
Replication Schema    SCOTT                              SCOTT
For subsequent releases of Oracle Database and Oracle GoldenGate, there might be some additional changes that you need to implement. For the most up-to-date version-specific information, see the Oracle GoldenGate and Oracle Database documentation at the Oracle web site. For example, for a release 11.2.0.4 source database and later, the capture of DDL is performed by the logmining server asynchronously and requires no special triggers, tables, or other database objects to be installed. Oracle GoldenGate upgrades can be performed without stopping user applications. The use of a DDL trigger and supporting objects is required when Extract is in integrated mode with an Oracle 11g source database that is earlier than version 11.2.0.4.
For detailed guidance, see Installing and Configuring Oracle GoldenGate for Oracle Database.
1. Setup database on Site A and Site B
This section explains how to perform the database prerequisites on both Site A and Site B. You must perform all the steps of this section on both sites: Site A and Site B.
First, remote desktop to Site A and Site B via the Azure classic portal. Open up a Windows command prompt and create a home directory for the Oracle GoldenGate setup files:
mkdir C:\OracleGG
Then, unzip and install the Oracle GoldenGate software in this folder. After this step, you can initiate the GoldenGate Software Command Interpreter (GGSCI) by executing the following command:
C:\OracleGG\.\ggsci
You can use GGSCI to run several commands that configure, control, and monitor Oracle GoldenGate.
Next, run the following command to create all sub-folders from the installation package:
GGSCI (Hostname) 1> CREATE SUBDIRS
Run the following command to exit the GGSCI command prompt:
GGSCI (Hostname) 1> EXIT
Then, you need to create a database user, which will be used by the Oracle GoldenGate Manager, Extract, and Replicat processes. Note that you can create individual users for each process or configure only one common user. In this tutorial, we create one user, which is called ggate. Then, we grant that user the necessary privileges. Note that you must perform the following operations on Site A and Site B.
Open up a SQL*Plus command window on Site A and Site B with database administrator privileges using SYSDBA, such as:
Enter username: / as sysdba
And run:
SQL> create tablespace ggs_data datafile 'c:\OracleDatabase\oradata\<DBNAME>\<DBNAME>ggs_data01.dbf' size 200m;
SQL> create user ggate identified by ggate default tablespace ggs_data temporary tablespace temp;
grant connect, resource to ggate;
grant select any dictionary, select any table to ggate;
grant create table to ggate;
grant flashback any table to ggate;
grant execute on dbms_flashback to ggate;
grant execute on utl_file to ggate;
grant create any table to ggate;
grant insert any table to ggate;
grant update any table to ggate;
grant delete any table to ggate;
grant drop any table to ggate;
Next, locate the INIT<DatabaseSID>.ORA file in the %ORACLE_HOME%\database folder on Site A and Site B and append the following database parameters to it:
UNDO_MANAGEMENT=AUTO
UNDO_RETENTION=86400
For a full list of all Oracle GoldenGate GGSCI commands, see Reference for Oracle GoldenGate for Windows.
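If you want to double-check the new account before moving on, a quick query of the standard data dictionary views shows the roles and system privileges granted to ggate. This check is optional and not part of the original steps; the output simply reflects whatever grants you ran above:
-- Optional check: list roles and system privileges granted to the ggate user.
SQL> select granted_role from dba_role_privs where grantee = 'GGATE';
SQL> select privilege from dba_sys_privs where grantee = 'GGATE' order by privilege;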
Perform initial data load
You can perform the initial data load in the database by following several methods. For example, you can use the Oracle GoldenGate Direct Load or the regular Export and Import utilities to export table data from Site A to Site B. To demonstrate the Oracle GoldenGate replication process, this tutorial demonstrates creating a table on both Site A and Site B by using the following commands.
First, open up a SQL*Plus command window and run the following command to create an inventory table on the Site A and Site B databases:
create table scott.inventory (prod_id number, prod_category varchar2(20), qty_in_stock number, last_dml timestamp default systimestamp);
Next, add a constraint to the newly created table on the Site A and Site B databases:
alter table scott.inventory add constraint pk_inventory primary key (prod_id);
Then, grant all privileges on the new inventory table to the user ggate on Site A and Site B:
grant all on scott.inventory to ggate;
Next, create and enable a database trigger, INVENTORY_CDR_TRG, on the newly created table to make sure that all transactions to the new table are recorded if the user is not ggate. Perform this operation on Site A and Site B.
CREATE OR REPLACE TRIGGER INVENTORY_CDR_TRG
BEFORE UPDATE ON SCOTT.INVENTORY
REFERENCING NEW AS New OLD AS Old
FOR EACH ROW
BEGIN
  IF SYS_CONTEXT ('USERENV', 'SESSION_USER') != 'GGATE' THEN
    :NEW.LAST_DML := SYSTIMESTAMP;
  END IF;
END;
/
2. Prepare Site A and Site B for database replication
This section explains how to prepare Site A and Site B for database replication. You must perform all the steps of this section on both sites: Site A and Site B.
First, remote desktop to Site A and Site B via the Azure classic portal.
Switch the database to archivelog mode using the SQL*Plus command window:
sql>shutdown immediate
sql>startup mount
sql>alter database archivelog;
sql>alter database open;
Then, enable minimal supplemental logging as follows:
SQL> ALTER DATABASE ADD SUPPLEMENTAL LOG DATA (ALL) COLUMNS;
Next, prepare the database to support DDL (data definition language) replication:
SQL> alter system set recyclebin=off scope=spfile;
Then, shut down and restart the database:
sql>shutdown immediate
sql>startup
3. Create all necessary objects to support DDL Replication
This section lists the scripts that you need to use to create all necessary objects to support DDL replication. You need to run the scripts specified in this section on both Site A and Site B.
Open up a Windows command prompt and navigate to the Oracle GoldenGate folder, such as C:\OracleGG. Start a SQL*Plus command prompt with database administrator privileges, such as using SYSDBA, on Site A and Site B. Then, run the following scripts:
SQL> @marker_setup.sql
Enter GoldenGate schema name: ggate
SQL> @ddl_setup.sql
Enter GoldenGate schema name: ggate
SQL> @role_setup.sql
Enter GoldenGate schema name: ggate
SQL> grant ggs_ggsuser_role to ggate;
Grant succeeded.
SQL> @ddl_enable
Trigger altered.
SQL> @ddl_pin ggate
The Oracle GoldenGate tool requires table-level logging for DDL (data definition language) support. That’s why you enable supplemental logging at the table level by using the ADD TRANDATA command. Open up the Oracle GoldenGate command interpreter window, log in to the database, and then run the ADD TRANDATA command:
GGSCI 5> DBLOGIN USERID ggate, PASSWORD ggate
GGSCI (Hostname) 6> add trandata scott.inventory
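Before configuring the Manager process, you can optionally confirm that the earlier preparation steps took effect: archiving and database-level supplemental logging are on, and the DDL-support objects created by the setup scripts now exist in the ggate schema. This is an optional check rather than part of the original procedure, and it uses only standard dictionary views; the exact object counts you see will depend on your installation:
-- Optional check: database-level logging settings and the DDL-support objects owned by ggate.
SQL> select log_mode, supplemental_log_data_min, supplemental_log_data_all from v$database;
SQL> select object_type, count(*) from dba_objects where owner = 'GGATE' group by object_type;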
4. Configure GoldenGate Manager on Site A and Site B
The Oracle GoldenGate Manager performs a number of functions, such as starting the other GoldenGate processes, trail log file management, and reporting. You need to configure the Oracle GoldenGate Manager process on both Site A and Site B. To do this, perform the following steps on Site A and Site B.
Open a Windows command window and initiate the Oracle GoldenGate command interpreter:
cd C:\OracleGG\
c:\OracleGG>ggsci
Oracle GoldenGate Command Interpreter for Oracle
Version 11.2.1.0.3 14400833 OGGCORE_11.2.1.0.3_PLATFORMS_120823.1258
Windows x64 (optimized), Oracle 11g on Aug 23 2012 16:50:36
Copyright (C) 1995, 2012, Oracle and/or its affiliates. All rights reserved.
Log the GGSCI session into the database so that you can execute commands that affect the database:
GGSCI (HostName) 1> DBLOGIN USERID ggate, PASSWORD ggate
Successfully logged into database.
Display the status and lag (where relevant) for all Manager, Extract, and Replicat processes on a system:
GGSCI (HostName) 2> info all
Program   Status    Group   Lag   Time Since Chkpt
MANAGER   STOPPED
Open the parameter file using the EDIT PARAMS command and then append the following information:
GGSCI (HostName) 3> edit params mgr
PORT 7809
USERID ggate, PASSWORD ggate
PURGEOLDEXTRACTS C:\OracleGG\dirdat\ex, USECHECKPOINTS
Display the status and lag (where relevant) for all Manager, Extract, and Replicat processes on a system:
GGSCI (HostName) 46> info all
Program   Status    Group   Lag   Time Since Chkpt
MANAGER   STOPPED
Log the GGSCI session into the database so that you can execute commands that affect the database:
GGSCI (HostName) 47> dblogin USERID ggate, PASSWORD ggate
Successfully logged into database.
Start the Manager process:
GGSCI (HostName) 48> start manager
Manager started.
5. Create Extract Group and Data Pump processes on Site A and Site B
Create Extract and Data Pump processes on Site A
You need to create the Extract and Data Pump processes on Site A and Site B. Remote desktop to Site A and Site B via the Azure classic portal. Open up the GGSCI command interpreter window. Run the following commands on Site A:
GGSCI (MachineGG1) 14> add extract ext1 tranlog begin now
EXTRACT added.
GGSCI (MachineGG1) 4> add exttrail C:\OracleGG\dirdat\lt, extract ext1
EXTTRAIL added.
GGSCI (MachineGG1) 16> add extract dpump1 exttrailsource C:\OracleGG\dirdat\aa
EXTRACT added.
GGSCI (MachineGG1) 17> add rmttrail C:\OracleGG\dirdat\ab extract dpump1
RMTTRAIL added.
Open the parameter file using the EDIT PARAMS command and then append the following information:
GGSCI (MachineGG1) 18> edit params ext1
EXTRACT ext1
USERID ggate, PASSWORD ggate
EXTTRAIL C:\OracleGG\dirdat\aa
TRANLOGOPTIONS EXCLUDEUSER ggate
TABLE scott.inventory, GETBEFORECOLS (
  ON UPDATE KEYINCLUDING (prod_category,qty_in_stock, last_dml),
  ON DELETE KEYINCLUDING (prod_category,qty_in_stock, last_dml));
Open the parameter file using the EDIT PARAMS command and then append the following information:
GGSCI (MachineGG1) 15> edit params dpump1
EXTRACT dpump1
USERID ggate, PASSWORD ggate
RMTHOST ActiveGG2orcldb, MGRPORT 7809, TCPBUFSIZE 100000
RMTTRAIL C:\OracleGG\dirdat\ab
PASSTHRU
TABLE scott.inventory;
Create a GoldenGate checkpoint table on Site B
Next, you need to add a checkpoint table on Site B. To do this, open up a GoldenGate command interpreter window and run:
C:\OracleGG\ggsci
GGSCI (MachineGG2) 1> DBLOGIN USERID ggate, PASSWORD ggate
Successfully logged into database.
And then, add the checkpoint table to the database, where ggate is the owner:
GGSCI (MachineGG2) 2> ADD CHECKPOINTTABLE ggate.checkpointtable
Successfully created checkpoint table ggate.checkpointtable.
Add the name of the checkpoint table to the GLOBALS file on the target server, which is Site B in this step.
Edit the GLOBALS file on Site B:
GGSCI (MachineGG2) 1> EDIT PARAMS ./GLOBALS
Then, append the CHECKPOINTTABLE parameter to the existing GLOBALS file:
GGSCHEMA ggate
CHECKPOINTTABLE ggate.checkpointtable
As a final step, save and close the GLOBALS parameter file.
Add REPLICAT on Site B
This section describes how to add a REPLICAT process “REP2” on Site B. Use the ADD REPLICAT command to create a Replicat group on Site B:
GGSCI (MachineGG2) 37> add replicat rep2 exttrail C:\OracleGG\dirdat\ab, checkpointtable ggate.checkpointtable
Open the parameter file using the EDIT PARAMS command and then append the following information:
GGSCI (MachineGG2) 10> edit params rep2
REPLICAT rep2
ASSUMETARGETDEFS
USERID ggate, PASSWORD ggate
DISCARDFILE C:\OracleGG\dirdat\discard.txt, append, megabytes 10
MAP scott.inventory, TARGET scott.inventory;
Create Extract and Data Pump processes on Site B
This section describes how to create a new Extract process “EXT2” and a new Data Pump process “DPUMP2” on Site B:
GGSCI (MachineGG2) 3> add extract ext2 tranlog begin now
EXTRACT added.
GGSCI (MachineGG2) 4> add exttrail C:\OracleGG\dirdat\ac extract ext2
EXTTRAIL added.
GGSCI (MachineGG2) 5> add extract dpump2 exttrailsource C:\OracleGG\dirdat\ac
EXTRACT added.
GGSCI (MachineGG2) 6> add rmttrail C:\OracleGG\dirdat\ad extract dpump2
RMTTRAIL added.
Open the parameter file using the EDIT PARAMS command and then append the following information:
GGSCI (MachineGG2) 31> edit params ext2
EXTRACT ext2
USERID ggate, PASSWORD ggate
EXTTRAIL C:\OracleGG\dirdat\ac
TRANLOGOPTIONS EXCLUDEUSER ggate
TABLE scott.inventory, GETBEFORECOLS (
  ON UPDATE KEYINCLUDING (prod_category,qty_in_stock, last_dml),
  ON DELETE KEYINCLUDING (prod_category,qty_in_stock, last_dml));
Open the parameter file using the EDIT PARAMS command and then append the following information:
GGSCI (MachineGG2) 32> edit params dpump2
EXTRACT dpump2
USERID ggate, PASSWORD ggate
RMTHOST MachineGG1, MGRPORT 7809, TCPBUFSIZE 100000
RMTTRAIL C:\OracleGG\dirdat\ad
PASSTHRU
TABLE scott.inventory;
Create a GoldenGate checkpoint table on Site A
Open up the Oracle GoldenGate command interpreter window and create a checkpoint table:
GGSCI (MachineGG1) 1> DBLOGIN USERID ggate, PASSWORD ggate
Successfully logged into database.
GGSCI (MachineGG1) 2> ADD CHECKPOINTTABLE ggate.checkpointtable
Successfully created checkpoint table ggate.checkpointtable.
You also need to add the name of the checkpoint table to the GLOBALS file on Site A. Open up the Oracle GoldenGate command interpreter window and edit the GLOBALS file on Site A:
GGSCI (MachineGG1) 1> EDIT PARAMS ./GLOBALS
Add the CHECKPOINTTABLE parameter to the existing GLOBALS file as follows:
GGSCHEMA ggate
CHECKPOINTTABLE ggate.checkpointtable
Save and close the GLOBALS parameter file.
Add REPLICAT on Site A
This section describes how to add a REPLICAT process “REP1” on Site A. The following command creates a Replicat group rep1 with the name of a trail and the associated checkpoint table:
GGSCI (MachineGG1) 21> add replicat rep1 exttrail C:\OracleGG\dirdat\ad, checkpointtable ggate.checkpointtable
REPLICAT added.
Open the parameter file using the EDIT PARAMS command and then append the following information:
GGSCI (MachineGG1) 10> edit params rep1
REPLICAT rep1
ASSUMETARGETDEFS
USERID ggate, PASSWORD ggate
DISCARDFILE C:\OracleGG\dirdat\discard.txt, append, megabytes 10
MAP scott.inventory, TARGET scott.inventory;
Add trandata on Site A and Site B
Enable supplemental logging at the table level by using the ADD TRANDATA command.
Open up the Oracle GoldenGate command interpreter window, log in to the database, and then run the ADD TRANDATA command.
Remote desktop to MachineGG1, open up the Oracle GoldenGate command interpreter, and run:
GGSCI (MachineGG1) 11> dblogin userid ggate password ggate
Successfully logged into database.
GGSCI (MachineGG1) 12> add trandata scott.inventory cols (prod_category,qty_in_stock, last_dml)
GGSCI (MachineGG1) 13> info trandata scott.inventory
Logging of supplemental redo log data is enabled for table SCOTT.INVENTORY.
Columns supplementally logged for table SCOTT.INVENTORY: PROD_ID, PROD_CATEGORY, QTY_IN_STOCK, LAST_DML.
Remote desktop to MachineGG2, open up the Oracle GoldenGate command interpreter, and run:
GGSCI (MachineGG2) 18> dblogin userid ggate password ggate
Successfully logged into database.
GGSCI (MachineGG2) 14> add trandata scott.inventory cols (prod_category,qty_in_stock, last_dml)
Logging of supplemental redo data enabled for table SCOTT.INVENTORY.
Display information about the state of table-level supplemental logging:
GGSCI (MachineGG2) 15> info trandata scott.inventory
Logging of supplemental redo log data is enabled for table SCOTT.INVENTORY.
Columns supplementally logged for table SCOTT.INVENTORY: PROD_ID, PROD_CATEGORY, QTY_IN_STOCK, LAST_DML.
Start Extract and Data Pump processes on Site A
Start the Extract process ext1 on Site A:
GGSCI (MachineGG1) 31> start extract ext1
Sending START request to MANAGER …
EXTRACT EXT1 starting
Start the data pump process dpump1 on Site A:
GGSCI (MachineGG1) 23> start extract dpump1
Sending START request to MANAGER …
EXTRACT DPUMP1 starting
Display information about the Extract group ext1:
GGSCI (MachineGG1) 32> info extract ext1
EXTRACT    EXT1    Last Started 2013-11-25 08:03   Status RUNNING
Checkpoint Lag       00:00:00 (updated 00:00:02 ago)
Log Read Checkpoint  Oracle Redo Logs
                     2013-11-25 08:03:18  Seqno 6, RBA 3230720
                     SCN 0.1074371 (1074371)
Display the status and lag (where relevant) for all Manager, Extract, and Replicat processes on a system:
GGSCI (MachineGG1) 16> info all
Program   Status    Group    Lag at Chkpt   Time Since Chkpt
MANAGER   RUNNING
EXTRACT   RUNNING   DPUMP1   00:00:00       00:46:33
EXTRACT   RUNNING   EXT1     00:00:00       00:00:04
Start Extract and Data Pump processes on Site B
Start the Extract process ext2 on Site B:
GGSCI (MachineGG2) 22> start extract ext2
Sending START request to MANAGER …
EXTRACT EXT2 starting
Start the data pump process dpump2 on Site B:
GGSCI (MachineGG2) 23> start extract dpump2
Sending START request to MANAGER …
EXTRACT DPUMP2 starting
Display the status and lag (where relevant) for all Manager, Extract, and Replicat processes on a system:
GGSCI (ActiveGG2orcldb) 6> info all
Program   Status    Group    Lag at Chkpt   Time Since Chkpt
MANAGER   RUNNING
EXTRACT   RUNNING   DPUMP2   00:00:00       136:13:33
EXTRACT   RUNNING   EXT2     00:00:00       00:00:04
Start REPLICAT process on Site A
This section describes how to start the REPLICAT process “REP1” on Site A.
Start the Replicat process on Site A:
GGSCI (MachineGG1) 38> start replicat rep1
Sending START request to MANAGER …
REPLICAT REP1 starting
Display the status of a Replicat group:
GGSCI (MachineGG1) 39> status replicat rep1
REPLICAT REP1: RUNNING
Start REPLICAT process on Site B
This section describes how to start the REPLICAT process “REP2” on Site B.
Start the Replicat process on Site B:
GGSCI (MachineGG2) 26> start replicat rep2
Sending START request to MANAGER …
REPLICAT REP2 starting
Display the status of a Replicat group:
GGSCI (MachineGG2) 27> status replicat rep2
REPLICAT REP2: RUNNING
6. Verify the bi-directional replication process
To verify the Oracle GoldenGate configuration, insert a row into the database at Site A. Remote desktop to Site A. Open up a SQL*Plus command window and run:
SQL> select name from v$database;
NAME
---------
TESTGG
SQL> insert into inventory values (100,'TV',100,sysdate);
1 row created.
SQL> commit;
Commit complete.
Then, check if that row is replicated on Site B. To do this, remote desktop to Site B. Open up a SQL*Plus window and run:
SQL> select name from v$database;
NAME
---------
TESTGG
SQL> select * from inventory;
PROD_ID  PROD_CATEGORY  QTY_IN_STOCK  LAST_DML
-------  -------------  ------------  ---------
    100  TV                      100  22-MAR-13
Insert a new record at Site B:
SQL> insert into inventory values (101,'DVD',10,sysdate);
1 row created.
SQL> commit;
Commit complete.
Remote desktop to Site A and check if the replication has taken place:
SQL> select * from inventory;
PROD_ID  PROD_CATEGORY  QTY_IN_STOCK  LAST_DML
-------  -------------  ------------  ---------
    100  TV                      100  22-MAR-13
    101  DVD                      10  22-MAR-13
Additional Resources
Oracle Virtual Machine images for Azure
Miscellaneous considerations for Oracle virtual machine images
4/12/2017 • 8 min to read • Edit Online
This article covers considerations for Oracle virtual machines in Azure, which are based on Oracle software images provided by Oracle.
Oracle Database virtual machine images
Oracle WebLogic Server virtual machine images
Oracle JDK virtual machine images
Oracle Database virtual machine images
No static internal IP
Azure assigns each virtual machine an internal IP address. Unless the virtual machine is part of a virtual network, the IP address of the virtual machine is dynamic and might change after the virtual machine restarts. This can cause issues because the Oracle Database expects the IP address to be static. To avoid the issue, consider adding the virtual machine to an Azure Virtual Network. See Virtual Network and Create a virtual network in Azure for more information.
Attached disk configuration options
Attached disks rely on the Azure Blob storage service. Each standard disk is capable of a theoretical maximum of approximately 500 input/output operations per second (IOPS). Our premium disk offering is preferred for high-performance database workloads and can achieve up to 5000 IOPS per disk. While you can use a single disk if that meets your performance needs, you can improve the effective IOPS performance if you use multiple attached disks, spread database data across them, and then use Oracle Automatic Storage Management (ASM). See Oracle Automatic Storage overview for more information. Although it is possible to use striping of multiple disks at the operating system level, there are trade-offs you make using either of these routes. Consider two different approaches for attaching multiple disks based on whether you want to prioritize the performance of read operations or write operations for your database: Oracle ASM on its own is likely to result in better write operation performance, but worse IOPS for read operations as compared to the approach using disk arrays.
IMPORTANT
Evaluate the trade-off between write performance and read performance on a case-by-case basis. Your actual results can vary, so do proper testing. ASM favors write operations; operating system disk striping favors read operations.
Clustering (RAC) is not supported
Oracle Real Application Clusters (RAC) is designed to mitigate the failure of a single node in an on-premises multi-node cluster configuration. It relies on two on-premises technologies which do not work in hyper-scale public cloud environments like Microsoft Azure: network multicast and shared disk. If you want to architect a geo-redundant multi-node configuration of Oracle DB, you will need to implement data replication with Oracle Data Guard.
High availability and disaster recovery considerations
When using Oracle Database in Azure virtual machines, you are responsible for implementing a high availability and disaster recovery solution to avoid any downtime. You are also responsible for backing up your own data and application. High availability and disaster recovery for Oracle Database Enterprise Edition (without RAC) on Azure can be achieved using Data Guard, Active Data Guard, or Oracle GoldenGate, with two databases in two separate virtual machines. Both virtual machines should be in the same virtual network to ensure they can access each other over the private persistent IP address. Additionally, we recommend placing the virtual machines in the same availability set to allow Azure to place them into separate fault domains and upgrade domains. Each virtual machine must have at least 2 GB of memory and 5 GB of disk space.
With Oracle Data Guard, high availability can be achieved with a primary database in one virtual machine, a secondary (standby) database in another virtual machine, and one-way replication set up between them. The result is read access to the copy of the database. With Oracle GoldenGate, you can configure bi-directional replication between the two databases. To learn how to set up a high-availability solution for your databases using these tools, see the Active Data Guard and GoldenGate documentation at the Oracle website. If you need read-write access to the copy of the database, you can use Oracle Active Data Guard.
Oracle WebLogic Server virtual machine images
Clustering is supported on Enterprise Edition only. You are licensed to use WebLogic clustering only when using the Enterprise Edition of WebLogic Server. Do not use clustering with WebLogic Server Standard Edition.
Connection timeouts: If your application relies on connections to public endpoints of another Azure cloud service (for example, a database tier service), Azure might close these open connections after four minutes of inactivity. This might affect features and applications relying on connection pools, because connections that are inactive for more than that limit might no longer remain valid. If this affects your application, consider enabling "keep-alive" logic on your connection pools. If an endpoint is internal to your Azure cloud service deployment (such as a standalone database virtual machine within the same cloud service as your WebLogic virtual machines), then the connection is direct and does not rely on the Azure load balancer, and therefore is not subject to a connection timeout.
UDP multicast is not supported. Azure supports UDP unicasting, but not multicasting or broadcasting. WebLogic Server is able to rely on Azure UDP unicast capabilities. For best results relying on UDP unicast, we recommend that the WebLogic cluster size be kept static, or be kept with no more than 10 managed servers included in the cluster.
WebLogic Server expects public and private ports to be the same for T3 access (for example, when using Enterprise JavaBeans). Consider a multi-tier scenario where a service layer (EJB) application is running on a WebLogic Server cluster consisting of two or more managed servers, in a cloud service named SLWLS. The client tier is in a different cloud service, running a simple Java program trying to call EJB in the service layer. Because it is necessary to load balance the service layer, a public load-balanced endpoint needs to be created for the Virtual Machines in the WebLogic Server cluster. If the private port that you specify for that endpoint is different from the public port (for example, 7006:7008), an error such as the following occurs:
[java] javax.naming.CommunicationException [Root exception is java.net.ConnectException: t3://example.cloudapp.net:7006: Bootstrap to: example.cloudapp.net/138.91.142.178:7006' over: 't3' got an error or timed out]
This is because for any remote T3 access, WebLogic Server expects the load balancer port and the WebLogic managed server port to be the same. In the above case, the client is accessing port 7006 (the load balancer port) and the managed server is listening on 7008 (the private port). This restriction is applicable only for T3 access, not HTTP.
To avoid this issue, use one of the following workarounds:
Use the same private and public port numbers for load balanced endpoints dedicated to T3 access.
Include the following JVM parameter when starting WebLogic Server:
-Dweblogic.rjvm.enableprotocolswitch=true
For related information, see KB article 860340.1 at http://support.oracle.com.
Dynamic clustering and load balancing limitations. Suppose you want to use a dynamic cluster in WebLogic Server and expose it through a single, public load-balanced endpoint in Azure. This can be done as long as you use a fixed port number for each of the managed servers (not dynamically assigned from a range) and do not start more managed servers than there are machines the administrator is tracking (that is, no more than one managed server per virtual machine). If your configuration results in more WebLogic servers being started than there are virtual machines (that is, where multiple WebLogic Server instances share the same virtual machine), then it is not possible for more than one of those instances of WebLogic Server to bind to a given port number – the others on that virtual machine will fail.
On the other hand, if you configure the admin server to automatically assign unique port numbers to its managed servers, then load balancing is not possible, because Azure does not support mapping from a single public port to multiple private ports, as would be required for this configuration.
Multiple instances of WebLogic Server on a virtual machine. Depending on your deployment’s requirements, you might consider the option of running multiple instances of WebLogic Server on the same virtual machine, if the virtual machine is large enough. For example, on a medium-size virtual machine, which contains two cores, you could choose to run two instances of WebLogic Server. Note, however, that we still recommend that you avoid introducing single points of failure into your architecture, which would be the case if you used just one virtual machine that is running multiple instances of WebLogic Server. Using at least two virtual machines could be a better approach, and each of those virtual machines could then run multiple instances of WebLogic Server. Each of these instances of WebLogic Server could still be part of the same cluster. Note, however, that it is currently not possible to use Azure to load-balance endpoints that are exposed by such WebLogic Server deployments within the same virtual machine, because the Azure load balancer requires the load-balanced servers to be distributed among unique virtual machines.
Oracle JDK virtual machine images
JDK 6 and 7 latest updates. While we recommend using the latest public, supported version of Java (currently Java 8), Azure also makes JDK 6 and 7 images available. This is intended for legacy applications that are not yet ready to be upgraded to JDK 8. While updates to previous JDK images might no longer be available to the general public, given the Microsoft partnership with Oracle, the JDK 6 and 7 images provided by Azure are intended to contain a more recent non-public update that is normally offered by Oracle to only a select group of Oracle’s supported customers. New versions of the JDK images will be made available over time with updated releases of JDK 6 and 7. The JDK available in these JDK 6 and 7 images, and the virtual machines and images derived from them, can only be used within Azure.
64-bit JDK. The Oracle WebLogic Server virtual machine images and the Oracle JDK virtual machine images provided by Azure contain the 64-bit versions of both Windows Server and the JDK.
Additional resources
Oracle virtual machine images for Azure
List of Oracle virtual machine images for Windows
3/24/2017 • 1 min to read • Edit Online
To create virtual machines based on Oracle images, sign in to the Azure portal, click Marketplace, click Compute, and then type Oracle into the Search box. Select an image and follow the instructions to set up the image on Microsoft Azure. Note that Oracle images by Microsoft on the Azure portal run on Windows, and Oracle images by Oracle run on Oracle Linux.
Windows-based virtual machine images
The following is a list of the available Oracle virtual machine images that run on Windows Server on Azure. These images are pay-as-you-go, meaning that Oracle license fees are included in the usage of these images. Microsoft no longer publishes Oracle Database or WebLogic images in Azure Marketplace. You can still create your own custom image and use the Bring Your Own License model in order to run Oracle software on Microsoft Azure.
Java virtual machine images
JDK 8 on Windows Server 2012 R2
JDK 7 on Windows Server 2012
JDK 6 on Windows Server 2012
Oracle Linux virtual machine images
The following is a list of the available preconfigured Oracle virtual machine images that run on Oracle Linux on Azure. You are expected to bring your own license for these images, as Oracle license fees are not included in the usage of these preconfigured virtual machine images. You can also bring your own license to install and run Oracle software on custom virtual machines on Windows or Linux. Here are complete details on Oracle licensing on Azure. And here are details on creating virtual machines using your own images. To learn about this and other methods of migrating Oracle and other workloads to Azure, see Different ways to create a Windows-based virtual machine.
Oracle Database 12c Enterprise Edition on Oracle Linux
Oracle Database 12c Standard Edition on Oracle Linux
Oracle WebLogic Server 12c Enterprise Edition on Oracle Linux
Oracle Linux 6.4.0.0.0
Oracle Linux 6.7.0.0.0
Oracle Linux 7.0.0.0.0
Oracle Linux 7.2.0.0.0
Additional resources
Oracle virtual machine images - miscellaneous considerations
Using SAP on Windows virtual machines in Azure
3/27/2017 • 4 min to read • Edit Online
Cloud computing is a widely used term that is gaining more and more importance within the IT industry, from small companies up to large and multinational corporations. Microsoft Azure is the cloud services platform from Microsoft that offers a wide spectrum of new possibilities. Now customers are able to rapidly provision and deprovision applications as cloud services, so they are not limited by technical or budgeting restrictions. Instead of investing time and budget into hardware infrastructure, companies can focus on the application, business processes, and the benefits for customers and users.
With Microsoft Azure virtual machines, Microsoft offers a comprehensive Infrastructure as a Service (IaaS) platform. SAP NetWeaver-based applications are supported on Azure Virtual Machines (IaaS). The whitepapers below describe how to plan and implement SAP NetWeaver-based applications on Windows virtual machines in Azure. You can also implement SAP NetWeaver-based applications on Linux virtual machines.
Planning and Implementation
Title: SAP NetWeaver on Azure Virtual Machines – Planning and Implementation Guide
Summary: This is the paper to start with if you are thinking about running SAP NetWeaver in Azure Virtual Machines.
This planning and implementation guide will help you evaluate whether an existing or planned SAP NetWeaver-based system can be deployed to an Azure Virtual Machines environment. It covers multiple SAP NetWeaver deployment scenarios, and includes SAP configurations that are specific to Azure. The paper lists and describes all the necessary configuration information you'll need on the SAP/Azure side to run a hybrid SAP landscape. Measures you can take to ensure high availability of SAP NetWeaver-based systems on IaaS are also covered. Updated: August 2015 Download this guide now Deployment Title: SAP NetWeaver on Azure Virtual Machines – Deployment Guide Summary: This document provides procedural guidance for deploying SAP NetWeaver software to virtual machines in Azure. This paper focuses on three specific deployment scenarios, with an emphasis on enabling the Azure Monitoring Extensions for SAP, and includes troubleshooting recommendations for those extensions. This paper assumes that you've read the planning and implementation guide. Updated: September 2015 Download this guide now SAP DBMS on Azure Title: SAP DBMS in Azure Deployment Guide Summary: This paper covers planning and implementation considerations for the database management systems (DBMSs) that run in conjunction with SAP. The first part lists and presents general considerations. The following parts relate to deployments in Azure of the different DBMSs that are supported by SAP: SQL Server, SAP ASE, Oracle, SAP MaxDB, and IBM DB2 for Linux, UNIX, and Windows. Those parts discuss the considerations you have to account for when running SAP systems on Azure in conjunction with those DBMSs, including the backup and high-availability methods that each DBMS supports on Azure for use with SAP applications. Updated: December 2015 Download this guide now SAP NetWeaver on Azure Title: SAP NetWeaver - Building an Azure based Disaster Recovery Solution Summary: This document provides step-by-step guidance for building an Azure-based disaster recovery solution for SAP NetWeaver. The solution described assumes that the SAP landscape is running virtualized on-premises on Hyper-V. The first part of the document introduces the Azure Site Recovery (ASR) service and its components. The second part describes specifics for SAP NetWeaver-based landscapes, and discusses the possibilities of using ASR with SAP NetWeaver application instances and SAP Central Services. A focus of the second part is leveraging ASR for SAP Central Services that are protected with Windows Server Failover Cluster configurations. Updated: September 2015 Download this guide now SAP NetWeaver on Azure - HA Title: SAP NetWeaver on Azure - Clustering SAP ASCS/SCS Instances using Windows Server Failover Cluster on Azure with SIOS DataKeeper Summary: This document describes how to use SIOS DataKeeper to set up a highly available SAP ASCS/SCS configuration on Azure. SAP protects single-point-of-failure components like SAP ASCS/SCS or Enqueue Replication Services with Windows Server Failover Cluster configurations that require shared disks. These SAP components are essential for the functionality of an SAP system.
Therefore, high-availability functionality needs to be put in place to make sure that those components can sustain the failure of a server or a VM, as is done with Windows Cluster configurations for bare-metal and Hyper-V environments. As of August 2015, Azure by itself cannot provide the shared disks required for the Windows-based highly available configurations of these critical SAP components. However, with the help of SIOS DataKeeper, Windows Server Failover Cluster configurations as needed for SAP ASCS/SCS can be built on the Azure IaaS platform. This paper describes, in a step-by-step approach, how to install a Windows Server Failover Cluster configuration in Azure with a shared disk provided by SIOS DataKeeper. The paper explains the configuration details on the Azure, Windows, and SAP sides that make the high-availability configuration work in an optimal manner. The paper complements the SAP installation documentation and SAP Notes, which represent the primary resources for installations and deployments of SAP software on given platforms. Updated: August 2015 Download this guide now Overview of SQL Server on Azure Virtual Machines 4/21/2017 • 6 min to read • Edit Online This topic describes your options for running SQL Server on Azure virtual machines (VMs), along with links to portal images and an overview of common tasks. NOTE If you're already familiar with SQL Server and just want to see how to deploy a SQL Server VM, see Provision a SQL Server virtual machine in the Azure portal. Overview If you are a database administrator or a developer, Azure VMs provide a way to move your on-premises SQL Server workloads and applications to the cloud. The following video provides a technical overview of SQL Server Azure VMs. The video covers the following areas:
TIME | AREA
00:21 | What are Azure VMs?
01:45 | Security
02:50 | Connectivity
03:30 | Storage reliability and performance
05:20 | VM sizes
05:54 | High availability and SLA
07:30 | Configuration support
08:00 | Monitoring
08:32 | Demo: Create a SQL Server 2016 VM
NOTE The video focuses on SQL Server 2016, but Azure provides VM images for many versions of SQL Server, including 2008, 2012, 2014, and 2016. Scenarios There are many reasons that you might choose to host your data in Azure. If your application is moving to Azure, it improves performance to also move the data. But there are other benefits. You automatically have access to multiple data centers for a global presence and disaster recovery. The data is also highly secured and durable. SQL Server running on Azure VMs is one option for storing your relational data in Azure. It is a good choice for several scenarios. For example, you might want to configure the Azure VM as similarly as possible to an on-premises SQL Server machine. Or you might want to run additional applications and services on the same database server. There are two main resources that can help you think through even more scenarios and considerations: SQL Server on Azure virtual machines provides an overview of the best scenarios for using SQL Server on Azure VMs. Choose a cloud SQL Server option: Azure SQL (PaaS) Database or SQL Server on Azure VMs (IaaS) provides a detailed comparison between SQL Database and SQL Server running on a VM. Create a new SQL VM The following sections provide direct links to the Azure portal for the SQL Server virtual machine gallery images.
Depending on the image you select, you can either pay for SQL Server licensing costs on a per-minute basis, or you can bring your own license (BYOL). Find step-by-step guidance for creating a new SQL VM in the tutorial, Provision a SQL Server virtual machine in the Azure portal. Also, review the Performance best practices for SQL Server VMs, which explains how to select the appropriate machine size and other features available during provisioning. Option 1: Create a SQL VM with per-minute licensing The following table provides a matrix of the latest SQL Server images in the virtual machine gallery. Click on any link to begin creating a new SQL VM with your specified version, edition, and operating system. TIP To understand the VM and SQL pricing for these images, see Pricing guidance for SQL Server Azure VMs.
VERSION | OPERATING SYSTEM | EDITION
SQL Server 2016 SP1 | Windows Server 2016 | Enterprise, Standard, Web, Express, Developer
SQL Server 2014 SP2 | Windows Server 2012 R2 | Enterprise, Standard, Web, Express
SQL Server 2012 SP3 | Windows Server 2012 R2 | Enterprise, Standard, Web, Express
SQL Server 2008 R2 SP3 | Windows Server 2008 R2 | Enterprise, Standard, Web
In addition to this list, other combinations of SQL Server versions and operating systems are available. Find other images through a marketplace search in the Azure portal. Option 2: Create a SQL VM with an existing license You can also bring your own license (BYOL). In this scenario, you only pay for the VM without any additional charges for SQL Server licensing. To use your own license, use the matrix of SQL Server versions, editions, and operating systems below. In the portal, these image names are prefixed with {BYOL}. TIP Bringing your own license can save you money over time for continuous production workloads. For more information, see Pricing guidance for SQL Server Azure VMs.
VERSION | OPERATING SYSTEM | EDITION
SQL Server 2016 SP1 | Windows Server 2016 | Enterprise BYOL, Standard BYOL
SQL Server 2014 SP2 | Windows Server 2012 R2 | Enterprise BYOL, Standard BYOL
SQL Server 2012 SP2 | Windows Server 2012 R2 | Enterprise BYOL, Standard BYOL
In addition to this list, other combinations of SQL Server versions and operating systems are available. Find other images through a marketplace search in the Azure portal (search for "{BYOL} SQL Server"). IMPORTANT To use BYOL VM images, you must have an Enterprise Agreement with License Mobility through Software Assurance on Azure. You also need a valid license for the version/edition of SQL Server you want to use. You must provide the necessary BYOL information to Microsoft within 10 days of provisioning your VM. NOTE It is not possible to change the licensing model of a pay-per-minute SQL Server VM to use your own license. In this case, you must create a new BYOL VM and migrate your databases to the new VM. Manage your SQL VM After provisioning your SQL Server VM, there are several optional management tasks. In many aspects, you configure and manage SQL Server exactly like you would manage an on-premises SQL Server instance. However, some tasks are specific to Azure. The following sections highlight some of these areas with links to more information. Connect to the VM One of the most basic management steps is to connect to your SQL Server VM through tools, such as SQL Server Management Studio (SSMS). For instructions on how to connect to your new SQL Server VM, see Connect to a SQL Server Virtual Machine on Azure.
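Once connectivity is configured as described in that article, a quick sanity check from a client machine is sketched below. This assumes the SQL Server command-line utilities (sqlcmd) are installed locally, and the DNS name, public port, and login are placeholders for whatever endpoint and credentials you actually configured; sqlcmd prompts for the password.
# Minimal sketch (DNS name, public port, and login are placeholders, not real values).
# Confirm the public SQL Server endpoint is reachable, then run a quick query with sqlcmd.
Test-NetConnection -ComputerName "mysqlvm.cloudapp.net" -Port 57500
sqlcmd -S "mysqlvm.cloudapp.net,57500" -U sqladmin -Q "SELECT @@VERSION"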
Migrate your data If you have an existing database, you'll want to move it to the newly provisioned SQL VM. For a list of migration options and guidance, see Migrating a Database to SQL Server on an Azure VM. Configure high availability If you require high availability, consider configuring SQL Server Availability Groups. This involves multiple Azure VMs in a virtual network. The Azure portal has a template that sets up this configuration for you. For more information, see Configure an AlwaysOn availability group in Azure Resource Manager virtual machines. If you want to manually configure your Availability Group and associated listener, see Configure AlwaysOn Availability Groups in Azure VM. For other high availability considerations, see High Availability and Disaster Recovery for SQL Server in Azure Virtual Machines. Back up your data Azure VMs can take advantage of Automated Backup, which regularly creates backups of your database to blob storage. You can also manually use this technique. For more information, see Use Azure Storage for SQL Server Backup and Restore. For an overview of all backup and restore options, see Backup and Restore for SQL Server in Azure Virtual Machines. Automate updates Azure VMs can use Automated Patching to schedule a maintenance window for installing important Windows and SQL Server updates automatically. Customer experience improvement program (CEIP) The Customer Experience Improvement Program (CEIP) is enabled by default. It periodically sends reports to Microsoft to help improve SQL Server. There is no management task required with CEIP unless you want to disable it after provisioning. You can customize or disable the CEIP by connecting to the VM with remote desktop. Then run the SQL Server Error and Usage Reporting utility. Follow the instructions to disable reporting. For more information, see the CEIP section of the Accept License Terms topic. Next steps Explore the Learning Path for SQL Server on Azure virtual machines. For questions about pricing, see Pricing guidance for SQL Server Azure VMs and the Azure pricing page. Select your target edition of SQL Server in the OS/Software list. Then view the prices for differently sized virtual machines. More questions? First, see the SQL Server on Azure Virtual Machines FAQ. But also add your questions or comments to the bottom of any SQL VM topic to interact with Microsoft and the community. How to run a Java application server on a virtual machine created with the classic deployment model 3/24/2017 • 6 min to read • Edit Online IMPORTANT Azure has two different deployment models for creating and working with resources: Resource Manager and Classic. This article covers using the Classic deployment model. Microsoft recommends that most new deployments use the Resource Manager model. For a Resource Manager template that deploys a web app with Java 8 and Tomcat, see here. With Azure, you can use a virtual machine to provide server capabilities. As an example, a virtual machine running on Azure can be configured to host a Java application server, such as Apache Tomcat. After completing this guide, you will have an understanding of how to create a virtual machine running on Azure and configure it to run a Java application server. You will learn and perform the following tasks: How to create a virtual machine that has a Java Development Kit (JDK) already installed. How to remotely sign in to your virtual machine. How to install a Java application server--Apache Tomcat--on your virtual machine.
How to create an endpoint for your virtual machine. How to open a port in the firewall for your application server. The completed installation results in Tomcat running on a virtual machine. NOTE To complete this tutorial, you need an Azure account. You can activate your MSDN subscriber benefits or sign up for a free trial. To create a virtual machine 1. Sign in to the Azure portal. 2. Click New, click Compute, then click See all in the Featured apps. 3. Click JDK, click JDK 8 in the JDK pane. Virtual machine images that support JDK 6 and JDK 7 are available if you have legacy applications that are not ready to run in JDK 8. 4. In the JDK 8 pane, select Classic, then click Create. 5. In the Basics blade: a. Specify a name for the virtual machine. b. Enter a name for the administrator in the User Name field. Remember this name and the associated password that follows in the next field. You need them when you remotely sign in to the virtual machine. c. Enter a password in the New password field, and reenter it in the Confirm password field. This password is for the Administrator account. d. Select the appropriate Subscription. e. For the Resource group, click Create new and enter the name of a new resource group. Or, click Use existing and select one of the available resource groups. f. Select a location where the virtual machine resides, such as South Central US. 6. Click Next. 7. In the Virtual machine image size blade, select A1 Standard or another appropriate image. 8. Click Select. 9. In the Settings blade, click OK. You can use the default values provided by Azure. 10. In the Summary blade, click OK. To remotely sign in to your virtual machine 1. Log on to the Azure portal. 2. Click Virtual machines (classic). If needed, click More services at the bottom left corner under the service categories. The Virtual machines (classic) entry is listed in the Compute group. 3. Click the name of the virtual machine that you want to sign in to. 4. After the virtual machine has started, a menu at the top of the pane allows connections. 5. Click Connect. 6. Respond to the prompts as needed to connect to the virtual machine. Typically, you save or open the .rdp file that contains the connection details. You might have to copy the url:port as the last part of the first line of the .rdp file and paste it in a remote sign-in application. To install a Java application server on your virtual machine You can copy a Java application server to your virtual machine, or you can install a Java application server through an installer. This tutorial uses Tomcat as the Java application server to install. 1. When you are signed in to your virtual machine, open a browser session to Apache Tomcat. 2. Double-click the link for 32-bit/64-bit Windows Service Installer. By using this technique, Tomcat installs as a Windows service. 3. When prompted, choose to run the installer. 4. Within the Apache Tomcat Setup wizard, follow the prompts to install Tomcat. For the purposes of this tutorial, accepting the defaults is fine. When you reach the Completing the Apache Tomcat Setup Wizard dialog box, you can optionally check Run Apache Tomcat to have Tomcat start now. Click Finish to complete the Tomcat setup process. To start Tomcat You can manually start Tomcat by opening a command prompt on your virtual machine and running the command net start Tomcat8. Once Tomcat is running, you can access Tomcat by entering the URL http://localhost:8080 in the virtual machine's browser. 
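If you prefer PowerShell over the net command, the following minimal sketch starts the Tomcat Windows service and verifies that it is listening locally on port 8080. Tomcat8 is the default service name created by the installer used in this tutorial; adjust it if you chose a different name during setup.
# Minimal sketch: start the Tomcat service installed above and check the local HTTP port.
Start-Service -Name Tomcat8
Test-NetConnection -ComputerName localhost -Port 8080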
To see Tomcat running from external machines, you need to create an endpoint and open a port. To create an endpoint for your virtual machine
1. Sign in to the Azure portal.
2. Click Virtual machines (classic).
3. Click the name of the virtual machine that is running your Java application server.
4. Click Endpoints.
5. Click Add.
6. In the Add endpoint dialog box:
a. Specify a name for the endpoint; for example, HttpIn.
b. Select TCP for the protocol.
c. Specify 80 for the public port.
d. Specify 8080 for the private port.
e. Select Disabled for the floating IP address.
f. Leave the access control list as is.
g. Click the OK button to close the dialog box and create the endpoint.
To open a port in the firewall for your virtual machine
1. Sign in to your virtual machine.
2. Click Windows Start.
3. Click Control Panel.
4. Click System and Security, click Windows Firewall, and then click Advanced Settings.
5. Click Inbound Rules, and then click New Rule.
6. For the Rule Type, select Port, and then click Next.
7. On the Protocol and Ports screen, select TCP, specify 8080 as the Specific local port, and then click Next.
8. On the Action screen, select Allow the connection, and then click Next.
9. On the Profile screen, ensure that Domain, Private, and Public are selected, and then click Next.
10. On the Name screen, specify a name for the rule, such as HttpIn (the rule name is not required to match the endpoint name, however), and then click Finish.
At this point, your Tomcat website should be viewable from an external browser. In the browser's address window, type a URL of the form http://your_DNS_name.cloudapp.net, where your_DNS_name is the DNS name you specified when you created the virtual machine. Application lifecycle considerations You could create your own web application archive (WAR) and add it to the webapps folder. For example, create a basic JavaServer Pages (JSP) dynamic web project and export it as a WAR file. Next, copy the WAR to the Apache Tomcat webapps folder on the virtual machine, then run it in a browser. By default, the Tomcat service is set to start manually when it is installed. You can switch it to start automatically by using the Services snap-in. Start the Services snap-in by clicking Windows Start, Administrative Tools, and then Services. Double-click the Apache Tomcat service and set Startup type to Automatic. The benefit of having Tomcat start automatically is that it starts running when the virtual machine is rebooted (for example, after software updates that require a reboot are installed). Next steps You can learn about other services (such as Azure Storage, Service Bus, and SQL Database) that you may want to include with your Java applications. View the information available at the Java Developer Center. Troubleshoot classic deployment issues with creating a new Windows virtual machine in Azure 3/27/2017 • 5 min to read • Edit Online When you try to create a new Azure Virtual Machine (VM), the common errors you encounter are provisioning failures or allocation failures. A provisioning failure happens when the OS image fails to load, either due to incorrect preparatory steps or because of selecting the wrong settings during the image capture from the portal. An allocation failure results when the cluster or region either does not have resources available or cannot support the requested VM size.
IMPORTANT Azure has two different deployment models for creating and working with resources: Resource Manager and Classic. This article covers using the Classic deployment model. Microsoft recommends that most new deployments use the Resource Manager model. For the Resource Manager version of this article, see here. If your Azure issue is not addressed in this article, visit the Azure forums on MSDN and Stack Overflow. You can post your issue on these forums or to @AzureSupport on Twitter. Also, you can file an Azure support request by selecting Get support on the Azure support site. Collect audit logs To start troubleshooting, collect the audit logs to identify the error associated with the issue. In the Azure portal, click Browse > Virtual machines > your Windows virtual machine > Settings > Audit logs. Issue: Custom image; provisioning errors Provisioning errors arise if you upload or capture a generalized VM image as a specialized VM image, or vice versa. The former causes a provisioning timeout error, and the latter causes a provisioning failure. To deploy your custom image without errors, you must ensure that the type of the image does not change during the capture process. The following table lists the possible upload and capture combinations of Windows generalized (gen.) and specialized (spec.) OS images. The combinations that process without any errors are indicated by a Y, and those that throw errors are indicated by an N. The causes and resolutions for the different errors are given below the table.
OS | UPLOAD SPEC. | UPLOAD GEN. | CAPTURE SPEC. | CAPTURE GEN.
Windows gen. | N1 | Y | N3 | Y
Windows spec. | Y | N2 | Y | N4
Y: If the OS is Windows generalized, and it is uploaded and/or captured with the generalized setting, then there won't be any errors. Similarly, if the OS is Windows specialized, and it is uploaded and/or captured with the specialized setting, then there won't be any errors. Upload errors: N1: If the OS is Windows generalized, and it is uploaded as specialized, you will get a provisioning timeout error with the VM stuck at the OOBE screen. N2: If the OS is Windows specialized, and it is uploaded as generalized, you will get a provisioning failure error with the VM stuck at the OOBE screen because the new VM is running with the original computer name, username, and password. Resolution: To resolve both these errors, upload the original VHD, available on-premises, with the same setting as that of the OS (generalized/specialized). To upload as generalized, remember to run sysprep first. See Create and upload a Windows Server VHD to Azure for more information. Capture errors: N3: If the OS is Windows generalized, and it is captured as specialized, you will get a provisioning timeout error because the original VM is not usable, as it is marked as generalized. N4: If the OS is Windows specialized, and it is captured as generalized, you will get a provisioning failure error because the new VM is running with the original computer name, username, and password. Also, the original VM is not usable because it is marked as specialized. Resolution: To resolve both these errors, delete the current image from the portal, and recapture it from the current VHDs with the same setting as that of the OS (generalized/specialized).
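If you capture with the classic Azure PowerShell cmdlets rather than the portal, the sketch below shows the same idea. The service, VM, and image names are placeholders; the key point is that the -OSState value must match how the VM was prepared, so run sysprep inside the VM first if you intend a generalized capture.
# Minimal sketch (service, VM, and image names are placeholders).
# Capture the VM with an OS state that matches how it was prepared.
Save-AzureVMImage -ServiceName "myCloudService" -Name "myVM" -ImageName "myCapturedImage" -OSState "Generalized"
Use -OSState "Specialized" instead if the VM was not generalized with sysprep.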
Issue: Custom/gallery/marketplace image; allocation failure This error arises when the new VM request is sent to a cluster that either does not have the free space to accommodate the request or cannot support the VM size being requested. It is not possible to mix different series of VMs in the same cloud service, so if you want to create a new VM of a size that your cloud service cannot support, the compute request will fail. Depending on the constraints of the cloud service you use to create the new VM, you might encounter an error caused by one of two situations. Cause 1: The cloud service is pinned to a specific cluster, or it is linked to an affinity group, and hence pinned to a specific cluster by design. So new compute resource requests in that affinity group are tried in the same cluster where the existing resources are hosted. However, the same cluster may either not support the requested VM size or have insufficient available space, resulting in an allocation error. This is true whether the new resources are created through a new cloud service or through an existing cloud service. Resolution 1: Create a new cloud service and associate it with either a region or a region-based virtual network. Create a new VM in the new cloud service. If you get an error when trying to create a new cloud service, either retry at a later time or change the region for the cloud service. IMPORTANT If you were trying to create a new VM in an existing cloud service but couldn't, and had to create a new cloud service for your new VM, you can choose to consolidate all your VMs in the same cloud service. To do so, delete the VMs in the existing cloud service, and re-create them from their disks in the new cloud service. However, it is important to remember that the new cloud service will have a new name and VIP, so you will need to update these for all the dependencies that currently use this information for the existing cloud service. Cause 2: The cloud service is associated with a virtual network that is linked to an affinity group, so it is pinned to a specific cluster by design. All new compute resource requests in that affinity group are therefore tried in the same cluster where the existing resources are hosted. However, the same cluster may either not support the requested VM size or have insufficient available space, resulting in an allocation error. This is true whether the new resources are created through a new cloud service or through an existing cloud service. Resolution 2: Create a new regional virtual network. Create the new VM in the new virtual network. Connect your existing virtual network to the new virtual network. See more about regional virtual networks. Alternatively, you can migrate your affinity-group-based virtual network to a regional virtual network, and then create the new VM. Next steps If you encounter issues when you start a stopped Windows VM or resize an existing Windows VM in Azure, see Troubleshoot classic deployment issues with restarting or resizing an existing Windows Virtual Machine in Azure. Troubleshoot classic deployment issues with restarting or resizing an existing Windows Virtual Machine in Azure 3/30/2017 • 2 min to read • Edit Online When you try to start a stopped Azure Virtual Machine (VM), or resize an existing Azure VM, the common error you encounter is an allocation failure. This error results when the cluster or region either does not have resources available or cannot support the requested VM size.
IMPORTANT Azure has two different deployment models for creating and working with resources: Resource Manager and classic. This article covers using the classic deployment model. Microsoft recommends that most new deployments use the Resource Manager model. If your Azure issue is not addressed in this article, visit the Azure forums on MSDN and the Stack Overflow. You can post your issue on these forums or to @AzureSupport on Twitter. Also, you can file an Azure support request by selecting Get support on the Azure support site. Collect audit logs To start troubleshooting, collect the audit logs to identify the error associated with the issue. In the Azure portal, click Browse > Virtual machines > your Windows virtual machine > Settings > Audit logs. Issue: Error when starting a stopped VM You try to start a stopped VM but get an allocation failure. Cause The request to start the stopped VM has to be attempted at the original cluster that hosts the cloud service. However, the cluster does not have free space available to fulfill the request. Resolution Create a new cloud service and associate it with either a region or a region-based virtual network, but not an affinity group. Delete the stopped VM. Recreate the VM in the new cloud service by using the disks. Start the re-created VM. If you get an error when trying to create a new cloud service, either retry at a later time or change the region for the cloud service. IMPORTANT The new cloud service will have a new name and VIP, so you will need to change that information for all the dependencies that use that information for the existing cloud service. Issue: Error when resizing an existing VM You try to resize an existing VM but get an allocation failure. Cause The request to resize the VM has to be attempted at the original cluster that hosts the cloud service. However, the cluster does not support the requested VM size. Resolution Reduce the requested VM size, and retry the resize request. Click Browse all > Virtual machines (classic) > your virtual machine > Settings > Size. For detailed steps, see Resize the virtual machine. If it is not possible to reduce the VM size, follow these steps: Create a new cloud service, ensuring it is not linked to an affinity group and not associated with a virtual network that is linked to an affinity group. Create a new, larger-sized VM in it. You can consolidate all your VMs in the same cloud service. If your existing cloud service is associated with a region-based virtual network, you can connect the new cloud service to the existing virtual network. If the existing cloud service is not associated with a region-based virtual network, then you have to delete the VMs in the existing cloud service, and recreate them in the new cloud service from their disks. However, it is important to remember that the new cloud service will have a new name and VIP, so you will need to update these for all the dependencies that currently use this information for the existing cloud service. Next steps If you encounter issues when you create a new Windows VM in Azure, see Troubleshoot deployment issues with creating a new Windows virtual machine in Azure. How to reset the Remote Desktop service or its login password in a Windows VM created using the Classic deployment model 3/30/2017 • 3 min to read • Edit Online IMPORTANT Azure has two different deployment models for creating and working with resources: Resource Manager and classic. This article covers using the Classic deployment model. 
Microsoft recommends that most new deployments use the Resource Manager model. You can also perform these steps for VMs created with the Resource Manager deployment model. If you can't connect to a Windows virtual machine (VM), you can reset the local administrator password or reset the Remote Desktop service configuration. You can use either the Azure portal or the VMAccess extension in Azure PowerShell to reset the password. Ways to reset configuration or credentials You can reset Remote Desktop services and credentials in a few different ways, depending on your needs: Reset using the Azure portal Reset using Azure PowerShell Azure portal You can use the Azure portal to reset the Remote Desktop service. To expand the portal menu, click the three bars in the upper left corner and then click Virtual machines (classic). Select your Windows virtual machine and then click Reset Remote.... The following dialog appears to reset the Remote Desktop configuration. You can also reset the username and password of the local administrator account. From your VM, click Support + Troubleshooting > Reset password. The password reset blade is displayed. After you enter the new user name and password, click Save. VMAccess extension and PowerShell Make sure the VM Agent is installed on the virtual machine. The VMAccess extension doesn't need to be installed before you can use it, as long as the VM Agent is available. Verify that the VM Agent is already installed by using the following commands. (Replace "myCloudService" and "myVM" with the names of your cloud service and your VM, respectively. You can learn these names by running Get-AzureVM without any parameters.)
$vm = Get-AzureVM -ServiceName "myCloudService" -Name "myVM"
Write-Host $vm.VM.ProvisionGuestAgent
If the Write-Host command displays True, the VM Agent is installed. If it displays False, see the instructions and a link to the download in the VM Agent and Extensions - Part 2 Azure blog post. If you created the virtual machine by using the portal, check whether $vm.GetInstance().ProvisionGuestAgent returns True. If not, you can set it by using this command:
$vm.GetInstance().ProvisionGuestAgent = $true
This command prevents the following error when you're running the Set-AzureVMAccessExtension command in the next steps: "Provision Guest Agent must be enabled on the VM object before setting IaaS VM Access Extension." Reset the local administrator account password Create a sign-in credential with the current local administrator account name and a new password, and then run Set-AzureVMAccessExtension as follows:
$cred = Get-Credential
Set-AzureVMAccessExtension -VM $vm -UserName $cred.GetNetworkCredential().Username `
    -Password $cred.GetNetworkCredential().Password | Update-AzureVM
If you type a different name than the current account, the VMAccess extension renames the local administrator account, assigns the password to that account, and issues a Remote Desktop sign-out. If the local administrator account is disabled, the VMAccess extension enables it. These commands also reset the Remote Desktop service configuration.
Reset the Remote Desktop service configuration To reset the Remote Desktop service configuration, run the following command:
Set-AzureVMAccessExtension -VM $vm | Update-AzureVM
The VMAccess extension runs two commands on the virtual machine:
netsh advfirewall firewall set rule group="Remote Desktop" new enable=Yes
This command enables the built-in Windows Firewall group that allows incoming Remote Desktop traffic, which uses TCP port 3389.
Set-ItemProperty -Path 'HKLM:\System\CurrentControlSet\Control\Terminal Server' -Name "fDenyTSConnections" -Value 0
This command sets the fDenyTSConnections registry value to 0, enabling Remote Desktop connections. Next steps If the VMAccess extension does not respond and you are unable to reset the password, you can reset the local Windows password offline. This method is a more advanced process and requires you to connect the virtual hard disk of the problematic VM to another VM. Follow the steps documented in this article first, and only attempt the offline password reset method as a last resort. Azure VM extensions and features Connect to an Azure virtual machine with RDP or SSH Troubleshoot Remote Desktop connections to a Windows-based Azure virtual machine