Azure Networking Overview

VMFocus Wide Featured Image

The purpose of this post is to explain my understanding of the different networking options in Azure.  It is meant to be an overview rather than a deep dive into each area.  If you notice anything that is incorrect, feel free to leave a comment and I will update this post.

Endpoints

Endpoints are the most basic configuration offering when it comes to Azure networking.  Each virtual machine is externally accessible over the internet using RDP and Remote PowerShell, with port forwarding used to access the VM.  For example, 12.3.4.1:6510 resolves to azure.vmfocus.com, which is then port forwarded to an internal VM on 10.0.0.1:3389.

Azure Input Endpoints

  • Public IP address (VIP) is mapped to the Cloud Service name, e.g. azure.vmfocus.com
  • The port forward can be changed if required; additional services can be opened, or the defaults of RDP and Remote PowerShell can be closed
  • It is important to note that the public IP is completely open and the only security offered is password authentication into the virtual machine
  • Each virtual machine has to have an exclusive public port mapped (see diagram below)
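The one-public-port-per-VM mapping can be sketched as a simple lookup table.  This is an illustrative Python sketch using the hypothetical addresses from the example above, not how Azure implements it:

```python
# Sketch of Azure input endpoints: each VM needs an exclusive public port
# on the cloud service VIP.  Addresses and ports are hypothetical examples.
ENDPOINTS = {
    6510: ("10.0.0.1", 3389),  # VM1 RDP
    6511: ("10.0.0.2", 3389),  # VM2 RDP (a different public port is required)
}

def forward(public_port):
    """Resolve a public port on the VIP to the internal (VM IP, port) pair."""
    if public_port not in ENDPOINTS:
        raise KeyError(f"no endpoint defined for public port {public_port}")
    return ENDPOINTS[public_port]

print(forward(6510))  # ('10.0.0.1', 3389)
```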

Azure Input Endpoints Multiple VM

Endpoint Access Control Lists

To provide some mitigation against having virtual machines completely exposed to the internet, you can define a basic access control list (ACL).  The ACL is based on the source public IP address, with a permit or deny applied to a virtual machine.

  • Maximum of 50 rules per virtual machine
  • Processing order is from top down
  • A suggested configuration would be to whitelist your on-premises external public IP address
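A top-down, first-match-wins ACL of this kind can be sketched as follows.  The rules themselves and the default action when nothing matches are assumptions for illustration:

```python
import ipaddress

# Hypothetical endpoint ACL (max 50 rules), evaluated top-down, first match wins.
ACL = [
    ("permit", "203.0.113.0/24"),  # whitelist: on-premises external public range
    ("deny", "0.0.0.0/0"),         # everything else
]

def evaluate(source_ip):
    """Return the action of the first rule whose network contains source_ip."""
    addr = ipaddress.ip_address(source_ip)
    for action, network in ACL:
        if addr in ipaddress.ip_network(network):
            return action
    return "deny"  # assumed default when no rule matches

print(evaluate("203.0.113.10"))   # permit
print(evaluate("198.51.100.25"))  # deny
```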

Load Balancing

Multiple virtual machines are given the same public port, for example 80.  Azure load balancing then distributes traffic using round robin.

  • Health probes run every 15 seconds on a private internal port to ensure the service is running
  • For TCP endpoints, the health probe expects a TCP ACK
  • For UDP endpoints, the health probe relies on an HTTP 200 response, as UDP provides no acknowledgement
  • If a probe fails twice, traffic to the virtual machine stops.  However, the probe continues to ‘beacon’ the virtual machine, and once a response is received the VM is re-entered into round robin load balancing
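The probe-and-rotation behaviour described above can be modelled in a few lines.  This is a toy sketch of the described behaviour, not Azure's implementation:

```python
from collections import deque

class RoundRobinLB:
    """Toy round-robin load balancer with probe-based backend removal."""

    def __init__(self, backends):
        self.backends = deque(backends)
        self.failures = {b: 0 for b in backends}
        self.out = set()

    def probe(self, backend, healthy):
        if healthy:
            self.failures[backend] = 0
            self.out.discard(backend)        # re-enter rotation on recovery
        else:
            self.failures[backend] += 1
            if self.failures[backend] >= 2:  # two failed probes -> removed
                self.out.add(backend)

    def next_backend(self):
        for _ in range(len(self.backends)):
            self.backends.rotate(-1)
            candidate = self.backends[0]
            if candidate not in self.out:
                return candidate
        return None  # all backends unhealthy

lb = RoundRobinLB(["vm1", "vm2"])
lb.probe("vm2", healthy=False)
lb.probe("vm2", healthy=False)  # second failure removes vm2 from rotation
print(lb.next_backend())  # vm1
```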

Azure Load Balancing

Virtual Networks

Virtual networks enable you to create secure, isolated networks within Azure that maintain persistent IP addresses, and are used for virtual machines which require static IP addresses.

  • Enable you to extend your trust boundary to federated services, whether that is Active Directory replication using AD Connect or hybrid cloud connections
  • Can perform internal load balancing within virtual networks, using the same principle as load-balanced endpoints

Hybrid Options

This is probably the most interesting part for me, as this provides the connectivity from your on-premises infrastructure to Azure.

Point to Site

Point to site uses certificate-based authentication to create a VPN tunnel from a client machine to Azure.

  • Maximum of 128 client machines per Azure Gateway
  • Maximum bandwidth of 80 Mbps
  • Data is sent over an encrypted tunnel via certificate authentication on each individual client machine
  • No performance commitment from Microsoft (makes sense as they don’t control the internet)
  • Once created, certificates can be deployed to domain-joined client devices using Group Policy
  • Machine authentication, not user authentication

Azure Point to Site

Site to Site

Site to site sends data over an encrypted IPSec tunnel.

  • Requires a public IP address as the source tunnel endpoint and a physical or virtual device that supports IPSec with the following:
    • IKE v1/v2
    • AES 128/256
    • SHA1/SHA2
  • Microsoft keep a known compatible device list located here
  • Requires manual addition of new virtual networks and on-premises networks
  • Again, no performance commitment from Microsoft
  • Maximum bandwidth of 80 Mbps
  • The gateway roles in Azure have two instances (active/passive) for redundancy and an SLA of 99.9%
  • Can use RRAS to create the IPSec tunnel, if you feel that way inclined
  • Certain devices have automatic configuration scripts generated in Azure

Azure Site to Site

Express Route

With Express Route, a dedicated route is created either via an exchange provider or a network service provider using a private dedicated network.

  • Bandwidth options range from 10 Mbps to 10 Gbps
  • Committed bandwidth and SLA of 99.99%
  • Predictable network performance
  • BGP is the routing protocol used with ‘private peering’
  • Not limited to VM traffic also Azure Public Services can be sent across Express Route
  • Exchange Providers
    • Provide datacenters in which they connect your rack to Azure
    • Provide unlimited inbound data transfer as part of the exchange provider package
    • Outbound data transfer is included in the monthly exchange provider package but will be limited
  • Network Service Provider
    • Customers who use MPLS providers such as BT & AT&T can add Azure as another ‘site’ on their MPLS circuit
    • Unlimited data transfer in and out of Azure

Azure Express Route

Traffic Manager

Traffic Manager is a DNS-based load balancer that offers three load balancing algorithms:

  • Performance
    • Traffic Manager makes the decision on the best route for the client to the service it is trying to access based on hops and latency
  • Round Robin
    • Alternates between a number of different locations
  • Failover
    • Traffic always hits your chosen datacentre unless there is a failover scenario

Traffic Manager relies on mapping your DNS domain to x.trafficmanager.net with a CNAME, e.g. vmfocus.com to vmfocustm.trafficmanager.net.  Cloud Service URLs in global datacentres are then mapped to the Traffic Manager profile, e.g. east.vmfocus.com, west.vmfocus.com and north.vmfocus.com.

Azure Traffic Manager
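The three algorithms can be sketched as a simple endpoint-selection function.  Endpoint names, latencies and health states are hypothetical, and health checking is only modelled for the failover method in this toy:

```python
import itertools

# Hypothetical endpoints, client latencies and health states.
endpoints = ["east.vmfocus.com", "west.vmfocus.com", "north.vmfocus.com"]
latency_ms = {"east.vmfocus.com": 20, "west.vmfocus.com": 90, "north.vmfocus.com": 45}
healthy = {"east.vmfocus.com": False, "west.vmfocus.com": True, "north.vmfocus.com": True}
_rotation = itertools.cycle(endpoints)

def resolve(method):
    """Pick an endpoint for a DNS query using one of the three methods."""
    if method == "performance":   # lowest latency to the client wins
        return min(latency_ms, key=latency_ms.get)
    if method == "round_robin":   # alternate between locations
        return next(_rotation)
    if method == "failover":      # first healthy endpoint in priority order
        return next(e for e in endpoints if healthy[e])
    raise ValueError(f"unknown method: {method}")

print(resolve("failover"))  # west.vmfocus.com (east is down)
```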

HP ConvergedSystem 200-HC StoreVirtual System – Questions Answered


Background

HP released two offerings of the HP ConvergedSystem 200-HC StoreVirtual System last year.  Essentially, they have taken ESXi, HP StoreVirtual VSA and OneView for vCenter, and automated the setup process using OneView Instant On.

HP Converged System 200-HC Diagrams v0.1

Two models are available which are:

  • HP CS 240-HC StoreVirtual System, this has 4 nodes each with:
    • 2 x Intel E5-2640v2 2.2GHz 8 Core Processor
    • 128GB RAM
    • 2GB Flash Backed HP Smart Array P430 Controller
    • 2 x 10GbE Network Connectivity
    • 1 x iLO4 Management
    • 6 x SAS 1.2TB 10K SFF Hard Drives
    • Around 11TB of usable capacity
  • HP CS 242-HC StoreVirtual System, this has 4 nodes each with:
    • 2 x Intel E5-2648v2 2.2GHz 10 Core Processor
    • 256GB RAM
    • 2GB Flash Backed HP Smart Array P430 Controller
    • 2 x 10GbE Network Connectivity
    • 1 x iLO4 Management
    • 4 x SAS 1.2TB 10K SFF Hard Drives
    • 2 x 400GB Mainstream Endurance SSD
    • Around 7.5TB of usable capacity
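A quick back-of-envelope check of the quoted usable capacity for the CS 240-HC, assuming StoreVirtual Network RAID-10 mirrors data across nodes (halving raw capacity); the remaining gap against the quoted ~11TB would be formatting and sparing overhead.  The RAID assumption is mine, the drive figures are from the spec list above:

```python
# CS 240-HC figures from the spec list above; the RAID assumption is mine.
nodes, drives_per_node, drive_tb = 4, 6, 1.2
raw_tb = nodes * drives_per_node * drive_tb  # 28.8 TB raw across the cluster
network_raid10_tb = raw_tb / 2               # 14.4 TB after 2-way mirroring
print(round(raw_tb, 1), round(network_raid10_tb, 1))
```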

These are marketed with the ability to provision virtual machines within 30 minutes.

What Does Provision Virtual Machines Within 30 Minutes Really Mean?

To answer this question you need to understand what HP have saved you from doing, which is:

  • Installing ESXi across 4 x Hosts
  • Installing vCenter to a basic configuration
  • Installing HP StoreVirtual VSA to a basic configuration across 4 x Hosts
  • Pre-installing a Management VM running Windows Server 2012 Standard with OneView for vCenter and the CMC for StoreVirtual management

So after completing the initial setup, you do have the ability to upload an ISO and start deploying an OS image.

What About The Stuff Which Marketing Don’t Mention? AKA Questions Answered?

Database

  • SQL Express is used as the database (a local instance on the Management VM).  I have real concerns around the database if logging levels are increased to troubleshoot issues and/or the customer doesn’t perform any kind of database maintenance
    • I’m waiting on confirmation from HP as to whether you can migrate the SQL database instance to a full-blown version

Host Profiles

  • Grey area; these can be used.  However, HP would rather you stayed with the base configuration of the nodes (much like the networking, see below).

Licences

  • The solution is only supported using Enterprise or Enterprise Plus VMware licenses, with the preference being HP OEM.
  • Windows Server 2012 Standard is supplied as the Management VM.  Initially, this runs from a local partition and is then Storage vMotioned onto the HP Converged Cluster.  Windows licensing dictates that when an OS is moved across hosts using Standard Edition, you cannot move the OS back for 90 days, or you need to license each node for the potential number of VMs that could be run.
    • HP have confirmed that you receive 2 x Windows Server 2012 Standard licenses, and DRS Groups Manager rules are configured to only allow the Management VM to migrate between these two ESXi hosts.

Management Server

  • You are able to upgrade the Management Server VM in terms of RAM, CPU and disk space and remain supported.
  • You cannot add additional components to the Management Server VM and remain supported, e.g. VUM or the vCenter Syslog Service
    • I’m waiting on confirmation from HP around what is and isn’t supported; I would err on the side of caution and not install anything extra

Networking

  • The 1GbE connections are not used apart from the initial configuration of the Management Server.  My understanding is that these are not supported for any other use.
  • HP prefer you to stay with the standard network configuration, which causes me concern: a 10GbE network providing Management, iSCSI, Virtual Machine and vMotion traffic.  How do you control vMotion bandwidth usage on a Standard vSwitch?  You can’t.  A Distributed vSwitch is a much better option, but if you need to reconfigure a node, you will need to perform a vSS to vDS migration

Updates

  • You can upgrade individual components separately; however, you must stay within the HP Storage SPOCK for the ConvergedSystem 200-HC StoreVirtual (note: an HP Passport login is required)

Versions

At the time of this post, the latest supported versions are as follows:

  • vSphere 5.5 U2 (no vSphere 6)
  • vCenter 5.5 U2
  • HP StoreVirtual VSA 11.5 or 12.0
  • HP OneView for vCenter Storage/Server Modules 7.4.2 or 7.4.4
  • HP OneView Instant On 1.0 or 1.0.1
  • PowerCLI 5.8 R1

Final Thoughts

HP have put together a slick product which automates the initial installation of ESXi and gives you a basic configuration of vCenter.  What it doesn’t give you is a design to confirm that your workloads are suitable for the environment, or a solution that meets a client’s requirements.

HP 3PAR Streaming Remote Copy Replication


Replication in 3PAR arrays has always been mediocre.  In older versions of the 3PAR InForm OS, if you chose synchronous replication for a single remote copy group, you could not use asynchronous replication for any other remote copy groups.

This limitation was addressed in a newer version of the 3PAR InForm OS; however, your lowest RPO using async replication was bottlenecked at 15 minutes regardless of available bandwidth.

With the release of the HP 3PAR 20000 Series comes a new feature: streaming async replication.

What Is Streaming Async Replication?

Essentially, if you have the bandwidth and cache available, the source 3PAR will stream replication across to the target 3PAR, reducing your RPO below 15 minutes.  I like to think of it as best endeavours.

Replication Modes

When designing a replication infrastructure, it’s important to know the transport method as well as the thresholds in terms of bandwidth and latency between source and target arrays.  This ensures not only that you are within a supported SLA, but also that the write performance of the source array is not affected.

The table below shows supported thresholds.

Replication Modes

Architecture

The source array uses a local cache to maintain host write transactions in memory, using a concept known as ‘delta sets’.

Source 3PAR Array

  • I/Os are transferred from the primary array to the secondary array as part of delta sets
  • I/Os on the primary array that belong to a particular remote copy group are grouped together into delta sets
    • A delta set is made up of sub-sets of I/Os, where each sub-set represents the I/Os owned by a remote copy group on a given node

Target 3PAR Array

  • A delta set is applied on the secondary RC volume group only after:
    • The entire delta set has been received in the secondary array cache
    • And the previous sets that this delta set depends upon have completed.
  • A secondary RC volume group is always in a crash consistent state, before or after the application of a delta set. It is not crash consistent during the application of a delta set.
    • If the delta set fails to apply on the secondary volume, then the group stops and a roll back to the last coordinated snapshot is required
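The apply-ordering rules above can be sketched as follows.  This is a toy model of the described behaviour, not HP's implementation:

```python
# Toy model of delta-set application on the target array: a set is applied
# only once fully received AND all sets it depends on have already applied,
# so the volume stays crash-consistent before and after each application.
class TargetGroup:
    def __init__(self):
        self.applied = set()

    def try_apply(self, set_id, fully_received, depends_on):
        """Apply delta set `set_id` if its preconditions hold; return True if applied."""
        if not fully_received:
            return False
        if not all(d in self.applied for d in depends_on):
            return False
        self.applied.add(set_id)
        return True

g = TargetGroup()
print(g.try_apply(2, fully_received=True, depends_on=[1]))  # False: set 1 pending
print(g.try_apply(1, fully_received=True, depends_on=[]))   # True
print(g.try_apply(2, fully_received=True, depends_on=[1]))  # True
```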

Remote Copy Architecture

What About Write Bursts?

A write burst is when the array receives a significant number of writes, which could last for a few minutes.  If the inter-site link between source and target arrays is sufficient, this has no impact.

When the inter-site link cannot cope, or the write cache fills, the source 3PAR will choose a random remote copy group to stop, and a snapshot is taken.

Note: you have no control over which remote copy group is stopped.

Once stopped these groups will start again at the next sync period.

Final Thoughts

This is a great feature being added to the 3PAR 20000 Series.  I’m sure that when the next .1 release update arrives, you will be able to select which remote copy groups you want to stop, whether due to a write burst or cache overflow.

As with most 3PAR updates, I expect streaming async replication to find its way into the 7x00 series within a short period of time.

Installing: App Volumes Agent


In the previous post I covered ‘Installing: App Volumes Manager‘.  Now it’s time to install App Volumes Agent.

The App Volumes Agent has two key roles in life:

  1. It resides on a provisioning virtual machine and is responsible for the capture of an application.
  2. It runs on a user’s virtual machine as a service, responsible for the filter driver which handles application calls and redirects them to the AppStack and writable volume VMDKs

Pre-Requisites – Provisioning Virtual Machine

These are the pre-requisites that I have identified so far for the capture virtual machine:

  • Ensure that the Provisioning VM operating system and ‘bitness’ is the same as the target virtual machines
  • Ensure that the Provisioning VM Service Pack and patch level is the same as the target virtual machines
  • Optimise the Provisioning VM operating system as per your target virtual machines

Installation

Ensure that you have downloaded the App Volumes installer from here

Don’t forget that App Volumes Manager and App Volumes Agent use the same installer, so we just need to launch the setup file again.

Launch the App Volumes Setup > Click Next

App Volumes 01

I’m sure you will read the EULA before accepting it, then click Next

App Volumes 02

Select Install App Volumes Agent

Agent01

Click Next

Agent02

Enter your App Volumes Manager details and the communication port

Agent03

Click Install

Agent04

All done, Click Finish

Agent05

 

A quick reboot of your provisioning virtual machine and you are ready to go.

In the next blog post I will be configuring an AppStack ready for deployment.

vCloud Air OnDemand – Road Trip


I have written a couple of posts on VMware’s vCloud Air DRaaS offering, which covered ‘The Good, The Bad & Ugly‘ and ‘Improvements‘ to the service.  The reason for those original posts was customer enquiries; I wanted to get under the skin of VMware’s offering.

Since then customers have been asking more questions around using the cloud for application testing and development.

Application Testing

The issue with testing applications on-premises is that you may not have the compute or storage resources available; an example is wanting to upgrade your ERP solution to the latest version and test it before going into production.

ERP solutions are often complex three-tier applications which require a decent amount of horsepower to run.  Businesses are often nervous about upgrading production software, as if something goes wrong it could affect the entire company.

The solution to this could be to use vCloud Air OnDemand with vCloud Connector to link your on-premises vCenter to vCloud Air.  Simply clone your ERP virtual machines, copy them to vCloud Air and test the application upgrade.

Development

Many of the clients I deal with don’t have access to dedicated vSphere clusters and storage for development.  They are often nervous about giving developers a slice of their production vSphere clusters due to the effect a poorly designed application could have on their network and storage.

You could argue that placing limits on compute resources could mitigate the risk, but this then becomes a management overhead, and politically it can cause all kinds of issues.

The solution to this could be again to use vCloud Air OnDemand.

vCloud Air OnDemand – Road Trip

With the above use cases in mind, I wanted to give vCloud Air OnDemand a whirl.

The first step is to sign up for vCloud Air OnDemand by creating a new MyVMware account.

vCloud Air 1

Use the promotion code ‘Influencer2015’ to get a special offer of $500 in service credits.

After a few seconds you will receive an email to finalize your account settings, this essentially means create a password.  Then you are ready to login to your account.

Virtual Machine Provision

The ability to provision virtual machines quickly and easily is the acid test of a ‘cloud on demand’ service.  The recording below shows my first interaction with the service.

Two issues cropped up:

  • Networking was not as straightforward as it could be.  You cannot assign a network to your first VM without it being powered on.  I wasn’t able to locate an ‘add network’ option in the vCloud Air area, so I used vCloud Director to assign the network instead.  I’m not sure how easy this would be for someone new to the service; I believe it would cause some frustration.
  • The password that was assigned to my VMF-DC01 VM didn’t work.  I even typed it in Notepad to make sure I wasn’t going crazy.

Creating the second virtual machine, I was able to assign a network at the initial console screen, and the password worked as well.

Final Thoughts

Overall, the vCloud Air OnDemand experience was good.  However, first impressions count, and not many of us spend time reading a manual for an ‘on demand’ service; we expect it to be straightforward.  If VMware can iron out the initial network and password issues, then my opinion would change from good to excellent.