Part 1 – Configuring Azure Application Gateways with AD FS

This is the first in a short series of blog posts aimed at the configuration of an Azure Application Gateway.

Why, you might ask, am I creating a blog post series? For two reasons: firstly, I think that the Application Gateway provides an extra level of protection for internet-facing applications, and secondly, I found the Microsoft documentation lacking in a few areas.

What is an Application Gateway?

An Application Gateway is a dedicated virtual appliance providing application delivery controller services.

Benefits of using Application Gateway are:

  • Provides Layer 7 load balancing and routing
  • SSL Offload, moving the burden of decrypting traffic from internet-facing servers onto the Application Gateway
  • End-to-End Encryption: the Application Gateway terminates the SSL connection, applies routing rules and then re-encrypts the traffic
  • Cookie-Based Session Affinity to ensure users are directed back to the same session
  • Protects web applications from common attack scenarios such as cross-site scripting, SQL injection and session hijacks using web application firewall capabilities
  • Custom health probes enabling specific application paths to be monitored

Drawbacks of using Application Gateway are:

  • Increased complexity versus Load Balancers
  • Can only be deployed when it is the first resource within a subnet

In this scenario I have AD FS running on Windows Server 2016 in Microsoft Azure, integrated with Azure AD via Azure AD Connect.  A logical overview of the configuration is shown below.

Azure AD FS v0.1.png

The plan is to extend this design and include an Application Gateway running Web Application Firewall functionality.  A logical configuration of the desired state is shown below.

Azure AD FS WAF v0.1.png

Before we begin, I recommend that you verify your AD FS configuration to make sure it’s functioning correctly.  You will also need your AD FS certificate available in order to perform the SSL Offload onto the Application Gateway.

Getting Everything Ready

I know you are itching to crack on, but I try to work in a logical order.  So first of all, make sure you have your subnets defined correctly in Azure.  The configuration I’m using is as follows:

VMF-WE-SUB01 – This subnet is the Trusted Network in the diagram above.

VMF-WE-SUB02 – This subnet is the DMZ Internal Network in the diagram above.

VMF-WE-SUB03 – This subnet is the DMZ External Network in the diagram above and will be used for the Application Gateway.

Important: the Application Gateway must be the first resource deployed in a newly created subnet.

WAP Servers

We need to get the thumbprint for our AD FS certificate and ensure it is bound correctly.  Run the following command to obtain the Certificate Hash and Application ID:

netsh http show sslcert


Next we need to run the following command on both WAP servers.  The ipport value of 0.0.0.0:443 binds the certificate to all addresses on port 443; substitute your own certificate hash and application ID:

netsh http add sslcert ipport=0.0.0.0:443 certhash=f2d9bb93d29a2c2c0835f4a4cb2d67d51efc5706 appid={5d89a20c-beab-4389-9447-324788eb944a}

To verify the command has run correctly, view your SSL certificates again and you should see the IP:port binding tied to your AD FS certificate.


Deploy Application Gateway

Within the freshly created VMF-WE-SUB03 we are going to deploy an Application Gateway.

Let’s start by entering the basics.  I’m calling mine the imaginatively named VMF-WE-AG01.  It’s going to be a WAF, so I have selected the WAF tier.  Finally, I will be using an existing Resource Group.

WAF 01

The VNet is VMF-WE-VNET01 and the subnet is VMF-WE-SUB03.  We are going to create a Static Public IP Address (I’m calling mine VMF-WE-AG01-PIP).  Finally, we will leave the Listener Configuration as HTTP for now.

WAF 02

That’s the Application Gateway deployment under way.  It’s going to take a while, so I suggest you make a brew and get yourself ready for the next instalment.
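For those who prefer scripting, the same deployment can be sketched with the Azure CLI.  The names mirror the values above; the resource group name, SKU and frontend port are assumptions, so adjust to suit your environment:

```shell
# Create a static public IP for the Application Gateway
# (VMF-WE-RG01 is a hypothetical resource group name)
az network public-ip create \
  --resource-group VMF-WE-RG01 \
  --name VMF-WE-AG01-PIP \
  --sku Standard \
  --allocation-method Static

# Deploy the Application Gateway into the dedicated subnet with a WAF SKU
az network application-gateway create \
  --resource-group VMF-WE-RG01 \
  --name VMF-WE-AG01 \
  --sku WAF_v2 \
  --capacity 2 \
  --vnet-name VMF-WE-VNET01 \
  --subnet VMF-WE-SUB03 \
  --public-ip-address VMF-WE-AG01-PIP \
  --frontend-port 80
```

This is only a sketch of the portal steps above; listeners, backend pools and probes for the WAP servers still need to be added afterwards.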


HPE StoreVirtual for Hyper-V .NET 3.5

As part of transitioning my lab to Hyper-V, I’m using an HPE StoreVirtual VSA to provide shared storage to the Hyper-V hosts.

When trying to load ‘HPE_StoreVirtual_VSA_2014_and_StoreVirtual_FOM_Installer_for_Microsoft_Hyper_V_TA688-10552’ I encountered the error ‘The Hyper-V for VSA deployment wizard requires version of .NET 3.5 to be installed on the system’.


Easy, I thought: go into Roles and Features and add .NET 3.5.  However, you then encounter the following error: ‘Do you need to specify an alternate source path? One or more installation selections are missing source files on the destination’.

.NET 3.5 Error


The first thing to do is mount your Windows Server 2016 Datacenter media.  Once done, select ‘Specify an alternate source path’.

.NET 3.5 Resolution 1

Type in G:\Sources\SxS (where G: is the drive letter of your mounted media) and click OK, followed by Install.
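If the GUI route still fails, the same feature can be enabled from an elevated command prompt using DISM, again assuming G: is the drive letter of your mounted Windows Server 2016 media:

```shell
REM Enable .NET 3.5 directly from the installation media (adjust G: to suit)
DISM /Online /Enable-Feature /FeatureName:NetFx3 /All /LimitAccess /Source:G:\Sources\SxS
```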

.NET 3.5 Resolution 2


Just in Time Virtual Machine Access

Security Centre.png

Consider for a moment the attack vector on your virtual machines.  You may have some ports exposed to the public internet; however, these are likely to be protected using Next Generation Firewalls and perhaps even a DDoS scrubbing service from your ISP.

Perhaps the largest attack vector is your management ports, such as SSH, RDP and WMI, to name but a few.  When these ports are open, anyone can attempt to gain access, whether authorised or not.

This is where ‘Just in Time Virtual Machine Access’ steps in to reduce your overall attack surface.  Access to management ports is closed by default, and access is only granted from either trusted IPs or per request.

How Does It Work

Just in Time (JIT) works in conjunction with Network Security Groups (NSG) and Role Based Access Control (RBAC) to open up management ports on a timed basis.

  • Works for VMs which are both publicly and privately accessible
  • Requires write access to the VM

The second point makes perfect sense: we have customers who have read access to certain elements within the Azure portal to review logs or performance charts, but aren’t allowed access to the virtual machines.

To gain access on a desired management port, the requester must have ‘Contributor’ rights to the VM, which means that the following points need to be considered:

  • The requester requires an Azure AD Account
  • RBAC configuration
  • Access for third parties using Azure B2B

Configuration Choices

At the time of writing, you can define the following conditions per VM policy:

  • Port
  • Protocol (TCP/UDP)
  • Allowed Sources – either Per Request or IP Range
  • Maximum Request Time – 1 to 24 hours
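Under the hood, these conditions are stored as a JIT network access policy attached to the VM.  A minimal sketch of a single port definition, assuming RDP access for up to three hours from one address range (the field names are indicative of the policy schema, not an authoritative template):

```json
{
  "ports": [
    {
      "number": 3389,
      "protocol": "TCP",
      "allowedSourceAddressPrefix": "203.0.113.0/24",
      "maxRequestAccessDuration": "PT3H"
    }
  ]
}
```

The maximum request duration is expressed in ISO 8601 format, so the portal’s 1 to 24 hour range maps to PT1H through PT24H.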

Requesting Access

Once the JIT policy has been applied to the VM, a user logs into the Azure Portal and opens Security Center.  From within this they need to select Just in time VM access, select the VM and choose ‘Request Access’, selecting the ports and time frame required.

Final Thoughts

The process to enable JIT is straightforward but does require some detailed consideration of how RBAC is configured.

Requesting access to a VM is currently quite clunky; it would be great if a JIT portal were available for this purpose.

StorSimple Overview Subscription Model

Esh Group - StorSimple v0.1

Microsoft have changed the model for StorSimple devices.  In the previous iteration it was based on an upfront commitment to Azure consumption of either $60K or $100K.  Under the new model, it’s subscription based, which means that for:

  • $1,333 or £999 per month you can bag yourself an 8100 device
  • $1,916 or £1,436 per month you can get an 8600 device

StorSimple Overview

StorSimple is a Cloud-integrated Storage (CiS) solution that stores highly active or heavily used data locally while it moves older and less frequently used data into the cloud.   StorSimple is designed to be a best-of-both-worlds solution for storage, backup, and recovery. While on-premises storage is more appropriate for data that undergoes real-time processing, cloud storage is the better option for archiving and housing your periodic backups and infrequently used files.

  • Data transmission between the StorSimple system and cloud storage are encrypted using SSL, supporting up to AES 256 bit session encryption during data transfers between the StorSimple system and Microsoft Azure Storage.
  • The StorSimple 8600 model offers up to 500TB of storage that can be allocated and this is split between the local device (approx. 38TB before compression) and the cloud.
  • Microsoft Azure StorSimple automatically arranges data in logical tiers based on current usage, age, and relationship to other data. Data that is most active is stored locally, while less active and inactive data is automatically migrated to the cloud.
  • Microsoft Azure StorSimple uses deduplication and data compression to further reduce storage requirements. Deduplication reduces the overall amount of data stored by eliminating redundancy in the stored data set. As information changes, StorSimple ignores the unchanged data and captures only the changes. In addition, StorSimple reduces the amount of stored data by identifying and removing unnecessary information.
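Deduplication of this kind is conceptually simple: data is split into chunks, each chunk is hashed, and only chunks with previously unseen hashes are actually stored.  A toy Python sketch of the idea (fixed-size chunks and SHA-256 are illustrative choices, not StorSimple’s actual algorithm):

```python
import hashlib

def dedupe_store(data: bytes, chunk_size: int = 4):
    """Split data into fixed-size chunks and store each unique chunk once.

    Returns the chunk store (hash -> chunk) and the recipe (ordered list of
    hashes) needed to reconstruct the original data.
    """
    store, recipe = {}, []
    for i in range(0, len(data), chunk_size):
        chunk = data[i:i + chunk_size]
        digest = hashlib.sha256(chunk).hexdigest()
        store.setdefault(digest, chunk)  # each unique chunk is stored only once
        recipe.append(digest)
    return store, recipe

def rehydrate(store: dict, recipe: list) -> bytes:
    """Reassemble the original data from the chunk store and recipe."""
    return b"".join(store[h] for h in recipe)

data = b"ABCDABCDABCDXYZW"  # three repeated 4-byte chunks plus one unique
store, recipe = dedupe_store(data)
assert rehydrate(store, recipe) == data
# 16 bytes of input, but only 2 unique chunks end up in the store
```

The same principle applied at scale, combined with compression, is what lets a system hold far less physical data than the logical volumes it presents.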

Disaster Recovery

StorSimple provides device failover using the backup copies of on-premises volumes held within Microsoft Azure.  During a failure scenario the Microsoft Azure based StorSimple Device Manager rehydrates the secondary StorSimple with the data held within the cloud based Storage Account.

It should be noted that, depending on the backup schedule, data change rate and network bandwidth to Microsoft Azure, data loss is possible.

StorSimple Conceptual Design v0.1.png

Microsoft Azure – Auto Scaling

Autoscaling v0.1

The ability to dynamically scale in a public cloud was one of the mantras I used to hear a couple of years ago.  When reality struck, customers realised that their monolithic applications wouldn’t be suitable for this construct and that they would need to re-architect.

Wind forward a couple of years and the use of Microsoft Azure Auto Scaling has become a reality, so with this in mind  I thought it would be a good idea to share a blog post on the subject.

What Is Auto Scaling?

Auto Scaling is the process of increasing either the number of instances (scale out/in) or the compute power (scale up/down) when a level of demand is reached.

Scale Up/Down

Scale Up or Down is targeted at increasing or decreasing the compute assigned to a VM.  Microsoft have a number of ways in which you can ‘scale up’ on the Azure platform.  To vertically scale you can use any of the following:

  • Manual Process – Simply keep your VHD and deploy a new VM with greater resources.
  • Azure Automation – For VMs which are not identical, you can use Azure Automation with webhooks to monitor conditions, e.g. CPU above ‘x’ for ‘y’ minutes, and then scale up the VM within the same VM series.
  • Scale Sets – For VMs which are identical, you can use Scale Sets, a PaaS offering which ensures fault domains, update domains and load balancing are built in.

Note that using a Manual Process, Azure Automation or Scale Sets to resize a VM will require a VM restart.
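As an illustration of the scripted route, resizing an existing VM is a single Azure CLI command; the resource group, VM name and target size below are placeholders:

```shell
# Resize a VM to a larger size within the same series (this triggers a restart)
az vm resize --resource-group MyResourceGroup --name MyVM --size Standard_DS3_v2
```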

The diagram below provides a logical overview of a Scale Set.

Scale Set v0.1

Scale Out/In

Scale Out or In is targeted at increasing or decreasing the number of instances, which could be made up of VMs, Service Fabric, App Service or Cloud Services.

A common approach is to use VMs for applications which support Scale Out/In – typically a piece of middleware that performs number crunching but holds no data, or perhaps a worker role that is used to transport data from point A to point B.

For websites it is more common to use App Service ‘Web Apps’, which in a nutshell provides a PaaS service; the hosting option chosen (Standard, Premium or Isolated) dictates the maximum number of instances and Auto Scale support.


Auto Scaling requires time to scale up or out; it doesn’t respond to a single spike in CPU usage, but looks at averages over a 45 minute period.  Therefore, if you know when a peak workload is likely, it can be more efficient to deploy Auto Scaling using a schedule.
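A metric-based scale-out rule can be sketched with the Azure CLI’s autoscale commands; the resource names, instance counts and threshold below are illustrative only:

```shell
# Create an autoscale setting on a VM scale set (placeholder names)
az monitor autoscale create \
  --resource-group MyResourceGroup \
  --resource MyScaleSet \
  --resource-type Microsoft.Compute/virtualMachineScaleSets \
  --name MyAutoscale \
  --min-count 2 --max-count 10 --count 2

# Scale out by 2 instances when average CPU exceeds 70% over 10 minutes
az monitor autoscale rule create \
  --resource-group MyResourceGroup \
  --autoscale-name MyAutoscale \
  --scale out 2 \
  --condition "Percentage CPU > 70 avg 10m"
```

Schedule-based profiles can be layered on top of the same autoscale setting for known peak periods.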

To ensure that a runaway process doesn’t cause costs to spiral out of control, use tags, a different Azure subscription, email alerting, or perhaps even limit the number of instances on Auto Scale.