70-533: Implementing Microsoft Azure Infrastructure Solutions – Prep & Exam Experience


Readers of this blog will know that my focus has shifted towards hybrid cloud and the architecture that enables customers to consume Microsoft Azure for varying requirements.

Having passed 70-534: Architecting Microsoft Azure Solutions back in March 2016, I had been putting off 70-533: Implementing Microsoft Azure Infrastructure Solutions due to the sheer volume of Azure work I was undertaking with customers, which didn’t leave much time for studying.  Anyhow, I thought it was about time I sat the 70-533 exam, which covers:

  • Implement Web Apps
  • Implement Virtual Machines
  • Implement Cloud Services
  • Implement Storage
  • Implement Azure Active Directory
  • Implement Virtual Networks

Preparation

I went back over my previous blog posts on the following topics to make sure I was up to speed on the basics again.

Microsoft Azure Concepts – Availability Sets

Microsoft Azure Concepts – Backups

Microsoft Azure Concepts – Clusters

Microsoft Azure Concepts – Content Delivery Network

Microsoft Azure Concepts – Failures

Microsoft Azure Concepts – Identity & Access

Microsoft Azure Concepts – Media Services

Microsoft Azure Concepts – Mobile Apps

Microsoft Azure Concepts – Networks

Microsoft Azure Concepts – Network Security Groups

Microsoft Azure Concepts – SQL Data Warehouse

Microsoft Azure Concepts – Storage

Microsoft Azure Concepts – Virtual Machines

After I had gotten my head around these again, I decided it was time to focus on the exam objective that would present the greatest challenge: performing tasks in PowerShell.

The difficulty was that the exam covers both the Azure Classic deployment model and Azure Resource Manager, so I found myself doubling up on commands.

ProTip: If, like me, you are not a PowerShell guru, then I suggest you use the PowerShell ISE as it’s far more intuitive than a bare command prompt!
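To illustrate the doubling up, here is a minimal sketch of listing virtual machines under each deployment model; the subscription name is a placeholder, and you need both the classic Azure module and the AzureRM module installed.

    # Classic (Service Management) model - requires the 'Azure' module
    Add-AzureAccount                                              # interactive login to the classic API
    Select-AzureSubscription -SubscriptionName "My Subscription"  # placeholder subscription name
    Get-AzureVM                                                   # lists classic (ASM) virtual machines

    # Resource Manager model - requires the 'AzureRM' module
    Login-AzureRmAccount                                          # interactive login to the ARM API
    Get-AzureRmVM                                                 # lists ARM virtual machines

Nearly every exam objective has a pair of cmdlets like this, so it’s worth practising both sets.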

I purchased the book Exam Ref 70-533: Implementing Microsoft Azure Infrastructure Solutions by Michael Washam and Rick Rainey.  This is an excellent introduction to the exam, but I wasn’t convinced it alone would be enough to see me through.

To complement the book, I watched a number of Pluralsight videos on Implementing Microsoft Azure Infrastructure Solutions by Tim Warner, which really helped plug any gaps I had.

As well as reading and watching the training material, I also spent time using Azure.  I’m lucky enough to have a work-sponsored Azure subscription I can use to play around with.  I strongly suggest you make sure you are familiar with Azure and understand the basics of the PowerShell commands.

The Exam

I decided to take the Microsoft Online Proctored exam with Pearson VUE.  I have to say that the security requirements were far higher than when attending a Pearson VUE test centre; I literally had to empty my pockets and show the invigilator every part of the room I was sitting in, twice.

A few things you should note about taking a proctored exam:

  • If you have an external monitor, they will make you turn it around
  • If you have a cup of coffee they will ask you to remove it from the room
  • They expect your desk to be completely clear, so no pen or paper for making notes

The exam itself was broken down into forty-eight individual questions, consisting of the usual multiple-choice and drag-and-drop formats.

The exam expects you to know the blueprint and the material contained within it.  You also need to be able to understand when and why you would make particular technical decisions, for example:

When would you choose to use a Point-to-Site VPN over a Site-to-Site VPN?

Final Thought

I’m pleased to say I passed the 70-533 exam.  It was challenging, as I don’t spend all my time implementing Azure solutions (especially on the PowerShell front).  In fact, a lot of my time is spent researching new Azure features for customers to see if they stand up from a technical and commercial perspective.

Overall, I would recommend the exam to anyone looking to develop their understanding of Microsoft Azure.

It appears that when you pass both the 70-533 and 70-534 exams you become certified as an MCSA: Cloud Platform.  So my advice is to pick up the books and crack on with some studying; things are moving to the cloud whether we like it or not!

Microsoft Azure Concepts – Internet of Things


According to IDC, the Internet of Things market will grow to £1.3 trillion in 2020 with over 25 billion connected devices.  Gartner also share this belief, predicting that we will have 21 billion connected devices in 2020 in a market valued at £2.4 trillion.

With the advent of smart homes and the requirement to provide remote monitoring and predictive maintenance for everyday items, the questions this raises in my mind are:

  • How do you provide reliable connectivity to these devices?
  • How do you provide updates to these devices?
  • How do you collect and analyse the information?
  • How do you monitor and alert on the data sets?
  • How do you scale or contract the solution?
  • How do you provide availability and DR for the solution?

To answer these questions and more, Microsoft released its Azure Internet of Things services in February 2016.

What Is Microsoft Azure Internet of Things?

Microsoft Azure Internet of Things (IoT) comes in two flavours: the IoT Suite, a pre-packaged solution, and the IoT Hub, which provides connectivity to and monitoring of IoT clients.

IoT Suite

The overall aim of the IoT Suite is to provide a starting point for proofs of concept or early customer initiatives.  The IoT Suite is offered as two pre-configured solutions:

  • Remote Monitoring
  • Predictive Maintenance

Each of these solutions pulls together different Azure services to create the overall suite.  These include:

  • Azure IoT Hub
  • Azure Stream Analytics
  • Azure Blob Storage
  • Azure Document DB Storage
  • Azure Logic Apps
  • Azure Web Apps and Jobs

An example architecture of how these components fit together is shown below.

[Diagram: example Azure IoT Suite architecture]

It’s important to note that Microsoft do not provide a packaged cost for IoT Suite.  Each component would need to be priced individually.

I’m sure you will agree, quite a few moving parts, so let’s break it down into bite size chunks.

  • IoT Devices – these could be individual items, or they could sit behind an IoT Gateway.  The best way to think about an IoT Gateway is a car: you wouldn’t send each individual electrical component’s telemetry out to the cloud; instead, the car acts as the IoT Gateway and the components within the car send their telemetry to it.
  • IoT Hub – a massive ingestion platform which provides bi-directional communication with IoT Devices.  The IoT Hub performs the initial collection of data and stores it in Azure Blob Storage.
  • Storage Blob – used to store the data in its raw format.  Before the data is processed this is your one source of truth; using cheap cloud storage makes sense in case you want to interrogate the data multiple times.
  • Stream Analytics – used to interrogate the IoT data in real time and also to provide a secondary analytics method.
  • Web App – provides the user interface for users to access the platform via a web page or on mobile devices.
  • Logic App – provides the integration points and workflows into business systems.
  • Document DB – where the device metadata could be held.

IoT Hub

IoT Hub is essentially the control plane, enabling IoT Devices to connect using the AMQP, MQTT and HTTPS protocols.  Communication with IoT Hub is based on service-assisted communication patterns, which are detailed in this excellent blog post.  Perhaps the most prominent points are:

  • Security takes precedence over all other capabilities
  • Devices do not accept unsolicited network connections
  • The communication path is secured at the application protocol layer
  • System-level authorisation and authentication are based on per-device identities

From an IoT Device perspective, the IoT Hub is responsible for:

  • Sending data to the IoT Device
  • Receiving data from the IoT Device
  • Initiating file uploads
  • Receiving and updating device twin properties (items such as location details)

The IoT Hub is responsible for the following areas:

  • Receiving data from the IoT Device
  • Sending data to the IoT Device
  • Receiving delivery acknowledgements
  • Receiving file notifications
  • Device identity management
  • Device twin management
  • Jobs management

This can be logically depicted in the following diagram.

[Diagram: logical view of the IoT Hub responsibilities]

Device Management

IoT Hub enables you to manage end devices and perform the following business-as-usual operations:

  • Rebooting a device
  • Factory resetting a device
  • Configuring software on a device
  • Updating device firmware
  • Reporting progress (data waiting to be collected)
  • Reporting status (last time data was collected)

The supported devices which have been tested against the Azure IoT SDKs can be found in this article.

Monitoring

IoT Hub offers the ability to monitor the status of operations in real time on the following metrics:

  • Device Identity
  • Device Telemetry
  • Cloud to Device Commands
  • Connections
  • File Uploads

For example, ‘Connections’ monitoring could be used to identify devices which fall outside acceptable upload thresholds, meaning the device is likely to have suffered a hardware failure.

High Availability

I have to say that I was somewhat impressed that IoT Hub has built-in regional high availability as part of the standard service offering.  However, the recovery time objective offered is between two and twenty-six hours, so bear in mind that you could be down for over a day.

If an outage of up to 26 hours isn’t acceptable to your business, then you could consider implementing a secondary IoT Hub.  Some considerations around this include:

  • Fronting IoT Hub with Azure Traffic Manager and a Web App that checks which IoT Hub is active (see the sketch after this list)
  • Exporting and importing the device identity from the primary IoT Hub region to the secondary IoT Hub region on a regular basis
  • Fail back logic when the primary IoT Hub region is restored
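As a rough illustration of the first point, here is a minimal PowerShell sketch (AzureRM module) that fronts two regional health-check Web Apps with a priority-based Traffic Manager profile; all resource names and the probe path are hypothetical.

    # Priority routing: clients resolve to the primary region while its health probe passes
    New-AzureRmTrafficManagerProfile -Name "iot-failover" -ResourceGroupName "RG-IoT" `
        -RelativeDnsName "contoso-iot" -Ttl 30 -TrafficRoutingMethod Priority `
        -MonitorProtocol HTTPS -MonitorPort 443 -MonitorPath "/api/iothub-health"

    # Primary region endpoint: a Web App reporting the health of the primary IoT Hub
    New-AzureRmTrafficManagerEndpoint -Name "primary" -ProfileName "iot-failover" `
        -ResourceGroupName "RG-IoT" -Type ExternalEndpoints -Priority 1 `
        -Target "iot-health-ne.azurewebsites.net" -EndpointStatus Enabled

    # Secondary region endpoint: only used when the primary probe fails
    New-AzureRmTrafficManagerEndpoint -Name "secondary" -ProfileName "iot-failover" `
        -ResourceGroupName "RG-IoT" -Type ExternalEndpoints -Priority 2 `
        -Target "iot-health-we.azurewebsites.net" -EndpointStatus Enabled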

How Is It Priced?

Microsoft use four pricing tiers for IoT Hub, based on the total number of messages per day.  This includes messages both to and from IoT Devices.

The table below is taken from Azure IoT Hub Pricing and is correct as of 25/10/2016.

Edition    Price (per month)    Total messages/day    Message meter size
Free       Free                 8,000                 0.5 KB
S1         £30.55               400,000               4 KB
S2         £305.45              6,000,000             4 KB
S3         £3,054.50            300,000,000           4 KB

Even though the table shows monthly pricing, IoT Hub is billed per day, which means you can choose to scale up or down between the paid tiers at will.  It should be noted that Microsoft do not scale you automatically; instead, quotas and limits are applied if you use too many messages for your tier.  Conversely, if you aren’t using your daily message allowance, you will simply be left on the same tier.
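Scaling between tiers is a one-liner; a minimal sketch, assuming the AzureRM.IotHub module is installed and using hypothetical resource names:

    # Scale an existing hub from S1 to S2 ahead of a heavy ingestion day...
    Set-AzureRmIotHub -ResourceGroupName "RG-IoT" -Name "contoso-hub" -SkuName S2 -Units 1

    # ...then drop back down once the load has passed (billing is per day)
    Set-AzureRmIotHub -ResourceGroupName "RG-IoT" -Name "contoso-hub" -SkuName S1 -Units 1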

Windows Server 2016 – Role Upgrades


On 19th October 2016, Microsoft clarified what can and cannot be upgraded in-place from Windows Server 2012 and 2012 R2 to Windows Server 2016.

The applications/services which cannot be directly upgraded are:

  • Active Directory Federation Services
  • Hyper-V
  • Print and Fax Services

More details can be found in the Server role upgrades and migration matrix for Windows Server 2016.

Server Role | Upgrade from Windows Server 2012 R2? | Upgrade from Windows Server 2012? | Migration supported? | Migration without downtime?
Active Directory Certificate Services | Yes | Yes | Yes | No
Active Directory Domain Services | Yes | Yes | Yes | Yes
Active Directory Federation Services | No | No | Yes | No (new nodes need to be added to the farm)
Active Directory Lightweight Directory Services | Yes | Yes | Yes | Yes
Active Directory Rights Management Services | Yes | Yes | Yes | No
DHCP Server | Yes | Yes | Yes | Yes
DNS Server | Yes | Yes | Yes | No
Failover Cluster | Yes, via the Cluster OS Rolling Upgrade process (node Pause-Drain, Evict, upgrade to Windows Server 2016, rejoin the original cluster) | Not while the server is part of a cluster; yes, when the server is removed from the cluster for upgrade and then added to a different cluster | Yes | No for Windows Server 2012 Failover Clusters; yes for Windows Server 2012 R2 Failover Clusters with Hyper-V VMs or running the Scale-Out File Server role (see Cluster OS Rolling Upgrade)
File and Storage Services | Yes | Yes | Varies by sub-feature | No
Hyper-V | Yes, when the host is part of a cluster, via the Cluster OS Rolling Upgrade process | No | Yes | No for Windows Server 2012 Failover Clusters; yes for Windows Server 2012 R2 Failover Clusters with Hyper-V VMs or running the Scale-Out File Server role (see Cluster OS Rolling Upgrade)
Print and Fax Services | No | No | Yes (Printbrm.exe) | No
Remote Desktop Services | Yes, for all sub-roles, but a mixed-mode farm is not supported | Yes, for all sub-roles, but a mixed-mode farm is not supported | Yes | No
Web Server (IIS) | Yes | Yes | Yes | No
Windows Server Essentials Experience | Yes | N/A (new feature) | Yes | No
Windows Server Update Services | Yes | Yes | Yes | No
Work Folders | Yes | Yes | Yes | Yes, from a Windows Server 2012 R2 cluster when using Cluster OS Rolling Upgrade

Credit to Mike Brannigan for bringing this to my attention.

Microsoft Azure Concepts – SQL Data Warehouse


It has been widely reported that data growth is increasing year on year, with more data created in the past two years than in our entire history before that point.

Along with the increase in information is the requirement to report, analyse, manipulate and interrogate the data set.

With traditional on-premises solutions, you aim to size the data warehouse for peak workloads so that you have enough compute and storage performance to deal with demand.  This capital investment then leads to further questions:

  • What about off-peak times, can I make use of the hardware for other purposes?
  • Who is going to manage and maintain the firmware, operating systems?
  • How are we going to expand the environment or are we sizing it for the future now?
  • What if our predicted future requirements are wrong?  Can we expand compute and storage, and will we have the budget available?

To answer these questions and more Microsoft introduced Azure SQL Data Warehouse in July 2016.

What Is Azure SQL Data Warehouse?

Azure SQL Data Warehouse is a PaaS offering, meaning that there is no requirement to manage the hardware or operating system; you just consume what you need at any given time.

It’s built on a distributed architecture which decouples storage from compute, meaning there is no hard tie between compute power and storage capacity.  Therefore, if you require more storage capacity but your compute resources are adequate, you only pay for the extra storage.

Microsoft use a Massively Parallel Processing (MPP) system, in which each compute node contributes to the overall workload, as shown in the diagram below.  This contrasts with a traditional Symmetric Multiprocessing (SMP) system, such as a single SQL Server, where you scale resources up.

[Diagram: SMP scale-up versus MPP scale-out]

Each node takes a slice of the data from the database; when a query is run, a coordination task ensures that the query execution is performed in the most optimal way.

How Is It Priced?

Microsoft use the concept of a Data Warehouse Unit (DWU), which is an underlying measure of the compute power of the database.

DWUs are charged per hour, and you can scale up or down between different tiers within seconds.

For example, if it took 15 minutes to load three tables and run a report at 100 DWU, then I would expect it to take 5 minutes to load the same tables and run the report at 300 DWU.

As well as the DWU cost, there is also a storage cost, which is charged per GB on Azure Premium Storage.

One of the excellent features is that you can ‘pause’ your SQL Data Warehouse which means that you don’t pay for any compute resources during this time.  Let’s look at a practical example:

8am Monday – Normal working conditions ‘un-pause SQL Data Warehouse at 300 DWU’

6pm Monday – ‘Pause SQL Data Warehouse’

8am Tuesday – End of month reporting ‘un-pause SQL Data Warehouse and increase to 3000 DWU’

10am Tuesday – End of month reporting complete ‘scale down to 300 DWU’
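That schedule can be automated with the AzureRM.Sql cmdlets, for example from Azure Automation runbooks; a minimal sketch, with hypothetical resource names:

    # 8am Monday: resume the warehouse for normal working conditions
    Resume-AzureRmSqlDatabase -ResourceGroupName "RG-DW" -ServerName "contoso-dw-srv" -DatabaseName "ContosoDW"

    # 8am Tuesday: scale up to 3000 DWU for end-of-month reporting
    Set-AzureRmSqlDatabase -ResourceGroupName "RG-DW" -ServerName "contoso-dw-srv" `
        -DatabaseName "ContosoDW" -RequestedServiceObjectiveName "DW3000"

    # 10am Tuesday: reporting complete, scale back down to 300 DWU
    Set-AzureRmSqlDatabase -ResourceGroupName "RG-DW" -ServerName "contoso-dw-srv" `
        -DatabaseName "ContosoDW" -RequestedServiceObjectiveName "DW300"

    # 6pm: pause so no compute is billed overnight
    Suspend-AzureRmSqlDatabase -ResourceGroupName "RG-DW" -ServerName "contoso-dw-srv" -DatabaseName "ContosoDW"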

For more details on pricing, please see SQL Data Warehouse Pricing.

Architecture

Azure SQL Data Warehouse is designed differently and also behaves differently to an on-premises environment.

Those of us who are used to making design decisions in a VMware environment based on load distribution will be vaguely familiar with the concept of a distribution key.  A distribution key determines how Azure SQL Data Warehouse spreads data across nodes.

When data is loaded into a table in Azure SQL Data Warehouse, it is split into 60 distributions (buckets of data) which are attached to different compute nodes.

Hash Distribution – a field within the table is used to distribute the data.  The value of the field is hashed, and rows returning the same hash value are placed on the same compute node, for example:

Row 1 Trainer would go to Compute Node 1

Row 2 Sock would go to Compute Node 2

Row 3 Trainer would go to Compute Node 1

Row 4 Shoe would go to Compute Node 3

The main drawback of hash distribution is that you could end up with a data skew, causing an uneven workload spread across your distributions.

Round-Robin Distribution – Each row within the table is distributed to a different compute node for example:

Row 1 Trainer would go to Compute Node 1

Row 2 Sock would go to Compute Node 2

Row 3 Trainer would go to Compute Node 3

Row 4 Shoe would go to Compute Node 4

Table Types

Azure SQL Data Warehouse offers three different table types:

  1. Clustered Columnstore – organises records by columns and has a high compression ratio, but no secondary indexes.  Note this is the default table type in Azure SQL Data Warehouse.
  2. Heap – no index on the data, so it is much faster at loading as there is no compression, and it does allow secondary indexes.
  3. Clustered B-Tree Index – a table organised on a sorted column, which is often the clustering key.  It enables fast single-row lookups and allows secondary indexes, but doesn’t support compression (a DDL sketch follows this list).
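To make the distribution and table-type choices concrete, here is a hedged sketch of the corresponding T-SQL DDL, submitted from PowerShell with Invoke-Sqlcmd (SqlServer module); the server, database, credentials and table names are all hypothetical.

    # T-SQL illustrating a hash-distributed clustered columnstore fact table
    # and a round-robin heap staging table (hypothetical schema)
    $ddl = "
    CREATE TABLE dbo.FactSales
    (
        SaleId      BIGINT        NOT NULL,
        ProductKey  INT           NOT NULL,
        Amount      DECIMAL(18,2) NOT NULL
    )
    WITH ( DISTRIBUTION = HASH(ProductKey), CLUSTERED COLUMNSTORE INDEX );

    CREATE TABLE dbo.Sales_Staging
    (
        SaleId      BIGINT        NOT NULL,
        ProductKey  INT           NOT NULL,
        Amount      DECIMAL(18,2) NOT NULL
    )
    WITH ( DISTRIBUTION = ROUND_ROBIN, HEAP );
    "

    # Submit the batch to the warehouse (SQL authentication shown for brevity)
    Invoke-Sqlcmd -ServerInstance "contoso-dw-srv.database.windows.net" -Database "ContosoDW" `
        -Username "dwadmin" -Password "<password>" -Query $ddl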

Azure SQL Data Warehouse uses the concept of table partitioning, where a top-level table is split into different data buckets, as shown in the diagram below.

[Diagram: table partitioning in Azure SQL Data Warehouse]

This approach has a number of benefits which include the ease of loading and removing data, the ability to target specific table partitions for maintenance operations and overall increased performance.

Within Azure SQL Data Warehouse it is best to use lower-granularity partition schemes, for example weekly or monthly rather than daily.
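Partitioning is declared in the same WITH clause as the distribution and index choices; here is a sketch of a monthly partitioned variant of the hypothetical fact table above, with illustrative boundary values:

    # Monthly partitions on an integer date key; submit via Invoke-Sqlcmd as before
    $partitionedDdl = "
    CREATE TABLE dbo.FactSales_Partitioned
    (
        SaleId       BIGINT        NOT NULL,
        ProductKey   INT           NOT NULL,
        OrderDateKey INT           NOT NULL,
        Amount       DECIMAL(18,2) NOT NULL
    )
    WITH
    (
        DISTRIBUTION = HASH(ProductKey),
        CLUSTERED COLUMNSTORE INDEX,
        PARTITION ( OrderDateKey RANGE RIGHT FOR VALUES (20160101, 20160201, 20160301) )
    );
    "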

Loading Data into Azure SQL Data Warehouse

When loading data into Azure SQL Data Warehouse we have to bear in mind that it is a Massively Parallel Processing system which therefore means we want to load as much data in parallel as possible.

The DWU you are paying for determines the number of reader threads available to you; these are the threads that read data in parallel.  Note that the number of writer threads into the MPP system remains the same.

DWU      Reader threads
100      8
200      16
300      24
400      32
500      40
600      48
1,000    60

As you can see your DWU has a direct impact on how fast you can load data.

When data is loaded into Azure SQL Data Warehouse, you can choose to use single-client loading methods such as SSIS or Azure Data Factory.  Alternatively, you can use PolyBase for parallel loading.

Loading data using a single-client methodology means that your ‘Control Node’ becomes your bottleneck.  A ‘Control Node’ is a special category of node that receives all the connections and sends the queries on to the compute nodes.

[Diagram: single-client loading funnelling through the Control Node]

The ‘Control Node’ is a static item and doesn’t increase or decrease with the amount of DWU you add or remove, meaning that if you load with SSIS, the ‘Control Node’ can become a bottleneck.

Loading data with PolyBase, which is recommended for parallel loading, means that data is taken straight into the compute nodes without passing through the ‘Control Node’.

Data is usually exported from SQL and then loaded into Azure Blob Storage as a CSV before being imported into the compute nodes using an automation routine.

[Diagram: parallel loading with PolyBase]

This means that using PolyBase loading scales more efficiently and is usually much faster.
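For completeness, a hedged sketch of the PolyBase objects involved, again submitted via Invoke-Sqlcmd; the storage account, container, credential and table names are all hypothetical.

    # T-SQL: external data source over the blob container, then a parallel CTAS load
    $polybase = "
    -- One-off security objects for reaching the storage account
    CREATE MASTER KEY;
    CREATE DATABASE SCOPED CREDENTIAL BlobCred
    WITH IDENTITY = 'user', SECRET = '<storage-account-key>';

    -- Point PolyBase at the container holding the exported CSV files
    CREATE EXTERNAL DATA SOURCE SalesBlob
    WITH ( TYPE = HADOOP,
           LOCATION = 'wasbs://exports@contosostorage.blob.core.windows.net',
           CREDENTIAL = BlobCred );

    CREATE EXTERNAL FILE FORMAT CsvFormat
    WITH ( FORMAT_TYPE = DELIMITEDTEXT,
           FORMAT_OPTIONS ( FIELD_TERMINATOR = ',' ) );

    -- External table over the CSVs, then load straight into the compute nodes
    CREATE EXTERNAL TABLE dbo.Sales_External
    (
        SaleId      BIGINT        NOT NULL,
        ProductKey  INT           NOT NULL,
        Amount      DECIMAL(18,2) NOT NULL
    )
    WITH ( LOCATION = '/sales/', DATA_SOURCE = SalesBlob, FILE_FORMAT = CsvFormat );

    CREATE TABLE dbo.FactSales_FromBlob
    WITH ( DISTRIBUTION = HASH(ProductKey), CLUSTERED COLUMNSTORE INDEX )
    AS SELECT * FROM dbo.Sales_External;
    "

    Invoke-Sqlcmd -ServerInstance "contoso-dw-srv.database.windows.net" -Database "ContosoDW" `
        -Username "dwadmin" -Password "<password>" -Query $polybase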

Backups

Built into Azure SQL Data Warehouse is backup functionality which provides a consistent backup of the data warehouse every four to eight hours.  This means that you will receive between three and six backups per day, which are stored for seven days.

You can also trigger snapshots of the Blob Storage on which the Azure SQL Data Warehouse resides if you want a one-off backup or a more frequent backup schedule.  Note this would be triggered by PowerShell and automated with Azure Automation; see Using Azure PowerShell with Azure Storage.
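A minimal sketch of triggering such a snapshot with the classic Azure.Storage cmdlets, using hypothetical account and container names; in practice this would run inside an Azure Automation runbook on a schedule.

    # Build a context for the storage account backing the warehouse
    $ctx = New-AzureStorageContext -StorageAccountName "contosostorage" `
        -StorageAccountKey "<storage-account-key>"

    # Snapshot every blob in the container (point-in-time copies, billed incrementally)
    Get-AzureStorageBlob -Container "dwdata" -Context $ctx | ForEach-Object {
        $_.ICloudBlob.CreateSnapshot()   # snapshot method on the underlying storage SDK object
    }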

Migration Utility

Microsoft provides a Migration Utility that supports SQL Server 2012 or above and Azure SQL Database.  It produces a report pointing out possible migration issues which could be encountered when moving to Azure SQL Data Warehouse, and it can help to assist with schema and data migrations.

Microsoft Azure Concepts – Mobile Apps


Mobile Apps are a recent addition to Microsoft Azure and form part of App Services; they are specifically targeted at the delivery of applications on mobile technology.

What Are Mobile Apps?

In a nutshell, Mobile Apps are a PaaS offering, which means that the developer can create and manage mobile applications without needing to worry about managing and maintaining the underlying operating system.

The App Service Plan you choose entitles you to a number of features for your app, which are:

  • Amount of disk space
  • SLA
  • Number of instances
  • Auto Scale
  • Backups
  • Geo Distributed Deployment
  • Custom Domain
  • Staging Environment
  • Offline Sync
  • Active Mobile Devices

Most businesses would choose the Standard tier, as this provides an SLA of 99.95% along with auto-scale.  More information on the types of App Service Plans can be found here.
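As a quick illustration, a hedged sketch of creating a Standard-tier plan and a backend site for the mobile app with the AzureRM cmdlets; Mobile Apps share the App Service infrastructure, and the names and location here are placeholders.

    # Standard tier: 99.95% SLA plus auto-scale support
    New-AzureRmAppServicePlan -ResourceGroupName "RG-Mobile" -Name "mobile-plan" `
        -Location "North Europe" -Tier Standard -NumberofWorkers 2 -WorkerSize Small

    # The mobile backend is an App Service app hosted on that plan
    New-AzureRmWebApp -ResourceGroupName "RG-Mobile" -Name "contoso-mobile-api" `
        -Location "North Europe" -AppServicePlan "mobile-plan"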

As part of the Mobile Apps PaaS offering, identity and access are included, as this is underpinned by Azure Active Directory.  This means that if you use Azure AD Connect, in either same sign-on or single sign-on mode, you have the ability to provide enterprise-level authentication using your on-premises directory services.  Or, if the mobile application is going to be standalone, the ability to use social providers such as Facebook, Google or Twitter is provided.

Offline Data

Mobile coverage is an issue in the UK; I can be driving on the M1 and in some areas have no phone signal.  This is a challenge when developing mobile applications, as you can’t expect the user always to be in an area that has excellent coverage.  Built into Mobile Apps is an offline sync client which uses SQLite to cache and modify data locally on the end device.  It works on a push basis, with the end device notifying Mobile Apps to receive data over REST APIs.

Offline sync is available on Android, Cordova, iOS and Windows.

Notification Hubs

Integrated with Mobile Apps are Azure Notification Hubs.  These allow you to send notifications to mobile devices using each platform’s Platform Notification System (PNS).  In the same way that offline sync is available across platforms, so are Notification Hubs.

Typical use cases for Azure Notification Hubs are:

  • Breaking news or offers
  • Updates per location e.g. travel issues
  • Updates per group e.g. people with a similar interest
  • One time passwords for MFA

Upload Content

Mobile apps by their very nature usually require input from the client device.  This could be in the form of uploading documents or images to Azure Blob Storage.  To achieve this you would use a Shared Access Signature (SAS): the client device calls the Mobile App to generate a SAS, which passes a storage token back to the client device, enabling the user to write data to your Azure Blob Storage account on a time-restricted basis.
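Server-side, the backend would generate that token with the storage SDK; here is a hedged PowerShell equivalent using the Azure.Storage cmdlets to show the shape of the operation, with hypothetical names and a one-hour expiry.

    # Context for the storage account that receives the uploads
    $ctx = New-AzureStorageContext -StorageAccountName "contosostorage" `
        -StorageAccountKey "<storage-account-key>"

    # Write-only SAS for a single blob, valid for one hour
    $sas = New-AzureStorageBlobSASToken -Container "uploads" -Blob "photo-001.jpg" `
        -Permission w -ExpiryTime (Get-Date).AddHours(1) -Context $ctx

    # The client appends the token to the blob URI and PUTs the content directly
    "https://contosostorage.blob.core.windows.net/uploads/photo-001.jpg$sas"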

The diagram below provides a logical overview of the use of mobile apps and how the components integrate.

[Diagram: logical overview of Mobile Apps components and how they integrate]