vSphere 5.0 & 5.1 – Licenses After End of Support?

It’s well documented on the VMware Product Lifecycle Matrix that on 24th August 2016 general support for vSphere 5.0 and 5.1 will end, moving into technical guidance.

Way back on 21st May 2014, VMware moved vSphere 4.x into ‘end of general support’.  Then a few months later, on 15th August 2014, VMware removed the ability to download, downgrade or generate new license keys for vSphere 4.x (see VMware KB2039567).  This essentially meant that you couldn’t expand your unsupported hypervisor environment.

So the question is: how long until VMware removes the ability to download or generate vSphere 5.0 and 5.1 licenses?

vSphere Replication – Consider These Points Before Using It

vSphere Replication has been embedded in the ESXi kernel for quite some time now.  When a virtual machine performs a storage ‘write’, this is mirrored by the vSCSI filter at the ESXi host level before it is committed to disk.  The vSCSI filter sends its mirrored ‘write’ to the vSphere Replication appliance, which is responsible for transmitting the ‘writes’ to its target, normally in a DR site.

The process is shown at a high level in the diagram below.

vSphere Replication v0.1

I’m often asked by customers if they should consider using it, given the benefits which it provides, which include:

  • Simplified management using hypervisor based replication
  • Multiple point-in-time retention policies to store more than one instance of a protected virtual machine
  • Application consistency for Microsoft Windows operating systems with VMware Tools installed
  • VMs can be replicated to and from any storage type
  • An initial seed can be performed

As an impartial adviser, I also have to cover the areas in which vSphere Replication isn’t as strong.  These are the points I suggest are considered as part of any design:

  • vSphere Replication relies on the vSphere Replication appliance; if this is offline or unavailable then replication stops for all virtual machines
  • vSphere Replication requires the virtual machine to be powered on for replication to occur
  • vSphere Replication is not usually as efficient as array based replication, which often has compression and intelligence built into the replication process.  If you have limited bandwidth you may violate recovery point objectives
  • vSphere Replication will reduce the bandwidth available to other services/functions if you are using logically separated networks over 10GbE
    • Note that Network IO Control can be used to prioritise access to bandwidth in times of contention, but requires Enterprise Plus licensing
  • vSphere Replication requires manual routing to send traffic across a replication VLAN, which increases the complexity of the environment
  • vSphere Replication is limited to 200 virtual machines per Replication Appliance and 2,000 virtual machines overall, as detailed in VMware KB2102453
  • After an unplanned failover and reprotect, vSphere Replication performs a checksum comparison; depending on the length of separation and the amount of changed data, this can result in a full sync
  • An HA event on an ESXi host at the production site will trigger a full synchronisation of the virtual machines that resided on the failed host.  See the vSphere Replication FAQs

The last point, for me, is a deal breaker.  Let’s consider it again: if we have an ESXi host that suffers a PSOD, then all of its VMs will require a full synchronisation.

What’s The Impact?

If we have an inter-site link of 100Mbps with an overhead of 10%, this gives us an effective throughput of 90Mbps.

If we have an average sized VMware environment with a couple of VMs holding 2TB of data each, replicated across that 100Mbps inter-site link, then you are looking at over 4 days to perform a full synchronisation of both.

We also need to consider the impact on the rest of your VMs, which will have their recovery point objectives violated while the bandwidth is being consumed by the 2 x 2TB VMs.  Not exactly where you want to be!

The Maths Per 2TB VM

8Mb = 1MB

2TB = 2,097,152MB = 16,777,216Mb

16,777,216Mb / 90Mbps = 186,414 seconds

186,414 seconds / 60 = 3,107 minutes

3,107 minutes / 60 = 51 hours 47 minutes
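The arithmetic above can be sketched as a quick shell calculation; the 100Mbps link, 10% overhead and 2TB VM size are the figures from this example, so plug in your own:

```shell
#!/bin/sh
# Rough full-synchronisation estimate for a single VM across an inter-site link.
vm_tb=2                                          # size of the VM in TB
link_mbps=100                                    # raw link speed in Mbps
effective_mbps=$(( link_mbps * 90 / 100 ))       # 10% overhead -> 90Mbps usable

# 2TB = 2 * 1024 * 1024 MB, and 1MB = 8Mb
vm_megabits=$(( vm_tb * 1024 * 1024 * 8 ))       # 16,777,216Mb

sync_seconds=$(( vm_megabits / effective_mbps )) # ~186,413 seconds
sync_hours=$(( sync_seconds / 3600 ))            # ~51 hours per VM

echo "Full sync per ${vm_tb}TB VM: ${sync_seconds}s (~${sync_hours} hours)"
```

Double that for the two 2TB VMs in this scenario and you arrive at roughly 103 hours, which is where the ‘over 4 days’ figure comes from.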

External Platform Services Controller, The New Standard?

In vSphere 5.x, the most common deployment topology was a vCenter with all the components installed on the same virtual machine.  The design choices for using a single virtual machine with all services running on it included:

  • Simplicity of management
  • Backup and restore, with only a single virtual machine to protect
  • Reduction of overall guest operating system license costs
  • Fewer compute resources required
  • Reduced complexity of HA rules (external SSO starts first, then vCenter)
  • Single virtual machine to secure and harden

Depending on the size of the environment, you might see one or many vCenters with embedded services.  External services such as SRM, vROps and Horizon View would then hook into the vCenter.

From an architectural standpoint, you knew that by deploying a vCenter with embedded services you would cover most, if not all, future third party deployment scenarios, e.g. if you later added SRM, you were covered.

With vSphere 6, this has all changed and I would question if deploying a vCenter with an embedded Platform Services Controller is the right way to go.

Deprecated Topology

VMware KB2108548 shows that a single vCenter with an embedded Platform Services Controller is a supported topology.

Single vCenter

Excellent, you might say.  But what if I want to add third party services such as SRM in the future?  Well, the answer is that you won’t be supported using this topology.

Deprecated Topology

This means that you would need to change the architecture from what was originally deployed to the topology below in order to be in a supported configuration.

Supported Topology

Changing vCenter 6 into a supported architecture isn’t straightforward.  The main gotcha is that you are unable to change a vCenter using an embedded Platform Services Controller to an external Platform Services Controller directly.  The only way that I’m aware of is to upgrade to vCenter 6.0 U1 and follow VMware KB2113917.

The impact of the following points also needs to be considered when changing from an embedded to external Platform Services Controller:

  • SSL Certificates
  • Third Party Plugins
  • Third Party Applications such as vROps
  • Backup & Restore
  • Change Control
  • Security and Permissions

Final Thought

A vCenter with an embedded Platform Services Controller is applicable to small environments in which you have a static topology with no requirement for Enhanced Linked Mode or integration with external products.  Also consider the upgrade path from an embedded Platform Services Controller to an external Platform Services Controller.

In any environment where there is a possibility that you will need to integrate vCenter with a third party piece of software such as SRM or vRA, or if you require Enhanced Linked Mode, then start your architecture with an external Platform Services Controller.

Upgrading vSphere 5.5 ‘Simple Install’ with SRM and Linked Mode to vSphere 6

A fairly common deployment topology with vSphere 5.5 was to use the ‘Simple Install’ method which placed all the individual vCenter components onto a virtual or physical vCenter Server.

This would then hook into an external virtual or physical SRM server, with Linked Mode used for ease of management.

An example vSphere 5.5 topology is shown below.

vSphere 5.5 Simple Install

As well as the normal considerations with vSphere upgrades around:

  • Hardware compatibility and firmware versions
  • Component interoperability
  • Database compatibility
  • vCenter Plugins
  • VM Hardware & Tools
  • Backup interoperability
  • Storage interoperability

We now have to consider the Platform Services Controller.

Platform Services Controller

The Platform Services Controller is a group of infrastructure services containing vCenter Single Sign-On, License Service, Lookup Service and VMware Certificate Authority.

vCenter Single Sign-On provides secure authentication services between components using secure token exchange, rather than relying directly on a third party such as Active Directory.

The License Service provides a common license inventory and management capability.

The VMware Certificate Authority provides signed certificates for each component.

The issue arises with the vCenter SSO component, as most people would have opted for the vSphere 5.5 ‘Simple Install’.  This means you end up with an embedded Platform Services Controller, see ‘How vCenter Single Sign-On Affects Upgrades’.

The embedded Platform Services Controller topology has been deprecated by VMware, see ‘List of Recommended Topologies for VMware vSphere 6.0.x’.  This is also confirmed in the VMware Site Recovery Manager 6.1 documentation under ‘Site Recovery Manager in a Two-Site Topology with One vCenter Instance per Platform Services Controller’.

What Does This Mean?

Due to the architectural changes between vSphere 5.5 and 6, you cannot perform an in-place upgrade from vSphere 5.5 to vSphere 6 if you originally selected ‘Simple Install’, as you will end up with a deprecated topology.

The only choice will be a new vCenter 6 deployment using the topology shown below.

vSphere 6 PSC with SRM

This also means you will need to deploy an extra two virtual machines to support this configuration.

vSphere 5.x Space Reclamation On Thin Provisioned Disks

Space reclamation can be performed either on vSphere after a Storage vMotion has taken place or when files have been deleted from within a guest operating system.

With the release of LeftHand OS 12.0, as covered in my post ‘How To: HP StoreVirtual LeftHand OS 12.0 With T10 UNMAP‘, I thought it would be a good idea to share the process of space reclamation within the guest operating system.

The reason for covering space reclamation within the guest operating system is that I believe it’s the more common scenario in business-as-usual operations.  Space reclamation on vSphere and Windows is a two-step process.

  • Zero the free space in the guest operating system if you are running Windows Server 2008 R2 or below
    • UNMAP is enabled automatically in Windows Server 2012 and above
    • If the VMDK is thin provisioned you might want to shrink it back down again
  • Reclaim the zeroed space on your VMFS file system

I’m going to run space reclamation on Windows Server 2008 R2 on a virtual machine called DC01-CA01, which has the following storage characteristics:

Original Provisioned Space

  • Windows C: Drive – 24.9GB free space
  • Datastore – 95.47GB free space
  • Volume – 96.93GB consumed space
    • 200GB Fully Provisioned with Adaptive Optimisation enabled

Space Reclamation 05

Next, I’m going to drop two files onto the virtual machine which total 2.3GB.  This changes the storage characteristics of DC01-CA01 to the following:

Increased Provisioned Space

  • Windows C: Drive – 22.6GB free space
    • 2.3GB increase in space usage
  • Datastore – 93.18GB free space
    • 2.29GB increase in space usage
  • Volume – 99.22GB consumed space
    • 2.29GB increase in space usage

Space Reclamation 06

Sdelete

Next, I deleted the files from the C: drive on DC01-CA01 and emptied the recycle bin, followed by running sdelete with the command ‘sdelete.exe -z C:’.  This takes a bit of time, so I’m going to make a cup of tea!

Space Reclamation 07

WARNING: Running sdelete will increase the size of the thin provisioned disk to its maximum size.  Make sure you have space to accommodate this on your volume(s).

VMKFSTools

Now that sdelete has finished, we need to run vmkfstools against the VMDK to shrink the thin provisioned disk back down to size.  To do this, the virtual machine needs to be powered off.

SSH into the ESXi host and cd into the directory in which your virtual machine resides.  In my case this is cd /vmfs/volumes/DC01-NODR01/DC01-CA01

Next, run the command ls -lh *.vmdk, which shows the space being used by the virtual disks.  Currently this stands at 40GB.

Space Reclamation 13

Next, we want to get rid of the zeroed blocks in the VMDK by issuing the command vmkfstools --punchzero DC01-CA01.vmdk

Space Reclamation 15
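Pulled together, the host-side steps above look like the following; the datastore and virtual machine names are the ones from this example, so adjust them to suit, and remember the virtual machine must be powered off first:

```shell
# Run over SSH on the ESXi host, with the virtual machine powered off
cd /vmfs/volumes/DC01-NODR01/DC01-CA01

# Check the space currently consumed by the virtual disks
ls -lh *.vmdk

# Deallocate the blocks that were zeroed inside the guest by sdelete
vmkfstools --punchzero DC01-CA01.vmdk

# Confirm the thin provisioned disk has shrunk back down
ls -lh *.vmdk
```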

Now that’s done, let’s check our provisioned space to see what is happening.

Interim Provisioned Space

  • Windows C: Drive – 24.9GB free space
    • Back to the original size
  • Datastore – 95.82GB free space
    • 0.35GB decrease from original size
  • Volume – 121.35GB consumed space
    • 24.42GB increase from the original size!

Space Reclamation 16

So what’s going on then?  Well, Windows is aware that blocks have been deleted and has passed this information on to the VMFS file system, which has decreased the VMDK size using the vmkfstools --punchzero command.  However, no one has told my HP StoreVirtual that it can reclaim the space and allocate it back out again.

The final step is to issue the vmkfstools -y 90 command.  More details about this command are covered in Jason Boche’s excellent blog post entitled ‘Storage: Starting Thin and Staying Thin with VAAI UNMAP‘.

Note: the vmkfstools -y method has since been deprecated and replaced with esxcli storage vmfs unmap -l datastorename.  See VMware KB2057513 for more details.
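As a minimal sketch of the esxcli approach, using the datastore name from this example (the -n reclaim unit is optional, and these commands assume an ESXi release that includes the unmap namespace):

```shell
# Reclaim dead space on the VMFS datastore; esxcli works through the free
# space in small iterations rather than one large balloon file
esxcli storage vmfs unmap -l DC01-NODR01

# Optionally set the number of VMFS blocks reclaimed per iteration
esxcli storage vmfs unmap -l DC01-NODR01 -n 200
```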

WARNING: Running vmkfstools -y 90 will create a balloon file on your VMFS datastore.  Make sure you have space to accommodate this on your datastore, and that no operations which could drastically increase the space used on the datastore will happen whilst the command is running.

Space Reclamation 17

One final check of provisioned space now reveals the following:

Final Provisioned Space

  • Windows C: Drive – 24.9GB free space
    • Back to the original size
  • Datastore – 95.81GB free space
    • 0.34GB decrease from original size
  • Volume – 95.04GB consumed space
    • 1.89GB decrease from the original size

Final Thought

Space reclamation has three different levels, guest operating system, VMFS file system and the storage system.  Reclamation needs to be performed on each of these layers in turn so that the layer beneath knows it can reclaim the disk space and allocate it out accordingly.

The process of space reclamation isn’t straightforward and should be run out of hours, as each step will have an impact on the storage subsystem, especially if it’s run concurrently across virtual machines and datastores.

My recommendation is to reclaim valuable disk space out of hours to avoid potential performance or capacity problems.