HP ConvergedSystem 200-HC StoreVirtual System – Questions Answered

Background

HP released two offerings of the HP ConvergedSystem 200-HC StoreVirtual System last year.  Essentially, they have taken ESXi, HP StoreVirtual VSA and OneView for vCenter, and automated the setup process using OneView Instant On.

HP Converged System 200-HC Diagrams v0.1

Two models are available:

  • HP CS 240-HC StoreVirtual System, this has 4 nodes each with:
    • 2 x Intel E5-2640v2 2.2GHz 8 Core Processor
    • 128GB RAM
    • 2GB Flash Backed HP Smart Array P430 Controller
    • 2 x 10GbE Network Connectivity
    • 1 x iLO4 Management
    • 6 x SAS 1.2TB 10K SFF Hard Drives
    • Around 11TB of usable capacity
  • HP CS 242-HC StoreVirtual System, this has 4 nodes each with:
    • 2 x Intel E5-2648v2 2.2GHz 10 Core Processor
    • 256GB RAM
    • 2GB Flash Backed HP Smart Array P430 Controller
    • 2 x 10GbE Network Connectivity
    • 1 x iLO4 Management
    • 4 x SAS 1.2TB 10K SFF Hard Drives
    • 2 x 400GB Mainstream Endurance SSD
    • Around 7.5TB of usable capacity

These are marketed with the ability to provision virtual machines within 30 minutes.

What Does Provision Virtual Machines Within 30 Minutes Really Mean?

To answer this question you need to understand what HP have saved you from doing, which is:

  • Installing ESXi across 4 x Hosts
  • Installing vCenter to a basic configuration
  • Installing HP StoreVirtual VSA to a basic configuration across 4 x Hosts
  • Deploying a Management VM running Windows Server 2012 Standard with OneView for vCenter and the CMC for StoreVirtual management (this comes pre-installed)

So after completing the initial setup, you do have the ability to upload an ISO and start deploying an OS image.

What About The Stuff Which Marketing Don’t Mention? AKA Questions Answered?

Database

  • SQL Express is used as the database (local instance on the Management VM).  I have real concerns around the database if logging levels are increased to troubleshoot issues and/or the customer doesn’t perform any kind of database maintenance
    • I’m waiting on confirmation from HP as to whether you can migrate the SQL database instance to a full-blown version

Host Profiles

  • Grey area, these can be used.  However, HP would rather you stay with the base configuration of the nodes (much like the networking, see below).

Licences

  • The solution is only supported using Enterprise or Enterprise Plus VMware licenses, with the preference being HP OEM.
  • Windows Server 2012 Standard is supplied as the Management VM.  Initially, this runs from a local partition and is then Storage vMotioned onto the HP Converged Cluster.  Windows licensing dictates that when an OS is moved across hosts using Standard Edition you cannot move the OS back for 90 days, or you need to license each node for the potential number of VMs that could be run.
    • HP have confirmed that you receive 2 x Windows Server 2012 Standard licenses and DRS Groups Manager rules are configured to only allow the Management VM to migrate between these two ESXi Hosts.

Management Server

  • You are able to upgrade the Management Server VM in terms of RAM, CPU and Disk Space and be supported.
  • You cannot add additional components to the Management Server VM and be supported e.g. VUM, vCenter SysLog Service
    • I’m waiting on confirmation from HP around what is and isn’t supported; I would err on the side of caution and not install anything extra

Networking

  • The 1GbE connections are not used apart from the initial configuration of the Management Server.  My understanding is that these are not supported for any other use.
  • HP prefer you to stay with the standard network configuration, and this causes me concern: a 10GbE network carrying Management, iSCSI, Virtual Machine and vMotion traffic.  How do you control vMotion bandwidth usage on a Standard vSwitch? You can’t; a Distributed vSwitch is a much better option, but if you need to reconfigure a node, you will need to perform a vSS to vDS migration (a quick way to inventory the existing vSS configuration is shown after this list)
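
Before any vSS to vDS migration, it is worth capturing the existing standard vSwitch layout on each node.  A couple of generic ESXi shell commands cover it (nothing here is specific to the 200-HC):

    # List the standard vSwitches, their uplinks and port groups
    esxcli network vswitch standard list

    # List the VMkernel interfaces (Management, vMotion, iSCSI) and the port groups they use
    esxcli network ip interface list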

Updates

  • You can upgrade individual components separately; however, you must stay within the HP Storage SPOCK for the ConvergedSystem 200-HC StoreVirtual (note: an HP Passport login is required)

Versions

At the time of this post, the latest supported versions are as follows (a quick way to check what a host is currently running is shown after the list):

  • vSphere 5.5 U2, no vSphere 6
  • vCenter 5.5 U2
  • HP StoreVirtual VSA 11.5 or 12.0
  • HP OneView for vCenter Storage/Server Modules 7.4.2 or 7.4.4
  • HP OneView Instant On 1.0 or 1.0.1
  • PowerCLI 5.8 R1
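
To confirm what a host is currently running before comparing it against this list, a couple of standard ESXi shell commands will do the job (these are generic ESXi commands rather than anything specific to the 200-HC):

    # Report the ESXi version and build number
    vmware -vl

    # The same information via esxcli
    esxcli system version get

The StoreVirtual VSA version is visible in the CMC, and the OneView for vCenter version from within vCenter.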

Final Thoughts

HP have put together a slick product which automates the initial installation of ESXi and gives you a basic configuration of vCenter.  What it doesn’t give you is a design to say that your workloads are going to be suitable for the environment, or a solution that meets a client’s requirements.

How To: Rehost HP StoreVirtual Licenses

I’m not sure exactly when, but HP changed the licensing portal from ‘Poetic’ to a new portal named ‘My HP Licensing Portal’.  All of the information looked exactly the same; however, you could not rehost HP StoreVirtual licenses.

The purpose of this blog post is to assist anyone who was in the same situation as me, scratching their head trying to figure it out!

Problem

You have an existing HP StoreVirtual license which ties the feature set to the first NIC MAC address on your StoreVirtual VSA.  You have changed, upgraded or redeployed your VSA and you need to rehost the license onto the new MAC address.

Solution

Browse to myhplicensing.hp.com and log in with the email address and password that you use for your portal account.

Note: If you are not sure which email address is tied to your account, log in to HP Licensing for Software and select Administration > My Profile, which will show your email address.

Once logged into My HP Licensing, select Rehost Licenses.

StoreVirtual License 01

Next, select Rehost Licenses and click on the MAC Address you want to update.

StoreVirtual License 02

Select the tick box to confirm this is the license that you want to rehost and then click Rehost.

StoreVirtual License 03

Select ‘Enter New Locking ID’ and enter the MAC Address of the first network adapter on your StoreVirtual VSA.  Then click Next.

StoreVirtual License 04

This bit takes a while, but once done you will receive the license file, which can be saved or emailed.

StoreVirtual License 05

How To: HP StoreVirtual LeftHand OS 12.0 With T10 UNMAP

HP have announced the release of LeftHand OS 12.0, which finally includes T10 UNMAP, meaning we can now start thin and stay thin with StoreVirtual.

The feature enhancements are:

  • Space Reclamation
    • Reclaim space on thinly and fully provisioned volumes used by Windows Server 2012 or later, and vSphere 5 or later
  • StoreVirtual Multi-Path Extension Module (MEM) for vSphere
    • Provides data path optimization similar to StoreVirtual DSM for Microsoft MPIO
  • REST API for StoreVirtual
    • Enables automation and scripting of clusters, provisioning and volume management
  • StoreVirtual VSA Term License Management
    • Enforces term licensing for StoreVirtual VSA

So let’s take LeftHand OS 12.0 for a spin and test out T10 UNMAP.

Centralised Management Console Upgrade

The first step is to upgrade your Centralised Management Console to LeftHand OS 12.0.  Once done, you will be greeted by your new 12.0 screen.  First impressions: it is a lot faster to discover StoreVirtual nodes and access Management Groups, well done HP!

StoreVirtual Node Upgrade

Just a word of warning: I would always recommend performing upgrades out of hours, as when a StoreVirtual node reboots you will lose a percentage of your cluster’s performance e.g. if you have two nodes in your cluster and you reboot one, then you will lose approximately 50% of your performance.

The good news for those that are using physical StoreVirtual nodes is that HP have reduced the reboot time.

When you are ready to upgrade, the procedure is as slick as always.  Download your updates via the CMC and then apply them to your nodes one at a time.

Enable Space Reclamation

Space reclamation is enabled manually at the Management Group level.  Right click your Management Group and select Enable Space Reclamation.

Space Reclaimation 01

 

Next we receive a warning that once upgraded you cannot downgrade to previous versions of LeftHand OS that do not support space reclamation.

Enter your Management Group name (in my case DC01-MG01), accept the disclaimer and enable Space Reclamation.

Space Reclaimation 01

I suggest checking your Device and RAID status to ensure everything is OK before moving forward.  This is done by selecting your Cluster, followed by the Node and then selecting Storage.  As you can see I have Adaptive Optimisation enabled and my RAID Status is normal.

Space Reclaimation 03
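
It is also worth confirming from the vSphere side that ESXi now sees the UNMAP (Delete) primitive as supported on the StoreVirtual volume.  A quick check from the ESXi shell (the naa identifier below is just a placeholder for your own device):

    # Find the device backing the datastore
    esxcli storage vmfs extent list

    # Check the VAAI primitives for that device; look for "Delete Status: supported"
    esxcli storage core device vaai status get -d naa.xxxxxxxxxxxxxxxx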

Space Reclamation Test

Space reclamation can be performed either on vSphere after a Storage vMotion has taken place or when files have been deleted from within a guest operating system.

In this test I’m going to perform a Storage vMotion from one datastore to another and then zero the space on the VMFS file system.

The test is going to be run on the datastore DC02-NODR02, which has a single virtual machine inside of it, with the following storage characteristics:

  • Datastore DC02-NODR02
    • Capacity 199.75GB
    • Provisioned Space 45.01GB
    • Free Space 177.29GB
    • Used Space 22.46GB

Space Reclaimation 08

  • Volume – 17.50GB consumed space
    • 200GB Fully Provisioned with Adaptive Optimisation enabled

Space Reclaimation 09

Next I’m going to perform a Storage vMotion of the virtual machine onto the datastore DC02-NODR03.  Time to grab a cup of tea before we move on and run vmkfstools to reclaim the space.

VMKFSTools

Now the Storage vMotion has finished, we need to run vmkfstools on the datastore to reclaim the space.  Jason Boche has an excellent blog post entitled ‘Storage: Starting Thin and Staying Thin with VAAI UNMAP’ on the vmkfstools command.

On an ESXi Host that can see the datastore DC02-NODR02, I’m going to run the command ‘vmkfstools -y 90’

Space Reclaimation 10

Note: in a production environment you would reclaim the space out of hours and use 60% of the available space.
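
For reference, a minimal run-through from the ESXi shell looks something like the sketch below (vmkfstools -y has to be run from within the datastore; on ESXi 5.5 and later the esxcli equivalent is the supported method):

    # ESXi 5.0/5.1: change into the datastore and reclaim unused blocks
    # (90% here to match the test; 60% is a safer figure for production)
    cd /vmfs/volumes/DC02-NODR02
    vmkfstools -y 90

    # ESXi 5.5 and later: vmkfstools -y is deprecated in favour of esxcli
    esxcli storage vmfs unmap -l DC02-NODR02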

If we now check the volume DC02-NODR02, its consumed space is 0.46MB, which is the VMFS file system.

Space Reclaimation 11

 

Monitoring Space Reclamation

HP have introduced some extra performance statistics to enable space reclamation to be monitored which include:

  • IOPS Space Reclamation
  • Latency UNMAP

These can be added to the Performance Monitor window so that you can verify the effect of space reclamation on your StoreVirtual node.

Space Reclaimation 12

Lessons Learnt: HP StoreVirtual P4500 10 GbE Upgrade & Virtual Connect

Purpose

The purpose of this blog post is to give you an insight into some of the quirky behaviour that I experienced during an upgrade of HP infrastructure, specifically in relation to the HP StoreVirtual P4500 and Virtual Connect.

Background

Existing HP infrastructure exists across a campus which has recently been upgraded to redundant 10Gbps links.

Site A contains:

  • 2 x HP Lefthand P4500 (before upgrade to LeftHand OS 11.5)
  • 1 x C7000 Blade Chassis with HP BL460c G7 blades

Site B contains:

  • 2 x HP Lefthand P4500 (before upgrade to LeftHand OS 11.5)
  • 1 x C3000 Blade Chassis with HP BL460c G6 blades
    • C3000 Blade Chassis to be disposed of

Site C contains:

  • HP Failover Manager for LeftHand

The underlying hypervisor is vSphere 4.1 which is to be upgraded once the hardware is in situ.

Design

The design was quite straightforward; to meet the customer requirements, we needed to:

  • Provide a 10Gbps core network using redundant HP5820 switches in an IRF stack
  • Introduce a vSphere Metro Storage Cluster on vSphere 5.5 U1
    • Ability to run workloads at either location
    • Provide operational simplicity
  • Introduce an additional C7000 Blade Chassis
  • Introduce HP BL460c Gen8 Blades for the new C7000 chassis
  • Introduce a performance tier using HP StoreVirtual 4335 nodes
  • Introduce an archive tier using HP StoreVirtual 4530 nodes
  • Upgrade existing P4500 to 10GbE

A logical overview of the solution is shown below.

Blog Post

Pre-Requisites

As part of the pre-requisite work, the HP firmware on the existing infrastructure had been upgraded, and all new components had been brought to the same firmware and software levels.

Upgrade Purpose

The purpose of the upgrade was to introduce/change the following items before vSphere was upgraded to 5.5 U1:

  • HP 5820 Core
    • Change configuration to enable ESXi 4.1 Port Groups to be responsible for VLAN tagging
  • P4500 10GbE Cards
    • Existing 1GbE Cards to be used for iSCSI Management
    • New 10GbE Cards to be used for iSCSI Storage Traffic
  • Virtual Connect
    • Change configuration to enable ESXi 4.1 Port Groups to be responsible for VLAN tagging
  • vSphere
    • Update Port Groups so that ESXi is responsible for adding VLAN headers (see the sketch after this list)
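
On ESXi 4.x the port group VLAN IDs can be set either through the vSphere Client or from the host shell with esxcfg-vswitch; a quick sketch using an example port group name and VLAN ID:

    # Tag an example port group with VLAN 100 on vSwitch0
    esxcfg-vswitch -p "VM Network" -v 100 vSwitch0

    # List vSwitches and port groups to verify the VLAN assignments
    esxcfg-vswitch -l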

Lessons Learnt – Virtual Connect

On the new C7000 Chassis with HP BL460c Gen 8 Blades, Virtual Connect was used to logically separate bandwidth for four different networks, each containing traffic for a single subnet.  A VLAN tag was assigned to each subnet, allowing ESXi 4.1 to apply the VLAN headers.

From the ESXi DCUI we were unable to ping from the VMkernel Management network to the HP5820, which was acting as the default gateway.  However, placing a laptop into an ‘access port’ on the same VMkernel Management VLAN, we could ping the default gateway on the HP5820.

After some troubleshooting we found that the issue was with Virtual Connect, if you define a network as a ‘single network’ with a VLAN tag assigned to it, Virtual Connect very kindly removes the VLAN header.

Resolution: Select Multiple Networks rather than a Single Network

The next issue we came across was Virtual Connect on the existing C7000 with HP BL460c G7 Blades.  Virtual Connect would accept the changes to the Shared Uplink Set and Server Profiles so that we were now using ‘Multiple Networks’ with VLAN tags; however, we couldn’t ping the default gateway on the HP5820 from the ESXi DCUI.

Again, after some troubleshooting we discovered that Virtual Connect allows you to make changes to existing networks from ‘Single’ to ‘Multiple Networks’ with the HP BL460c G7 Blades running, but these changes don’t take effect until after a reboot.

Resolution: After any Virtual Connect change, reboot the blade

Lessons Learnt – HP P4500

When you upgrade the HP P4500 to 10GbE you add an additional 4GB RAM and the 10GbE card, which is fairly straightforward.  After the hardware installation we wanted to utilise the network cards as follows:

  • 2 x 10GbE in an Adaptive Load Balance bond for iSCSI Storage Traffic
  • 1 x 1GbE for iSCSI Management Traffic

To do this we needed to break the existing Adaptive Load Balance bond on the 1GbE connections.  After breaking the bond we had no link lights on the HP5820 or P4500.  We started to scratch our heads and jumped on the KVM to see what had happened.  We soon discovered that when the bond is broken, the network interfaces are placed into a ‘disabled’ state.

Resolution: Maintain KVM or iLO access when breaking an ALB bond

Next we placed an IP Address on the 1GbE interface so that we could continue to manage the array.  We enabled flow control on the 10GbE interfaces, and also jumbo frames as this was part of the design, and then finally created the ALB bond with the 10GbE interfaces, which had the default gateway applied to them.  We ran some simple ping tests to the Management IP Address, which resulted in a ping response; however, the 10GbE interfaces would not respond.  Not exactly where we wanted to be!

We broke the ALB bond on the 10GbE interfaces and we could then ping both the 1GbE and 10GbE interfaces.  This led to the discovery that you cannot use the 1GbE interfaces with the 10GbE interfaces on the same subnet.  We didn’t have time to test the 1GbE interfaces on a different subnet to see if this configuration would work.

Resolution: Disable the 1GbE interfaces
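
As jumbo frames were part of the design, it is also worth validating the end-to-end MTU from an ESXi host once the 10GbE interfaces are reachable.  On ESXi 5.x, vmkping with the don’t-fragment flag is a simple check (the target IP below is just an example):

    # Send an 8972-byte payload with the don't-fragment bit set to the iSCSI target
    vmkping -d -s 8972 192.168.10.10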

Now we had the 10GbE interfaces working using Adaptive Load Balancing, it was time to ensure that flow control was enabled.  We saw some very strange results: it was on for some interfaces and off for others!  A quick check of the HP5820 showed flow control was enabled on the correct ports.  We carried out a number of tests but still couldn’t get flow control to show as enabled:

  • Broke the ALB bond to manually enable flow control
  • Shut down the HP5820 interfaces and re-enabled them
  • Restarted the HP P4500

We found the resolution by mistake.  On one of the nodes we performed a shutdown then power on rather than a restart, and flow control was enabled.  It appears that it is only during the power-on operation that the P4500 negotiates flow control settings with the upstream switch.

Resolution: After enabling flow control, shut down and power on the P4500

What’s New? StoreVirtual VSA – LeftHand OS 11.0

It’s no secret that I’m a fan of the StoreVirtual, which you can see by the number of blog posts I have made about the subject.

HP have announced the next iteration of LeftHand OS, version 11.0, which has a number of enhancements that are covered by Kate Davis (@KateAtHP).  These include:

  • Smarter updates with Online Upgrade enhancements to identify updates per management group, plus you can choose to only download newer versions, hooray!
  • Faster performance for the command-line interface, improving response times for provisioning and decommissioning of storage, and retrieving info about management groups, volumes and clusters
  • Increased IO performance on VMware vSphere with support for ParaVirtualized SCSI Controller (PV SCSI) which provides more efficient CPU utilization on the host server
  • More control over application-managed snapshots for VMware and Microsoft administrators with quicker and simpler install and configuration process
  • Optimization of snapshot management to minimize the burden on the cluster when handling high-frequency snapshot schedules with long retention periods
  • Fibre Channel support for HP StoreVirtual Recovery Manager, so servers with FC connectivity to StoreVirtual clusters can recover files and folders from snapshots.
  • LeftHand OS 11.0 will be certified with at least one 10GbE card for use with StoreVirtual VSA at launch.

What I’m most excited about is the new Adaptive Optimization feature which is introduced in LeftHand OS 11.0.  Last night Calvin Zito (@HPStorageGuy) hosted a live podcast covering AO in more depth.  So without further ado:

  • Adaptive Optimization will be completely automated, with a simple on or off.
  • Adaptive Optimization will work automatically e.g. no schedule
  • Adaptive Optimization will use a ‘heat tier’ map to work out the hot areas and check the IO and CPU levels; if these are high then AO will not move the blocks, and will instead wait until IO and CPU levels have dropped before performing the region moves.
  • Adaptive Optimization will allow for support of two storage tiers and works at node level.
  • Adaptive Optimization will use a chunk size of 256K for region moves.
  • Adaptive Optimization will work on ‘thick’ and ‘thin’ volumes
  • Adaptive Optimization will work on all snapshots of a given volume.
  • Adaptive Optimization will be included for free for anyone who has a StoreVirtual VSA 10TB license already.
  • Adaptive Optimization will not be included for the new 4TB StoreVirtual VSA license
  • Adaptive Optimization will work with PCIe Flash, SSD, SAS and SATA drives.

During the podcast I asked a number of questions, one of which was about the potential to use HP StoreVirtual VSA with HP IO Accelerator cards, C7000 blades and local storage for VDI deployments.  The StoreVirtual representative (who was at LeftHand Networks before HP acquired them) mentioned this is one of the primary use cases for AO and they are going to be performing some benchmarks.

The StoreVirtual representative was also able to field a number of other questions about the StoreVirtual roadmap:

  1. T10 UNMAP will be coming, just not in LeftHand OS 11.0
  2. Changes will be made to LeftHand OS to allow manual adjustments to gateway connections for vSphere Metro Storage Clusters (see this blog post).
  3. Adaptive Optimization is likely to be coming to the physical StoreVirtual.

We also spoke about performance.  The StoreVirtual representative explained about all the lab tests they had performed, and that to get StoreVirtual working at its correct capacity you should try to keep the number of nodes per management group to 32 and have a maximum of 16 clusters.