How To: HP StoreVirtual LeftHand OS 12.0 With T10 UNMAP

HP have announced the release of LeftHand OS 12.0, which finally includes T10 UNMAP support, meaning we can now start thin and stay thin with StoreVirtual.

The feature enhancements are:

  • Space Reclamation
    • Reclaim space on thinly and fully provisioned volumes used by Windows Server 2012 or later, and vSphere 5 or later
  • StoreVirtual Multi-Path Extension Module (MEM) for vSphere
    • Provides data path optimization similar to StoreVirtual DSM for Microsoft MPIO
  • REST API for StoreVirtual
    • Enables automation and scripting of clusters, provisioning and volume management (see the sketch after this list)
  • StoreVirtual VSA Term License Management
    • Enforces term licensing for StoreVirtual VSA
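As a taster of the new REST API, below is a minimal sketch of authenticating and listing volumes with curl. To be clear, the address, port and /lhos/ endpoint paths are assumptions for illustration only, not confirmed syntax; check the StoreVirtual REST API reference that ships with LeftHand OS 12.0 for the actual endpoints.

  # Authenticate against the management group to obtain a token
  # (10.0.0.10, port 8081 and the /lhos/ paths are assumed placeholders)
  curl -k -X POST https://10.0.0.10:8081/lhos/credentials \
       -H 'Content-Type: application/json' \
       -d '{"user":"admin","password":"password"}'

  # Use the returned token to list volumes (again, an assumed path)
  curl -k -H 'Authorization: <token-from-above>' https://10.0.0.10:8081/lhos/volumes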

So let's take LeftHand OS 12.0 for a spin and test out T10 UNMAP.

Centralised Management Console Upgrade

The first step is to upgrade your Centralised Management Console to LeftHand OS 12.0.  Once done, you will be greeted by your new 12.0 screen.  First impressions: it is a lot faster to discover StoreVirtual nodes and access Management Groups. Well done, HP!

StoreVirtual Node Upgrade

Just a word of warning: I would always recommend performing upgrades out of hours, as when a StoreVirtual node reboots you will lose a percentage of your cluster's performance. For example, if you have two nodes in your cluster and you reboot one, you will lose approximately 50% of your performance.

The good news for those using physical StoreVirtual nodes is that HP have reduced the reboot time.

When you are ready to upgrade, the procedure is as slick as always.  Download your updates via the CMC and then apply them to your nodes one at a time.

Enable Space Reclamation

Space reclamation is enabled manually at the Management Group level.  Right-click your Management Group and select 'Enable Space Reclamation'.

Space Reclamation 01


Next we receive a warning that once space reclamation is enabled, you cannot downgrade to previous versions of LeftHand OS that do not support it.

Enter your Management Group name (in my case DC01-MG01), accept the disclaimer and enable Space Reclamation.

Space Reclamation 02

I suggest checking your Device and RAID status to ensure everything is OK before moving forward.  This is done by selecting your Cluster, followed by the Node, and then selecting Storage.  As you can see, I have Adaptive Optimisation enabled and my RAID Status is normal.

Space Reclamation 03

Space Reclamation Test

Space reclamation can be performed either on vSphere after a Storage vMotion has taken place or when files have been deleted from within a guest operating system.
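For the guest operating system route, a Windows Server 2012 volume presented directly over iSCSI sends delete notifications automatically, and a manual retrim can be triggered from an elevated command prompt. A minimal sketch, assuming the volume is mounted as E: (an example drive letter):

  rem 0 means delete notifications (TRIM/UNMAP) are enabled
  fsutil behavior query DisableDeleteNotify

  rem Manually retrim the free space on volume E:
  defrag E: /L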

In this test I’m going to perform a Storage vMotion from one datastore to another and then reclaim the dead space on the VMFS file system.
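Before kicking off the Storage vMotion, it is worth confirming from an ESXi host that the volume actually reports UNMAP support; the NAA identifier below is a placeholder for one of your own StoreVirtual devices:

  # Check the VAAI primitives for the device; Delete Status should read 'supported'
  esxcli storage core device vaai status get -d naa.6000eb3xxxxxxxxxxxxxxxxxxxxxxxxx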

The test is going to be run on the datastore DC02-NODR02, which contains a single virtual machine, with the following storage characteristics:

  • Datastore DC02-NODR02
    • Capacity 199.75GB
    • Provisioned Space 45.01GB
    • Free Space 177.29GB
    • Used Space 22.46GB

Space Reclamation 08

  • Volume – 17.50GB consumed space
    • 200GB Fully Provisioned with Adaptive Optimisation enabled

Space Reclamation 09

Next I’m going to perform a Storage vMotion of the virtual machine onto the datastore DC02-NODR03.  Time to grab a cup of tea before we move on and run vmkfstools to reclaim the space.

vmkfstools

Now that the Storage vMotion has finished, we need to run vmkfstools on the datastore to reclaim the space.  Jason Boche has an excellent blog post entitled ‘Storage: Starting Thin and Staying Thin with VAAI UNMAP‘ on the vmkfstools command.

On an ESXi Host that can see the datastore DC02-NODR02, I’m going to run the command ‘vmkfstools -y 90’.

Space Reclamation 10

Note: in a production environment you would reclaim the space out of hours and use 60% of the available space.
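For reference, the full sequence looks like this. On ESXi 5.0/5.1, vmkfstools -y is run from inside the datastore's mount point; on ESXi 5.5 it has been replaced by esxcli storage vmfs unmap, shown as an alternative below (the block count is just an example):

  # ESXi 5.0/5.1: reclaim 90% of the free space on the datastore
  cd /vmfs/volumes/DC02-NODR02
  vmkfstools -y 90

  # ESXi 5.5 alternative: issue UNMAP in batches of VMFS blocks
  esxcli storage vmfs unmap -l DC02-NODR02 -n 200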

If we now check the volume DC02-NODR02, its consumed space is 0.46MB, which is just the VMFS file system.

Space Reclamation 11


Monitoring Space Reclamation

HP have introduced some extra performance statistics that allow space reclamation to be monitored, including:

  • IOPS Space Reclamation
  • Latency UNMAP

These can be added to the Performance Monitor window so that you can verify the effect of space reclamation on your StoreVirtual nodes.

Space Reclamation 12

How To: Map HP StoreVirtual Volumes to Datastores

Problem Statement

You have created numerous datastores of the same size on your HP StoreVirtual and presented them to your ESXi Hosts.  However, you have since forgotten how the datastores map back to the volumes.

When you check the Runtime Name of your devices (Storage > Devices) to find out the LUN number, you see that every LUN is ‘0’, as per the screenshot below.

LUN 0

This can be confirmed in the HP StoreVirtual Centralised Management Console under Servers > Select Server > Volumes & Snapshots.

LUN 0 HP SV

Not very helpful at all!

Resolution

Each datastore has a unique iSCSI target string which can be used to identify how it maps to a volume.

To find out the target string, select the Datastore > Properties > Manage Paths.

Device Properties

At the bottom we can see the Target; this tells us the following details:

  • DC02-MG01
    • Denotes the Management Group the volume is in
  • 39 is the decimal representation of hexadecimal 27, which appears in the VMware NAA (thanks to Jonathan Reid for this information)
    • Denotes the unique target identifier for the volume
  • DC01-DR01SRM
    • Denotes the volume name on the HP StoreVirtual

Target Name

So we now know this datastore corresponds to the volume called DC01-DR01SRM in Management Group DC02-MG01.
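If you prefer the command line, the same information can be pulled from an ESXi shell. The NAA identifier below is a placeholder for one of your own devices, and the printf line simply demonstrates that hexadecimal 27 and decimal 39 are the same number:

  # Show the iSCSI target (IQN) behind each path of a device
  esxcli storage core path list -d naa.6000eb3xxxxxxxxxxxxxxxxxxxxxxxxx | grep 'Target Identifier'

  # Convert the hexadecimal identifier found in the NAA to decimal
  printf '%d\n' 0x27    # prints 39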

Lessons Learnt: HP StoreVirtual P4500 10 GbE Upgrade & Virtual Connect

Purpose

The purpose of this blog post is to give you an insight into some of the quirky behaviour that I experienced during an upgrade of HP infrastructure, specifically in relation to the HP StoreVirtual P4500 and Virtual Connect.

Background

The existing HP infrastructure spans a campus which has recently been upgraded to redundant 10Gbps links.

Site A contains:

  • 2 x HP LeftHand P4500 (before upgrade to LeftHand OS 11.5)
  • 1 x C7000 Blade Chassis with HP BL460c G7 blades

Site B contains:

  • 2 x HP LeftHand P4500 (before upgrade to LeftHand OS 11.5)
  • 1 x C3000 Blade Chassis with HP BL460c G6 blades
    • C3000 Blade Chassis to be disposed of

Site C contains:

  • HP Failover Manager for LeftHand

The underlying hypervisor is vSphere 4.1 which is to be upgraded once the hardware is in situ.

Design

The design was quite straightforward; to meet the customer requirements, we needed to:

  • Provide a 10 Gbps core network using redundant HP 5820 switches in an IRF stack
  • Introduce a vSphere Metro Storage Cluster on vSphere 5.5 U1
    • Ability to run workloads at either location
    • Provide operational simplicity
  • Introduce an additional C7000 Blade Chassis
  • Introduce HP BL460c Gen8 Blades for the new chassis
  • Introduce a performance tier for StoreVirtual using 4335
  • Introduce an archive tier for StoreVirtual using 4530
  • Upgrade existing P4500 to 10GbE

A logical overview of the solution is shown below.

Logical Overview

Pre-Requisites

As part of the pre-requisite work, the HP firmware had been upgraded so that all new components were at the same firmware and software levels.

Upgrade Purpose

The purpose of the upgrade was to introduce or change the following items before vSphere was upgraded to 5.5 U1:

  • HP 5820 Core
    • Change configuration to enable ESXi 4.1 Port Groups to be responsible for VLAN tagging
  • P4500 10GbE Cards
    • Existing 1GbE Cards to be used for iSCSI Management
    • New 10GbE Cards to be used for iSCSI Storage Traffic
  • Virtual Connect
    • Change configuration to enable ESXi 4.1 Port Groups to be responsible for VLAN tagging
  • vSphere
    • Update Port Groups so that ESXi is responsible for adding VLAN Headers

Lessons Learnt – Virtual Connect

On the new C7000 Chassis with HP BL460c Gen8 Blades, Virtual Connect was used to logically separate bandwidth for four different networks, each containing traffic for a single subnet.  A VLAN tag was assigned to each subnet, allowing ESXi 4.1 to apply the VLAN headers.

From the ESXi DCUI we were unable to ping from the VMkernel Management network to the HP 5820, which was acting as the default gateway.  However, with a laptop placed into an ‘access port’ on the same VMkernel Management VLAN, we could ping the default gateway on the HP 5820.
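For reference, the port group VLAN assignment and the ping test can both be driven from the ESXi Tech Support Mode shell; the port group name, VLAN ID and gateway address below are examples, not prescriptive values:

  # List vSwitches and port groups with their current VLAN IDs
  esxcfg-vswitch -l

  # Tag a port group with VLAN 100 (example values)
  esxcfg-vswitch -p 'Management Network' -v 100 vSwitch0

  # Test connectivity to the HP 5820 default gateway via the VMkernel stack
  vmkping 192.168.1.1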

After some troubleshooting we found that the issue was with Virtual Connect: if you define a network as a ‘single network’ with a VLAN tag assigned to it, Virtual Connect very kindly removes the VLAN header.

Resolution: Select Multiple Networks rather than a Single Network

The next issue we came across was with Virtual Connect on the existing C7000 with HP BL460c G7 Blades.  Virtual Connect would accept the changes to the Shared Uplink Set and Server Profiles so that we were now using ‘Multiple Networks’ with VLAN tags; however, we couldn’t ping the default gateway on the HP 5820 from the ESXi DCUI.

Again, after some troubleshooting, we discovered that Virtual Connect allows you to change existing networks from ‘Single’ to ‘Multiple Networks’ while the HP BL460c G7 Blades are running, but the changes don’t take effect until after a reboot.

Resolution: After any Virtual Connect change, reboot the blade

Lessons Learnt – HP P4500

When you upgrade the HP P4500 to 10GbE, you add an additional 4GB of RAM and the 10GbE card; fairly straightforward.  After the hardware installation we wanted to utilise the network cards as follows:

  • 2 x 10GbE in an Adaptive Load Balance bond for iSCSI Storage Traffic
  • 1 x 1GbE for iSCSI Management Traffic

To do this we needed to break the existing Adaptive Load Balance bond on the 1GbE connections.  After breaking the bond we had no link lights on the HP 5820 or the P4500.  We started to scratch our heads and jumped on the KVM to see what had happened.  We soon discovered that when the bond is broken, the network interfaces are placed into a ‘disabled’ state.

Resolution: Maintain KVM or iLO access when breaking an ALB bond

Next we placed an IP address on the 1GbE interface so that we could continue to manage the array.  We enabled flow control and jumbo frames on the 10GbE interfaces, as per the design, and then finally created the ALB bond on the 10GbE interfaces with the default gateway applied to it.  We ran some simple ping tests: the Management IP address responded, however the 10GbE interfaces would not.  Not exactly where we wanted to be!

We broke the ALB bond on the 10GbE interfaces and could then ping both the 1GbE and 10GbE interfaces.  This led to the discovery that you cannot use the 1GbE interfaces alongside the 10GbE interfaces on the same subnet.  We didn’t have time to test whether the 1GbE interfaces would work on a different subnet.

Resolution: Disable the 1GbE interfaces

Now that we had the 10GbE interfaces working using Adaptive Load Balancing, it was time to ensure that flow control was enabled.  We saw some very strange results: it was on for some interfaces and off for others!  A quick check of the HP 5820 showed flow control enabled on the correct ports.  We carried out a number of tests but still couldn’t get flow control to show as enabled:

  • Broke the ALB bond to manually enable flow control
  • Shut down the HP 5820 interfaces and re-enabled them
  • Restarted the HP P4500

We found the resolution by mistake.  On one of the nodes we performed a shutdown and power on rather than a restart, and flow control was enabled.  It appears that it is only during the power-on operation that the P4500 negotiates flow control settings with the upstream switch.

Resolution: After enabling flow control, shut down and power on the P4500
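For completeness, this is roughly what the switch side looks like: a sketch of the Comware interface configuration on the HP 5820, using an example port number, with display interface used afterwards to verify the state of the port. The jumbo frame size is taken from our design rather than being a required value:

  system-view
  interface Ten-GigabitEthernet1/0/5
   flow-control
   jumboframe enable 9216
  quit
  display interface Ten-GigabitEthernet1/0/5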

New: HP 3PAR StoreServ File Persona

For me, this is one of the best announcements at HP Discover: 3PAR StoreServ entering the world of ‘file’ level storage natively, removing the requirement for a StoreEasy gateway.

File Storage


Features

HP have confirmed that the following key features will work with ‘file’ level storage:

  • Thin Provisioning
  • Zero Detect
  • Adaptive and Dynamic Optimization
  • Adaptive Flash Cache (for reads)
  • Synchronous & Asynchronous replication via Remote Copy
  • Symantec & McAfee Anti-Virus integration
  • Data at Rest Encryption*

*Note: this is an optional license.

3PAR Dashboard

Within the 3PAR Dashboard is a section called ‘File Persona’ which will enable the management of file shares, virtual file servers and persona configuration.

File Persona

Support

The following features will be supported at the initial release:

  • SMB 1.0, 2.0 and 3.0
  • NFSv3 and v4
  • Active Directory, LDAP and local user authentication
  • DFS Namespace including Microsoft MMC support

Licence

To use ‘file’ level storage an extra license is required.  More on this to come when updates are released.

Arrays

To support ‘file persona’ the array needs extra cache, which comes with the ‘C’ type models.  This essentially means that you need to swap out your existing controllers or purchase a new array.

More information on the ‘C’ arrays can be found over at Patrick Terlisten’s blog, vCloudnine.de.

New: HP 3PAR StoreServ Management Console ‘SSMC’

Those of you who have used the 3PAR InForm Management Console know that it wasn’t exactly the best: screen refreshes took a while, and you could find yourself logged out of a StoreServ with the connection still showing as open.

HP have decided to give the 3PAR InForm Management Console a ‘facelift’; step forward HP’s new 3PAR StoreServ Management Console, AKA ‘SSMC’.

What’s New

  • New dashboard with the same look and feel as OneView
  • Management of file and block from same interface
  • Inbuilt System Reporter
  • Web based
  • Smart Search across all objects

So what does it look like? Well, below are a couple of screenshots to whet your appetite.

3PAR Dashboard

3PAR SmartSearch

HP is moving towards a similar management experience across storage, servers and networking, something that in my opinion has been long overdue.

Compatibility

SSMC will be compatible with 3PAR InForm OS 3.1.3 or above.
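If you are unsure what your array is running, the InForm CLI will tell you; a minimal check from a CLI session:

  # Displays the InForm OS version installed on the array
  showversion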

License

No licenses are required; this will be a free download.

Supported Operating Systems

SSMC will be available as a Windows-based install on Windows Server 2008 R2, 2012 or 2012 R2.  It will also be available for certain flavours of Linux.

Final Thoughts

Over the long term I expect that the SSMC will be integrated into the VSP for 3PAR, as this will give HP the ability to deliver software updates to the SSMC in a controlled fashion.

The Service Processor is still a separate entity; again, I expect this will be integrated further as the dot releases become available.