Part 2 – How To Install & Configure HP StoreVirtual VSA On vSphere 5.1

Great news, it’s time to fire the HP StoreVirtual VSAs up! Once they have booted, we need to log in and configure the IP address of each SAN.

To do this, go to the console screen, type start and press Enter.

Press Enter to log in.

TOP TIP: to navigate around, use Tab, not the arrow keys.

Tab down to Network TCP/IP Settings and press Enter.

Tab to eth0 and press Enter.

Type in your hostname (in my case it’s SATAVSA01.vmfocus.local) followed by your IP information.

Once done, go over to OK and then log out.

Rinse and repeat for eth1, obviously giving it a different IP address!

Then continue for any more HP StoreVirtual VSAs you have in your environment.

In my lab, I have four in total, which are:

  • SATAVSA01
  • SATAVSA02
  • SSDVSA01
  • SSDVSA02

In fact, let’s show you a picture along with my IP address schema.

Now you are probably thinking, ‘That’s great Craig, but where do I actually do my SAN configuration?’ Well, for that we need to use the HP P4000 Centralized Management Console.

HP P4000 Centralized Management Console

The HP P4000 Centralized Management Console, or CMC as it will now be known, is where all the magic happens! OK, well, not magic; it’s where we configure all the settings for the HP StoreVirtual VSA.

In the previous blog post Part 1 – How To Install & Configure HP StoreVirtual VSA On vSphere 5.1 we downloaded the HP StoreVirtual VSA software.  In the extracted package we also have the CMC, which we need to install to be able to manage the VSAs.

Jump onto the laptop/server you want to install the CMC onto and navigate to the folder which contains CMC_Installer\CMC_9.5.00.1215_Installer and run this.

I tend to install the CMC onto the server running vCenter; it just makes life easier having everything in one place.

It takes a short while to initialize, but we should see this screen soon.

Hit OK, then follow the onscreen prompts: you know, the usual Next, accept the EULA, Next, OK.

Awesome, so hopefully, you should see the CMC installing.

Launch the CMC and voila, we have a screen full of, err, nothing!

It actually makes sense, as we need to tell the CMC to find the VSAs we installed via their IP addresses. To do this, click Add and enter your IP address.  Mine are:

  • 10.37.10.11
  • 10.37.10.13
  • 10.37.10.15
  • 10.37.10.17

If all goes well, you should see your VSAs being populated.

Click on Add and, hold on a minute, where have they gone? Don’t worry, you can see them under Available Systems on the left-hand side.

Let’s crack on and start configuring.  Select Getting Started from the left-hand panel and choose 2. Management Groups, Clusters and Volumes Wizard:

Hit Next; we want to create a new Management Group. But what is a ‘management group’? Well, it’s a logical grouping of VSAs which are clustered to provide scalability and resilience.  Let’s say we had one SAN with RAID 10, which is a common deployment.  SANs are built for resilience e.g. dual PSUs, dual disk controllers, multiple NICs per controller.  If you lose a disk controller, then even though the SAN continues to work you take a massive performance hit, as the SAN will go ‘aha, I don’t have a redundant disk controller, therefore I will turn caching off and write every block directly to disk’.

If we have two VSAs or P4000s within a Management Group that are clustered running Network RAID 10, we can avoid this situation.  Pretty neat, eh?

The first thing we want to do is create a new Management Group and click Next.

Then give the Management Group a name; for me it’s going to be SATAMG01, as I’m going to have two Management Groups, one for SATA and one for SSD.  Then select the VSAs which will be held by the Management Group.  I have chosen SATAVSA01 and SATAVSA02.  We now see an additional box appear with a warning:

‘to continue without installing a FOM, select the checkbox below acknowledging that a FOM is required to provide the highest level of data availability for a 2 storage system management group configuration. Then click next’.

Crikey, that’s a bit of a warning, what does it mean? Well, essentially it’s about quorum, a term that I’m sure a lot of you are familiar with from working with Windows clusters.  Each VSA runs what’s known as a ‘manager’, which is really a vote.  When we have two VSAs we have two votes, which is a tie.  Let’s say that one VSA has an issue and goes down, how does the remaining VSA know that? Well it doesn’t; it could be that both VSAs are up and they have simply lost the network between them.  This then results in a split-brain scenario.  The good news is that if this occurs, both VSAs go into a ‘holding state’ with no LUN access until either the original VSA comes back online or someone from IT performs manual intervention.

Don’t worry, we are going to introduce a Failover Manager in a third logical site; I will go over the prerequisites for this in an upcoming blog post.
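If you like to see the maths, here is a minimal Python sketch of the voting logic (purely illustrative, not how SAN/iQ implements it) showing why two managers alone cannot break a tie, and why a third vote from a FOM fixes it.

```python
def has_quorum(votes_seen, total_votes):
    """A node keeps serving storage only if it can see a strict majority of votes."""
    return votes_seen > total_votes / 2

# Two-node management group: each VSA runs one manager (one vote).
# If the link between them fails, each side only sees its own vote.
print(has_quorum(votes_seen=1, total_votes=2))  # False -> both VSAs pause LUN access, no split brain

# Add a Failover Manager (FOM) as a third vote in a separate logical site.
print(has_quorum(votes_seen=2, total_votes=3))  # True  -> surviving VSA + FOM keep the volumes online
print(has_quorum(votes_seen=1, total_votes=3))  # False -> the isolated VSA stays paused
```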

On the next page we need to enter an ‘Administrative User’, which will propagate down to the VSAs so that if we try to access them these are the credentials we need to supply.  Next, pop in the details of an NTP server or manually set the time.  My recommendation is always to go for an NTP server, preferably one of your DCs, so that you’re never more than 15 minutes out of sync, which can cause dramas!

Onto the DNS information now: pop in your DNS Domain Name, DNS Suffix and DNS Server.

Onto the Email Server settings now: enter your email server IP, Sender Address and Recipient Address.

We now need to ‘Create a Cluster’, which is two or more VSAs working in unison to provide a highly available and resilient storage infrastructure.  In this case we are going to select Standard Cluster and click Next.

Give the Cluster a name, I’m going to roll with SATACL01 and click Next.

This is where things start to get interesting: we now need to ‘Assign a Virtual IP’ to the cluster SATACL01. What does this do? Well, all communication for the VSAs goes via the Virtual IP Address, allowing every block of information to be written to both VSAs simultaneously.  How cool is that?

Click Add and then Next.

We are now in a position to Create a Volume.  Enter the name, in my case SATAVOL01, and choose a Data Protection Level.  If we choose Network RAID-0 we have no protection, so it’s best to select Network RAID-10 (2-Way Mirror), then enter your Reported Size.

I have always thought that the Reported Size is quite strange: why would you want a reported size which is greater than your physical space available? Essentially it’s a poor relation of thin provisioning, so the ‘storage team’ can say, ‘Hey VMware team, look, we have created you a 10TB volume’ when in fact they only have 5TB of actual space.
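To put some numbers on it, here is a quick illustrative Python sketch; the 10TB/5TB figures are just the example above, and halving the raw capacity for Network RAID-10 mirroring is the only rule applied.

```python
def usable_tb(raw_tb_per_vsa, vsa_count, network_raid_10=True):
    """Network RAID-10 mirrors every block across the cluster, so usable space is roughly half the raw total."""
    raw_total = raw_tb_per_vsa * vsa_count
    return raw_total / 2 if network_raid_10 else raw_total

reported_tb = 10                                        # what the volume advertises to vSphere
actual_tb = usable_tb(raw_tb_per_vsa=5, vsa_count=2)    # 5 TB of real, mirrored space

print(f"Reported: {reported_tb} TB, actually available: {actual_tb} TB")
if reported_tb > actual_tb:
    print("Over-provisioned - keep an eye on real consumption before the cluster fills up")
```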

Select either Full or Thin Provisioning and click Finish.  Time to make a cup of tea as this is going to take a while.  Once done you should end up with a screen like this.

Note, you will get a warning about licensing; this is expected.  We are ‘cooking on gas’.  Now it’s time to present the volumes to VMware.

vSphere iSCSI Configuration

For the iSCSI configuration we are going to head into VMware to grab the initiator IQNs.  For completeness, I’m going to cover this as well!

Head into vCenter, then onto your ESXi host, select the Configuration tab, then select Storage Adapters followed by Add and choose ‘Add Software iSCSI Adapter’.

Now that’s done, we need to bind our VMkernel port group to iSCSI.  To do this, click your new iSCSI Software Adapter and click Properties.  This essentially says ‘hey, I’m going to use this special VMkernel port for iSCSI traffic’.

Select the Network Configuration tab and click Add.

Then select your iSCSI Port Group and click OK.

Hopefully, once done it looks a bit like this.

Next we need to enter the VSA Virtual IP Addresses we want to connect to under the Dynamic Discovery tab.  Again, it should resemble something like this.

The last bit of work before we head back over to the CMC is to grab the vSphere iSCSI initiator IQN (the initiator node name) for each host.  Good news, this is the page we find ourselves at, so make a note of what yours are; if you would rather script it, see the sketch after the list below.

Mine are:

  • ESXi02 – iqn.1998-01.com.vmware:ESXi02-0f9ca9cc
  • ESXi03 – iqn.1998-01.com.vmware:ESXi03-36a2ee1c
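If you would rather script the IQN gathering and dynamic discovery, here is a hedged pyVmomi (Python) sketch.  The vCenter name, credentials and the 10.37.10.x virtual IP are placeholders from my lab, and the VMkernel port binding itself is still done in the GUI as above.

```python
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()                      # lab only, skips certificate checks
si = SmartConnect(host="vcenter.vmfocus.local",             # placeholder vCenter and credentials
                  user="administrator", pwd="password", sslContext=ctx)
content = si.RetrieveContent()
hosts = content.viewManager.CreateContainerView(content.rootFolder,
                                                [vim.HostSystem], True).view

for host in hosts:
    storage = host.configManager.storageSystem
    for hba in host.config.storageDevice.hostBusAdapter:
        if isinstance(hba, vim.host.InternetScsiHba):
            # The initiator node name (IQN) is what gets pasted into the CMC 'New Server' dialog
            print(host.name, hba.device, hba.iScsiName)
            # Dynamic discovery: point the software adapter at the cluster virtual IP (placeholder)
            target = vim.host.InternetScsiHba.SendTarget(address="10.37.10.10", port=3260)
            storage.AddInternetScsiSendTargets(iScsiHbaDevice=hba.device, targets=[target])

Disconnect(si)
```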

CMC iSCSI Configuration


We are at the final hurdle! Expand your Management Group, then select Servers and click Tasks > New Server.

Complete the details and paste in the Initiator Node Name.  Rinse and repeat for the servers you want to present your volumes to.

TOP TIP: I recommend you set up a Server Cluster, which is a feature of most SANs.  It enables you to group common ‘hosts’ together so that, rather than having to present a volume to each server/host individually, you present it to the cluster, saving you admin time (which I’m all for, as we can fit in more cups of tea).

Back to Tasks, then select New Server Cluster and enter the Cluster Name and Description. Once done it should resemble this.  I know, great imagination Craig, ‘ESXiCL01’.

Last of all, we need to ‘assign’ the cluster ESXiCL01 access to the Volumes.  To do this, go to Volumes and Snapshots, right-click the volume you want to present to your server and click ‘Assign and Unassign Server’.  Place a tick in Assigned.

A quick jump over to vCenter and a quick ‘Rescan All’ of our Storage Adapters should reveal the new volumes.
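For reference, the rescan can be scripted too; a hedged pyVmomi sketch (placeholder vCenter details again) that does the equivalent of ‘Rescan All’ on every host:

```python
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()
si = SmartConnect(host="vcenter.vmfocus.local", user="administrator",
                  pwd="password", sslContext=ctx)            # placeholders
content = si.RetrieveContent()

for host in content.viewManager.CreateContainerView(content.rootFolder,
                                                    [vim.HostSystem], True).view:
    storage = host.configManager.storageSystem
    storage.RescanAllHba()      # rescan every storage adapter for new LUNs
    storage.RescanVmfs()        # pick up any new VMFS volumes
    print(f"Rescanned {host.name}")

Disconnect(si)
```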

Boom, there we have it! In the next blog post we can crack on and install the Failover Manager and perform some testing!

Part 1 – How To Install & Configure HP StoreVirtual VSA On vSphere 5.1

The HP StoreVirtual VSA is sheer awesomeness.  It’s going to form the basis of all my storage for my home lab.

Before we move on, let’s examine why it’s so cool.

  • Runs as a VM on either Hyper-V, ESXi or VMware Player
  • Use existing HP ProLiant or c-Class blade hardware to create a virtual iSCSI SAN.
  • Thin Provisioning
  • Storage Clustering
  • Wide Stripe RAID 5, 6 and 10
  • Network RAID 0, 5, 6, 10, 10+1 and 10+2
  • Automatic SAN failover using the Failover Manager
  • Asynchronous replication including bandwidth throttling

That’s a large number of features, which is perfect for any lab environment.  It will give me the ability to create a vSphere Metro Storage Cluster and to deploy Site Recovery Manager, as it has a Storage Replication Adapter and is featured on the SRM Hardware Compatibility List.

The hardware requirements to run the HP StoreVirtual VSA are as follows (see the sketch after the list for applying the reservations):

  • 1 x 2GHz CPU (reserved)
  • 3GB RAM  (reserved)
  • Gigabit Switch
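Since both the CPU and RAM are reserved, here is a hedged pyVmomi (Python) sketch of applying those reservations to a deployed VSA VM.  The vCenter details are placeholders, and 2000 MHz / 3072 MB simply mirror the figures in the list above.

```python
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()
si = SmartConnect(host="vcenter.vmfocus.local", user="administrator",
                  pwd="password", sslContext=ctx)                       # placeholders
content = si.RetrieveContent()
vms = content.viewManager.CreateContainerView(content.rootFolder,
                                              [vim.VirtualMachine], True).view
vsa = next(vm for vm in vms if vm.name == "SATAVSA01")                  # pick one of the VSAs

spec = vim.vm.ConfigSpec(
    cpuAllocation=vim.ResourceAllocationInfo(reservation=2000),         # 2 GHz, in MHz
    memoryAllocation=vim.ResourceAllocationInfo(reservation=3072))      # 3 GB, in MB
vsa.ReconfigVM_Task(spec=spec)

Disconnect(si)
```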

Lab Storage Architecture

So what will be the architecture for my VSA? Well, the physical server is an HP ProLiant ML115 G5 with the following specifications:

  • 1 x AMD Quad Core CPU 2.2GHz
  • 5 x 1GbE NICs
  • 2 x 120GB SSD
  • 2 x 500GB 7.2K SATA

The HP ProLiant ML115 G5 boots ESXi5 from USB.  Screenshot below from VMware.

You may be questioning whether I’m going to use hardware RAID on the HP ML115 G5? Well, the simple answer is no.  I guess you are now thinking ‘you are crazy, why would you do that?’ Well, there is method to my madness.

Step 1 We have four hard drives in total, let’s call them SATAHDD01, SATAHDD02, SSDHDD01 and SSDHDD02.

Step 2 Create a Datastore called SATAiSCSI01 on SATAHDD01 using all the available space and install the SATAVSA01 onto this.

Step 3 Create a Datastore called SSDiSCSI01 on SSDHDD01 using all the available space and install the SSDVSA01 onto this.

Step 4 Create a Datastore called SATAiSCSI02 on SATAHDD02 using all the available space and install the SATAVSA02 onto this.

Step 5 Create a Datastore called SSDiSCSI02 on SSDHDD02 using all the available space and install the SSDVSA02 onto this.

Step 6 We configure SATAVSA01 and SATAVSA02 in Network RAID 10 giving us a highly available SATA clustered solution.

Step 7 We configure SSDVSA01 and SSDVSA02 in Network RAID 10 giving us a highly available SSD clustered solution.

This probably sounds a little complicated; I think in this situation a diagram is in order!
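In the meantime, here is the same layout as a quick Python sketch (names exactly as in the steps above), a text-only stand-in for the diagram:

```python
# Physical disk -> datastore -> VSA -> Network RAID 10 pairing, as per Steps 1-7
lab_layout = {
    "SATAHDD01": {"datastore": "SATAiSCSI01", "vsa": "SATAVSA01", "pair": "SATA Network RAID 10"},
    "SATAHDD02": {"datastore": "SATAiSCSI02", "vsa": "SATAVSA02", "pair": "SATA Network RAID 10"},
    "SSDHDD01":  {"datastore": "SSDiSCSI01",  "vsa": "SSDVSA01",  "pair": "SSD Network RAID 10"},
    "SSDHDD02":  {"datastore": "SSDiSCSI02",  "vsa": "SSDVSA02",  "pair": "SSD Network RAID 10"},
}

for disk, layer in lab_layout.items():
    print(f"{disk} -> {layer['datastore']} -> {layer['vsa']} ({layer['pair']})")
```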

Cool, so without further delay, let’s start installing and configuring.

Installing HP StoreVirtual VSA

We need to download the software from here. You will need to register for an HP Passport sign-in to obtain the software, which is a quick and easy process.

Once we get to the download page you will get three choices; the one we want to select is ‘HP P4000 VSA 9.5 Full Evaluation SW for VMware ESX requires ESX servers (AX696-10536.zip)’.

Time to stick the kettle on for a fresh brew, unless you have a faster broadband connection than me!

Once downloaded, extract the files to a location on your laptop/desktop and fire up vSphere Client and connect to vCenter or your ESXi Host.

You would think that’s it, time to deploy the OVF? Nope. We need to browse into the extracted files until we get to HP_P4000_VSA_9.5_Full_Evaluation_SW_for_Vmware_ESX_requires_ESX_servers_AX696-10536\Virtual_SAN_Appliance_Trial\Virtual_SAN_Appliance and click autorun.exe.

This will launch a further self-extractor so that you can either deploy the HP StoreVirtual VSA via an OVF or connect directly to an ESXi host or vCenter using HP’s software.

Accept the License Agreement > select Install VSA for VMware ESX Server and choose a further directory to extract the files to.

Once done, you will get a CMD prompt asking if you want to run the Virtual SAN Appliance installer for ESX.  In this instance we are going to close this dialog box, as if we use the GUI to connect to an ESXi 5.1 host it won’t pass validation.

Instead we are going to deploy it as an OVF.

So first things first, we need to create a datastore called SATAiSCSI01 which will contain the HP StoreVirtual VSA virtual hard disk.  I’m assuming you know how to do this, so we will move on to deploying the OVF.  To do this, click File in the vSphere Client > Deploy OVF Template.

Browse to the location ending in \VSA_OVF_9.5.00.1215\VSA.ovf and click Next.

Click Next on the OVF Template Details screen, accept the EULA and click Next.  Give the OVF a name, in this case SATAVSA01, and click Next.  I would recommend deploying the Disk Format as Thick Provision Eager Zeroed and clicking Next.  Next up, choose a Network Mapping and click Finish.

Top Tip, don’t worry if you cannot select the correct network mapping during deployment. Edit the VM settings and change it manually before powering it on.

If all is going well you should see a ‘Deploying SATAVSA01’ pop up box.

On my physical vSphere 5.1 host, I have five NICs.  In this configuration we are going to assign one physical NIC to the management network and four physical NICs to the iSCSI network.  Hang on a minute Craig, why aren’t you using two physical NICs for the management network? Well, first of all this is my home lab, and I can easily connect to the HP Centralized Management Console using the iSCSI Port Group on a VM, or if I create an Access Control List on my HP v1910 I can access SATAVSA01, SATAVSA02, SSDVSA01 and SSDVSA02 from the management network.  Therefore I have chosen to give resiliency and bandwidth to the HP StoreVirtual VSA iSCSI connections.

This actually ties in quite well with the HP StoreVirtual best practice white paper, which states you should use two vNICs per VSA.  So when we are finished we will have:

  • SATAVSA01 with 2 x vNICs
  • SATAVSA02 with 2 x vNICs
  • SSDVSA01 with 2 x vNICs
  • SSDVSA02 with 2 x vNICs

vSphere will automatically load balance the VMs (SATAVSA01, SATAVSA02, SSDVSA01 and SSDVSA02) onto different physical NICs.  If you want to check this you can use ESXTOP, which I covered in this blog post.

Cool, so we now have the HP StoreVirtual VSA with some virtual NICs, but no hard disk capacity.  We are going to edit the SATAVSA01 settings and click Add > Hard Disk > Create a New Virtual Disk > Next.

We now have a choice of disk provisioning; which one do we go for?

Thick Provision Lazy Zeroed – space is allocated by ESXi, however the zeros are not written to the underlying hard disk until that space is first used, meaning we carry a zeroing overhead at write time.  Do we want this for our iSCSI SAN?

Thick Provision Eager Zeroed – space is allocated by ESXi and all the zeros are written up front.  The best choice!

Thin Provision – a limited amount of space is allocated by ESXi and the disk will automatically inflate when needed; again, zeros are not written to the underlying hard disk until that space is first used, meaning we carry an overhead.  Do we want this for our iSCSI SAN?
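To summarise the trade-off, here is a tiny illustrative Python sketch of the three options:

```python
# Illustrative comparison of the vSphere disk provisioning options
provisioning = {
    "Thick Provision Lazy Zeroed":  {"space_allocated_up_front": True,  "zeroed_up_front": False},
    "Thick Provision Eager Zeroed": {"space_allocated_up_front": True,  "zeroed_up_front": True},
    "Thin Provision":               {"space_allocated_up_front": False, "zeroed_up_front": False},
}

for name, props in provisioning.items():
    first_write_overhead = not props["zeroed_up_front"]
    print(f"{name}: allocated up front={props['space_allocated_up_front']}, "
          f"zeroed up front={props['zeroed_up_front']}, first-write zeroing overhead={first_write_overhead}")
```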

In my case I have gone with the following settings.

On the Advanced Options screen we need to change the Virtual Device Node to SCSI (1:0), otherwise the hard drive space won’t be seen by the HP StoreVirtual VSA.

Click Finish; this time you will definitely be able to make a brew whilst we wait for vSphere to provision the hard disk.
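If you prefer to script the disk addition, here is a hedged pyVmomi (Python) sketch that adds an eager-zeroed disk on a new controller at SCSI (1:0).  The vCenter details, controller type and the 400GB capacity are placeholders; check them against your own environment before running anything like this.

```python
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()
si = SmartConnect(host="vcenter.vmfocus.local", user="administrator",
                  pwd="password", sslContext=ctx)                           # placeholders
content = si.RetrieveContent()
vms = content.viewManager.CreateContainerView(content.rootFolder,
                                              [vim.VirtualMachine], True).view
vsa = next(vm for vm in vms if vm.name == "SATAVSA01")

# New SCSI controller on bus 1 so the disk lands at SCSI (1:0); LSI Logic is an assumption
controller = vim.vm.device.VirtualDeviceSpec(
    operation=vim.vm.device.VirtualDeviceSpec.Operation.add,
    device=vim.vm.device.VirtualLsiLogicController(
        key=-101, busNumber=1,
        sharedBus=vim.vm.device.VirtualSCSIController.Sharing.noSharing))

# Thick Provision Eager Zeroed disk; 400 GB is a placeholder capacity
disk = vim.vm.device.VirtualDeviceSpec(
    operation=vim.vm.device.VirtualDeviceSpec.Operation.add,
    fileOperation=vim.vm.device.VirtualDeviceSpec.FileOperation.create,
    device=vim.vm.device.VirtualDisk(
        key=-102, controllerKey=-101, unitNumber=0,
        capacityInKB=400 * 1024 * 1024,
        backing=vim.vm.device.VirtualDisk.FlatVer2BackingInfo(
            diskMode="persistent", thinProvisioned=False, eagerlyScrub=True,
            fileName="")))                           # empty fileName = place it with the VM

vsa.ReconfigVM_Task(spec=vim.vm.ConfigSpec(deviceChange=[controller, disk]))
Disconnect(si)
```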

Lastly, we need to repeat this process for SATAVSA02, SSDVSA01 and SSDVSA02.

In the next blog post I promise we will start to power things on!

vSphere 5.1 – My Take On What’s New/Key Features

With the release of vSphere 5.1, it’s been tough keeping up with all the tweets and information from VMworld 2012 San Francisco.

With the plethora of data, I thought it would be handy to blog about the key features that will have the biggest impact on my everyday life.

Licensing

vRAM – It’s gone, licensing is back to per physical processor.

vSphere Essentials Plus – Now includes vSphere Storage Appliance and vSphere Replication.

vSphere Standard – Now includes vSphere Storage Appliance, vSphere Replication, Fault Tolerance, Storage vMotion and vCenter Operations Manager Advanced.

Beneath The Hood

Monster Virtual Machines

Virtual machines can now have the following hardware features:

  • 1TB RAM
  • 64 vCPUs
  • > 1 million IOPS per VM

Wonder if I will continue to have those ‘we need a physical SQL server’ conversations?

This is made possible by virtual machine hardware version 9.

vMotion

vMotion no longer requires shared storage.  This has been achieved by combining vMotion and Storage vMotion into a single operation.  So when a VM is moved, it moves the memory, processing threads and disk over the network to its target.

Now, what is really cool is that it maintains the same performance levels as the older vMotion with shared storage!

Note, I recommend that you use multiple NICs for vMotion, as per my post High Availability for vMotion.

vSphere Replication

Enables virtual machine data to be replicated over LAN and WAN.  Previously, to achieve 15-minute asynchronous replication you needed sub-2ms latency.

vSphere Replication integrates with Microsoft’s Volume Shadow Copy Service (VSS), ensuring that applications such as Exchange and SQL will be in a consistent state if DR is invoked.

vSphere Replication can be used for up to 500 virtual machines.

The initial seed can be done offline and taken to the destination to save bandwidth and time.

VMware Tools

No more downtime to upgrade VMware Tools.

vSphere Web Client

This is going to be the tool for administering vCenter.  It has some pretty cool features like vCenter Inventory Tagging, which means you can apply metadata to items and then search on them, e.g. group applications together for a particular department or vendor.

We now have the ability to customise the web client to give it ‘our look and feel’.

Always getting called away when you are halfway through adding a vNIC to a VM? Well, we can now pause this and it appears under ‘Work in Progress’, so we never forget to complete an action.

For the pub quiz fans, you can have 300 concurrent Web Client users.

Link Aggregation Control Protocol Support

Used to ‘bind’ several physical connections together for increased bandwidth and link-failure resilience (think Cisco Port Channel Groups), this is now a supported feature in vSphere 5.1.

Memory Overhead Reduction

Every task undertaken by vSphere has an overhead; whether it is a vCPU or a vNIC, it requires some attached memory.  A new feature allows a vSphere host which is under memory pressure to claim up to 1GB of this overhead memory back.

Latency Sensitivity Setting

vSphere 5.1 makes it easier to support low-latency applications (something which I have encountered with Microsoft Dynamics AX).  The ability to ‘tweak’ latency for an individual VM is great.

Storage

We now have 16Gb Fibre Channel support and the iSCSI storage driver has been upgraded, giving some very impressive increases in performance.

Thin provisioning has always been an issue unless your array supported T10 UNMAP.  With vSphere 5.1 a new virtual disk format has been introduced, the ‘sparse virtual disk’, AKA the SE sparse disk.  Its major function is to reclaim previously used space in the guest OS.  This feature alone is worth the upgrade.

What is VAAI?

This is more of a post for myself, going over VAAI before I take my VCP 5 exam soon, so I wanted to get some pixels on the screen about it.

VAAI stands for vSphere Storage APIs for Array Integration.  It has been around since vSphere 4.1 and is used to ‘pass’ storage-related functions to the array rather than having them performed by ESXi.

Some of the benefits from using VAAI are:

Hardware Accelerated Full Copy – tasks such as powering on VMs or cloning VMs are more efficient.

Hardware Accelerated Block Zeroing – if you create a disk using Thick Provision Lazy Zeroed, then the array takes responsibility for writing the zeros instead of ESXi.

Thin Provisioning – perhaps the most important one.  ESXi 5 knows that a LUN has been thin provisioned and can reclaim dead space.  Why is this important? Well, imagine you put a 4GB ISO file onto a production VM to install a third-party piece of software. After the software has been installed, you delete the ISO file, but how does the array know that the 4GB of space can be reclaimed? The operating system doesn’t tell ESXi 5 or the array to reclaim the space as it’s no longer used; instead this comes from the T10 UNMAP command.

How do we know if our SAN is VAAI supported? If you go to Storage > Devices and look at the Hardware Acceleration column, you are looking for ‘Supported’.
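If you want to check this programmatically rather than eyeballing the column, here is a hedged pyVmomi (Python) sketch (placeholder vCenter details) that reads the vStorageSupport flag on each device:

```python
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()
si = SmartConnect(host="vcenter.vmfocus.local", user="administrator",
                  pwd="password", sslContext=ctx)                    # placeholders
content = si.RetrieveContent()

for host in content.viewManager.CreateContainerView(content.rootFolder,
                                                    [vim.HostSystem], True).view:
    for lun in host.config.storageDevice.scsiLun:
        # Reports 'vStorageSupported', 'vStorageUnsupported' or 'vStorageUnknown'
        print(host.name, lun.displayName, getattr(lun, "vStorageSupport", "unknown"))

Disconnect(si)
```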

We commonly use HP SANs, and different levels of SAN management software will have VAAI support; for example, the HP P4000 needs SAN/iQ version 9 or above to support VAAI (9.5 is out).

Naturally, as we are all IT professionals, we regularly update the firmware on all of our devices!