Part 1 – How To Install & Configure HP StoreVirtual VSA On vSphere 5.1

The HP StoreVirtual VSA is sheer awesomeness.  It’s going to form the basis of all my storage for my home lab.

Before we move on, let’s examine why it’s so cool.

  • Runs as a VM on either Hyper-V, ESXi or VMware Player
  • Uses existing HP ProLiant or c-Class blade hardware to create a virtual iSCSI SAN
  • Thin Provisioning
  • Storage Clustering
  • Wide Strip RAID 5, 6 and 10
  • Network RAID 0, 5, 6, 10, 10+1 and 10+2
  • Automatic SAN Failover using Failover Manager
  • Asynchronous Replication including bandwidth throttling

That’s a large set of features, which is perfect for any lab environment. It will give me the ability to create a vSphere Metro Storage Cluster and to deploy Site Recovery Manager, as the VSA has a Storage Replication Adapter and is featured on the SRM Hardware Compatibility List.

The hardware requirements to run the HP StoreVirtual VSA are:

  • 1 x 2GHz CPU (reserved)
  • 3GB RAM  (reserved)
  • Gigabit Switch
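For reference, the CPU and memory reservations can be applied after the appliance is deployed either through Edit Settings > Resources in the vSphere Client, or scripted. Here is a minimal sketch using pyVmomi; the vCenter address, credentials and VM name (one of the VSAs described below) are assumptions for my lab, so adjust to taste:

```python
# Minimal sketch: reserve 2GHz of CPU and 3GB of RAM for a VSA VM.
# Host name, credentials and VM name are assumptions for a lab.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()          # lab only: skip certificate checks
si = SmartConnect(host="vcenter.lab.local", user="administrator",
                  pwd="password", sslContext=ctx)
content = si.RetrieveContent()
vms = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.VirtualMachine], True).view
vsa = next(vm for vm in vms if vm.name == "SATAVSA01")

spec = vim.vm.ConfigSpec(
    cpuAllocation=vim.ResourceAllocationInfo(reservation=2000),    # MHz
    memoryAllocation=vim.ResourceAllocationInfo(reservation=3072)) # MB
vsa.ReconfigVM_Task(spec)

Disconnect(si)
```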

Lab Storage Architecture

So what will be the architecture for my VSA? Well, the physical server is an HP ProLiant ML115 G5 with the following specifications:

  • 1 x AMD Quad Core CPU 2.2GHz
  • 5 x 1Gb NICs
  • 2 x 120GB SSD
  • 2 x 500GB 7.2K SATA

The HP ProLiant ML115 G5 boots ESXi 5.1 from USB; screenshot below.

You may be wondering whether I’m going to use hardware RAID on the HP ML115 G5. The simple answer is no. I can guess you are now thinking “you’re crazy, why would you do that?” Well, there is method to my madness.

Step 1 We have four hard drives in total, let’s call them SATAHDD01, SATAHDD02, SSDHDD01 and SSDHDD02.

Step 2 Create a Datastore called SATAiSCSI01 on SATAHDD01 using all the available space and install SATAVSA01 onto it.

Step 3 Create a Datastore called SSDiSCSI01 on SSDHDD01 using all the available space and install SSDVSA01 onto it.

Step 4 Create a Datastore called SATAiSCSI02 on SATAHDD02 using all the available space and install SATAVSA02 onto it.

Step 5 Create a Datastore called SSDiSCSI02 on SSDHDD02 using all the available space and install SSDVSA02 onto it.

Step 6 We configure SATAVSA01 and SATAVSA02 in Network RAID 10 giving us a highly available SATA clustered solution.

Step 7 We configure SSDVSA01 and SSDVSA02 in Network RAID 10 giving us a highly available SSD clustered solution.

This probably sounds a little complicated; I think in this situation a diagram is in order!
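For those who like their diagrams in text form, the whole layout boils down to this (the cluster labels are just names I’ve made up for illustration):

```python
# Steps 1-7 condensed: one physical disk -> one datastore -> one VSA,
# and pairs of VSAs joined together with Network RAID 10.
layout = {
    "SATA cluster (Network RAID 10)": {
        "SATAVSA01": {"datastore": "SATAiSCSI01", "disk": "SATAHDD01"},
        "SATAVSA02": {"datastore": "SATAiSCSI02", "disk": "SATAHDD02"},
    },
    "SSD cluster (Network RAID 10)": {
        "SSDVSA01": {"datastore": "SSDiSCSI01", "disk": "SSDHDD01"},
        "SSDVSA02": {"datastore": "SSDiSCSI02", "disk": "SSDHDD02"},
    },
}
```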

Cool, so without further delay, let’s start installing and configuring.

Installing HP StoreVirtual VSA

We need to download the software from here. You will need to register for an HP Passport sign-in to obtain the software, which is a quick and easy process.

Once we get to the download page you will see three choices; the one we want to select is ‘HP P4000 VSA 9.5 Full Evaluation SW for VMware ESX requires ESX servers’ (AX696-10536.zip).

Time to stick the kettle on for a fresh brew, unless you have a faster broadband connection than me!

Once downloaded, extract the files to a location on your laptop/desktop, fire up the vSphere Client and connect to vCenter or your ESXi host.

You would think that’s it, time to deploy the OVF? Nope. We need to browse into the extracted files until we get to HP_P4000_VSA_9.5_Full_Evaluation_SW_for_Vmware_ESX_requires_ESX_servers_AX696-10536\Virtual_SAN_Appliance_Trial\Virtual_SAN_Appliance and click autorun.exe.

This will launch a further self-extractor so that you can either deploy the HP StoreVirtual VSA via an OVF or connect directly to an ESXi host or vCenter using HP’s software.

Accept the License Agreement > Select Install VSA for VMware ESX Server and choose a further directory to extract the files to.

Once done, you will get a CMD prompt asking if you want to run the Virtual SAN Appliance installer for ESX. In this instance we are going to close this dialog box, because if we use the GUI to connect to an ESXi 5.1 host it won’t pass validation.

Instead we are going to deploy it as an OVF.

So first things first, we need to create a Datastore called SATAiSCSI01 which will contain the HP StoreVirtual VSA OVF’s virtual HDD. I’m assuming you know how to do this (a scripted alternative is sketched below), so we will move onto deploying the OVF. To do this click File from the vSphere Client > Deploy OVF Template.
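As an aside, the datastore creation can also be scripted rather than clicked through the Add Storage wizard. A rough pyVmomi sketch, assuming a single-host lab (esxi01.lab.local is a made-up name) and that the first unclaimed local disk is the one you want:

```python
# Rough sketch: create a VMFS datastore named SATAiSCSI01 on the first local
# disk that is not yet claimed by VMFS. Host name, credentials and disk choice
# are assumptions for my lab; verify the disk before running anything like this.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()
si = SmartConnect(host="esxi01.lab.local", user="root",
                  pwd="password", sslContext=ctx)
content = si.RetrieveContent()
host = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.HostSystem], True).view[0]

ds_system = host.configManager.datastoreSystem
free_disks = ds_system.QueryAvailableDisksForVmfs()    # local disks with no VMFS
disk = free_disks[0]                                    # e.g. SATAHDD01

# Ask the host for a create spec that uses all free space on the disk,
# then name the new volume and create it.
options = ds_system.QueryVmfsDatastoreCreateOptions(disk.devicePath)
spec = options[0].spec
spec.vmfs.volumeName = "SATAiSCSI01"
datastore = ds_system.CreateVmfsDatastore(spec)
print("Created datastore:", datastore.name)

Disconnect(si)
```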

Browse to the location ending in VSA_OVF_9.5.00.1215\VSA.ovf and click Next.

Click Next on the OVF Template Details screen and accept the EULA, followed by Next.  Give the OVF a Name, in this case SATAVSA01, and click Next.  I would recommend deploying the Disk Format as Thick Provision Eager Zeroed and clicking Next.  Next up, choose a Network Mapping and click Finish.

Top Tip: don’t worry if you cannot select the correct network mapping during deployment. Edit the VM settings and change it manually before powering it on.
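If you do need to fix the mapping afterwards, that change can also be scripted. A hedged pyVmomi sketch, assuming a standard vSwitch port group that I’ve called iSCSI01 (your port group name will differ):

```python
# Rough sketch: point every network adapter on SATAVSA01 at the port group
# "iSCSI01" before first power-on. VM name, port group and credentials are
# assumptions; this only covers standard vSwitch port groups.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()
si = SmartConnect(host="vcenter.lab.local", user="administrator",
                  pwd="password", sslContext=ctx)
content = si.RetrieveContent()
vms = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.VirtualMachine], True).view
vsa = next(vm for vm in vms if vm.name == "SATAVSA01")

changes = []
for dev in vsa.config.hardware.device:
    if isinstance(dev, vim.vm.device.VirtualEthernetCard):
        dev.backing = vim.vm.device.VirtualEthernetCard.NetworkBackingInfo(
            deviceName="iSCSI01")
        changes.append(vim.vm.device.VirtualDeviceSpec(
            operation=vim.vm.device.VirtualDeviceSpec.Operation.edit,
            device=dev))

vsa.ReconfigVM_Task(vim.vm.ConfigSpec(deviceChange=changes))
Disconnect(si)
```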

If all is going well you should see a ‘Deploying SATAVSA01’ pop up box.

On my physical vSphere 5.1 host, I have five NICs.  In this configuration we are going to assign one physical NIC to the management network and four physical NICs to the iSCSI network.  Hang on a minute Craig, why aren’t you using two physical NICs for the management network? Well, first of all this is my home lab, and I can easily connect to the HP Centralized Management Console (CMC) using the iSCSI Port Group on a VM; or, if I create an Access Control List on my HP v1910, I can access SATAVSA01, SATAVSA02, SSDVSA01 and SSDVSA02 from the Management network.  I have therefore chosen to give resiliency and bandwidth to the HP StoreVirtual VSA iSCSI connections.

This actually ties in quite well with the HP StoreVirtual best practice white paper, which states you should use two vNICs per VSA.  So when we are finished we will have:

  • SATAVSA01 with 2 x vNICs
  • SATAVSA02 with 2 x vNICs
  • SSDVSA01 with 2 x vNICs
  • SSDVSA02 with 2 x vNICs

vSphere will automatically load balance the VMs (SATAVSA01, SATAVSA02, SSDVSA01 and SSDVSA02) onto different physical NICs.  If you want to check this you can use ESXTOP, which I covered in this blog post.
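If you just want to sanity-check the virtual side without digging through ESXTOP, here is a quick pyVmomi sketch (VM names as per my lab, connection details made up) that lists each VSA’s network adapters, so you can confirm there are two per appliance and that they are VMXNET3:

```python
# Quick check (sketch): print each VSA's network adapters, their type and the
# port group they are attached to. Connection details are assumptions.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()
si = SmartConnect(host="vcenter.lab.local", user="administrator",
                  pwd="password", sslContext=ctx)
content = si.RetrieveContent()
vms = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.VirtualMachine], True).view

for name in ("SATAVSA01", "SATAVSA02", "SSDVSA01", "SSDVSA02"):
    vsa = next(vm for vm in vms if vm.name == name)
    for dev in vsa.config.hardware.device:
        if isinstance(dev, vim.vm.device.VirtualEthernetCard):
            # VMXNET3 adapters show up as VirtualVmxnet3 devices
            print(name, dev.deviceInfo.label, type(dev).__name__,
                  getattr(dev.backing, "deviceName", ""))

Disconnect(si)
```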

Cool, so we now have the HP StoreVirtual VSA with some virtual NICs, but we have no hard disk capacity.  We are going to edit SATAVSA01’s settings and click Add Hard Disk > Create a New Virtual Disk > Next.

We now have a choice on the Disk Provisioning, which one do we go for?

Thick Provision Lazy Zeroed: space is allocated by ESXi, however the zeros are not written to the underlying hard disk until that space is first used.  That means a write-time overhead; do we want this for our iSCSI SAN?

Thick Provision Eager Zeroed: space is allocated by ESXi and all zeros are written up front.  The best choice!

Thin Provision: a limited amount of space is allocated by ESXi and the disk automatically inflates when needed; again, zeros are not written to the underlying hard disk until that space is first used.  That means an overhead; do we want this for our iSCSI SAN?

In my case I have gone with the following settings.

On the Advanced Options screen we need to change the Virtual Device Node to SCSI (1:0), otherwise the hard drive space won’t be seen by the HP StoreVirtual VSA.

Click Finish; this time you will definitely be able to make a brew whilst vSphere provisions the hard disk.

Lastly, we need to repeat this process for SATAVSA02, SSDVSA01 and SSDVSA02.
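If you would rather script that repetition, here is a rough pyVmomi sketch that adds a second SCSI controller (bus 1) and an eager-zeroed disk at SCSI (1:0) to each appliance. The controller type, disk sizes, VM names and credentials are assumptions for my lab, not HP’s prescribed method:

```python
# Rough sketch: add a second SCSI controller and a Thick Provision Eager Zeroed
# disk at SCSI (1:0) to each VSA. Sizes below are placeholders; size each disk
# to the free space on its datastore.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

def add_vsa_disk(vm, size_gb):
    # New SCSI controller on bus 1; the negative key is a temporary reference
    # that the disk below uses as its controllerKey within the same reconfigure.
    ctrl = vim.vm.device.VirtualLsiLogicController()
    ctrl.key = -101
    ctrl.busNumber = 1
    ctrl.sharedBus = 'noSharing'
    ctrl_spec = vim.vm.device.VirtualDeviceSpec(
        operation=vim.vm.device.VirtualDeviceSpec.Operation.add, device=ctrl)

    disk = vim.vm.device.VirtualDisk()
    disk.capacityInKB = size_gb * 1024 * 1024
    disk.controllerKey = -101
    disk.unitNumber = 0                                  # SCSI (1:0)
    disk.backing = vim.vm.device.VirtualDisk.FlatVer2BackingInfo(
        diskMode='persistent', thinProvisioned=False, eagerlyScrub=True)
    disk_spec = vim.vm.device.VirtualDeviceSpec(
        operation=vim.vm.device.VirtualDeviceSpec.Operation.add,
        fileOperation=vim.vm.device.VirtualDeviceSpec.FileOperation.create,
        device=disk)

    return vm.ReconfigVM_Task(vim.vm.ConfigSpec(deviceChange=[ctrl_spec, disk_spec]))

ctx = ssl._create_unverified_context()
si = SmartConnect(host="vcenter.lab.local", user="administrator",
                  pwd="password", sslContext=ctx)
vms = si.RetrieveContent().viewManager.CreateContainerView(
    si.content.rootFolder, [vim.VirtualMachine], True).view
for name, size_gb in [("SATAVSA01", 450), ("SATAVSA02", 450),
                      ("SSDVSA01", 100), ("SSDVSA02", 100)]:
    add_vsa_disk(next(vm for vm in vms if vm.name == name), size_gb)

Disconnect(si)
```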

In the next blog post I promise we will start to power things on!

29 thoughts on “Part 1 – How To Install & Configure HP StoreVirtual VSA On vSphere 5.1”

  1. After reading your article and HP StoreVirtual VSA official documentation, especially HP StoreVirtual SAN User Guide, I must question your way of not using hardware RAID and creating one VSA on each physical disk.

    First, StoreVirtual SAN User Guide says “HP recommends using both disk RAID and Network RAID to insure high availability: Configure RAID 1+0, RAID 5, or RAID 6 within each storage system to ensure data redundancy”
    Secondly, each VSA license costs about 2400 EUR which in my opinion makes it quite an expensive alternative for using real hardware level RAID controller.

    PS. At least the latest StoreVirtual VSA setup offers a command line interface option which works nicely with vSphere 5.1. I haven’t tried the GUI option; it may work as well.

    1. Hi Cristian, great question and thanks for reading the blog. The answer is purely down to the fact that I don’t have hardware RAID in my lab. In a production environment you would utilise hardware RAID.

  2. In my situation, we have 3 VSAs, and I’ve assigned 2 LUNs to each; how do we present this in the CMC? The reason we have 2 LUNs is because 1 LUN is SAS and the other is SATA.

    1. Create your RAID groups on your local storage. In ESXi add a Datastore for each RAID group, then add a VMDK from each Datastore to the VSA.

    1. Local RAID on each ESXi Host, then install HP StoreVirtual. Create Datastore from the Local RAID, add VMDK to HP StoreVirtual and then use Network RAID 10. With this in place, you will have vMotion between hosts.

      1. Thanks. Could you specify:
        1. “add VMDK to HP StoreVirtual and then use Network RAID 10” You mean VMDKs of the VSA virtual machines from each host, is that right? – You add VMDKs to the VSA to present as iSCSI storage.
        2. What do you think, is it better to use this Network Storage (http://www.qnap.com/en/?lang=en&sn=822&c=351&sc=513&t=520&n=3344) in my infrastructure? – Bit of a broad question; I don’t know what your requirements are. The HP StoreVirtual is an enterprise SAN: it can replicate volumes synchronously, take snapshots, thin provision, scale etc., and it can be clustered for availability with Network RAID 10. So you need to decide whether a particular SAN/NAS is right for your environment, starting with the business requirements.

  3. Thanks, your blog has been very helpful but I am encountering persistent problems. I have 2 groups, each with 2 VSAs, ESXi 5.1, VSA version 10. After a seemingly random time one of the groups will fail and show latency >60. My schedules will then fall apart and I get a couple of thousand emails with errors. Any idea why this happens?

    1. Hi, I would make sure the StoreVirtuals are fully patched with 10.5. After this I would check your networking, as this is most likely the cause; check your physical NIC firmware on ESXi, as this can cause lots of random issues if it’s out of date. Also make sure your StoreVirtual NICs are using VMXNET3.

      1. Thanks, all patched already. I just installed some new NIC VIBs. As for VMXNET3, I previously switched to Flexible as I thought maybe they were the cause. Changed back to VMXNET3, hopefully this will help =D (I will report back)
        Also, do the second virtual NICs in each VSA actually do anything? Is there a good reason to have them?

  4. After following all of this advice I am still unfortunately getting randomly occurring E00060100 EID_LATENCY_STATUS_EXCESSIVE – latency = 61.706, exceeds 60.000. CRITICAL.
    It seems to happen after being up for about a week or so, and at very quiet times. I have to restart the Management Group to fix it. If anyone has any idea what could be causing this, or a way to figure out the cause, I would be so grateful to hear it. I do not have flow control enabled; could this be an issue? Surely the VSA should be easier than the P4000.

  5. Dear Craig,
    Thank you for your helpful article
    but I have a question here.
    how are we going to add a 450 GB HDD to SSDVSA01 and SSDVSA02 while they are on a datastore that is only 120 GB in size?
    Secondly, why are we using virtual device node SCSI (1:0) for the added HDDs, and will it be the same for all four VSAs?
    Thank You

    1. Hi Ramy, thanks for reading. The 450GB HDD is for the SATAVSA01 and another 450GB HDD is for SATAVSA02.

      Using SCSI (1:0) is a requirement to add a secondary SCSI controller which HP StoreVirtual uses to present storage.

  6. Hi Craig, I read your blog and was very interested.
    We are also planning to move to the HP VSA. I have read articles saying the VSA has some write issues. Do you have that experience?
    Janto

    1. Hi Janto, I haven’t experienced any write issues. The CMC will be able to show you any write latency, this would then be down to the underlying hardware.

  7. Hi Craig, after long consideration we purchased a StoreVirtual VSA setup in 2 HP boxes with 8 NICs in them. I have tweaked the network settings. The environment is not big, but very intensive, and the performance is as expected.
    My question is the following:
    According to HP best practice you need to set up iSCSI Port Binding. I have 1 SG300-28 switch in L2 mode where LAG is available, and I have set up a LAG for iSCSI.
    What is the difference, or performance difference, between iSCSI Port Binding and a LAG on the SG300?

    Janto

    1. Hi Janto, bit of a late response.

      With a LAG you are relying on a load balancing algorithm on the switch, based on the source and destination of the traffic, to distribute traffic between network interfaces. Usually this hashes on source and destination IP address, so once a connection is established it will use the same path over and over again. In certain situations you might find that all traffic goes over one interface, depending on the outcome of the hash.

      iSCSI Port Binding is performed at the vSphere level; the Native Multipathing (Round Robin) policy will send 1,000 IOPS down path A and then the next 1,000 IOPS down path B to effectively load balance traffic.

      I always go for simplicity unless there is a compelling reason not to: no LAG, and use iSCSI Port Binding.

  8. Thank you for the short explanation.

    This helped me a lot. The environment is already in production, but I need to go for simplicity. I thought the LAG was a good route in combination with the SG300. Port binding is not configured on the boxes.

    Janto

  9. Hi – I am trying to test this in a Hyper-V setup, however my Hyper-V nodes are nested VMs on a single ESXi 5.1 host. I have set up all the prerequisites as I normally would for a physical setup, but when I power on both of the VSAs I cannot see both of them; I can only see the VSA on the local machine where I have the CMC installed. Any tips?

  10. Hi all.

    I have limited SAN experience and want to upgrade a client site, which has vSphere hosts with local server storage, to move the guest VMs to a SAN for vMotion, load balancing features etc.
    Can the HP StoreVirtual 4530 help me to do this? i.e. link both hosts using the direct attached method, with gigabit Ethernet as backup, and add datastores from the 4530 to migrate the VM guests to?
    The information on the HP VSA isn’t that clear as to its applications, and there aren’t many videos on learning to use it in a VMware environment.
    I’m also guessing that a second 4530 provides clustering and SAN resiliency if the client requests it?
