LeftHand OS 10.0 – Active Directory Integration

I upgraded the vmFocus lab last night to LeftHand OS 10.0; as with anything new and shiny, I felt an overwhelming urge to try it!

So what’s new? Well, according to the HP Storage Blog, the following:

  • Increased Windows integration – We now offer Active Directory integration which will allow administrators to manage user authentication to HP StoreVirtual Storage via the Windows AD framework. This simplifies management by bringing SAN management under the AD umbrella. With 10.0 we are also providing support for Windows Server 2012 OS.
  • Improved performance – The engineering team has been working hard with this release and one of the great benefits comes with the performance improvements. LeftHand OS version 10.0 has numerous code enhancements that will improve the performance of HP StoreVirtual systems in terms of application performance as well as storage related functions such as snapshots and replication. The two major areas of code improvements are in multi-threading capabilities and in internal data transmission algorithms.
  • Increased Remote Copy performance – You’ll now experience triple the performance through optimization of the Remote Copy feature, which can reduce your backup times by up to 66%.
  • Dual CPU support for VSA – In this release, the VSA software will now ship with 2 vCPUs enabled. This capability, in addition to multi-threading advancements in 10.0, enhances performance by up to 2x for some workloads. As a result of this enhancement, we will now also support running 2 vCPUs in older versions of VSA. So if you’ve been dying to try it, go ahead. Our lab tests with SAN/iQ 9.5 and 2 vCPUs showed up to a 50% increase in performance.
  • Other performance improvements – 10.0 has been re-engineered to take advantage of today’s more powerful platforms, specifically to take better advantage of multi-core processors, and also improves the performance of volume resynchronization and restriping and merging/deleting snapshot layers.

Active Directory Integration

The first thing I wanted to get up and running was Active Directory integration. So I went ahead and created a Security Group called CMC_Access.

CMC SG

Naturally, we need a user to be in a Security Group, so I created a service account called CMC and popped this into the CMC_Access Security Group.

CMC User

Into the CMC, oops, I mean its new name, which is the HP LeftHand Centralized Management Console. Expand your Management Group, Right Click Administration and Select Configure External Authentication.

CMC External Authentication 1

Awesome, we now need to configure the details as follows (there’s a quick way to sanity-check these values after the list):

  • Bind User Name: the format is username@domain, so in my case it’s cmc@vmfocus.local
  • Bind Password: your password, so in my case it’s ‘password’
  • Active Directory Server: IP Address 192.168.37.201 (which is VMF-DC01); the port is 389
  • Base Distinguished Name: this is DC=vmfocus,DC=local
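
Before trusting my typing, I like to sanity-check these values outside the CMC. Here’s a minimal sketch using Python’s third-party ldap3 module (an assumption on my part, nothing HP ships); all the values are my lab’s, so swap in your own:

```python
# Hypothetical sanity check of the AD bind details before typing them into
# the CMC. Assumes the third-party 'ldap3' module (pip install ldap3);
# all values below are my lab's.
from ldap3 import Server, Connection, ALL

BIND_USER = 'cmc@vmfocus.local'     # Bind User Name in username@domain format
BIND_PASSWORD = 'password'          # Bind Password
AD_SERVER = '192.168.37.201'        # VMF-DC01
AD_PORT = 389
BASE_DN = 'DC=vmfocus,DC=local'     # Base Distinguished Name

server = Server(AD_SERVER, port=AD_PORT, get_info=ALL)
# auto_bind=True raises an exception immediately if the bind fails,
# which is exactly what we want from a pre-flight check.
conn = Connection(server, user=BIND_USER, password=BIND_PASSWORD, auto_bind=True)

# Search for the CMC_Access group under the Base DN; finding it proves the
# bind credentials, the server details and the Base DN in one go.
conn.search(BASE_DN, '(&(objectClass=group)(cn=CMC_Access))', attributes=['member'])
print(conn.entries)
conn.unbind()
```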

CMC External Authentication 2

Hit ‘Validate Active Directory’ and you should be golden.

CMC External Authentication 3

Hit Save; don’t worry, it will take a while.

TOP TIP: If you’re not sure what your Base Distinguished Name is, launch ADSI Edit and that will soon tell you.
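
Or, if you’d rather skip ADSI Edit altogether, the Base DN of a default AD domain partition is just the DNS domain name chopped up on the dots. A quick illustration:

```python
# The Base DN of a default AD domain partition is the DNS domain name
# with each label prefixed by 'DC=' and joined with commas.
domain = 'vmfocus.local'
base_dn = ','.join(f'DC={label}' for label in domain.split('.'))
print(base_dn)   # -> DC=vmfocus,DC=local
```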

Next we need to Right Click on Administration and choose New Group

CMC External Authentication 4

Give your Group a name and a Description; I’m going to roll with cmc_access (I know, original) and they are going to have Full rights. We then need to click on Find External Group

CMC External Authentication 5

In the ‘Enter AD User Name’ field, enter the Bind User Name from the External Authentication, so in my case this is cmc@vmfocus.local, and hit OK

CMC External Authentication 6

If all has gone to plan, you should see your Active Directory Group, select this and hit OK

CMC External Authentication 7

It should appear in the Associate an External Group dialogue box; hit OK

CMC External Authentication 8

Then log out and log back in again as your Active Directory user, making sure that you use the format name@domain

CMC External Authentication 9

One of the odd things that I have noticed is that it takes an absolute age to log in. Not sure why this is, but I’m sure HP will fix it in an upcoming release!

Part 5 – Configuring Site Recovery Manager (SRM) With HP StoreVirtual VSA

This is the final post on my blog series Configuring Site Recovery Manager (SRM) with HP StoreVirtual VSA.

If you have missed any of the previous posts, they are available here:

Part 1 – Configuring Site Recovery Manager (SRM) With HP StoreVirtual VSA

Part 2 – Configuring Site Recovery Manager (SRM) With HP StoreVirtual VSA

Part 3 – Configuring Site Recovery Manager (SRM) With HP StoreVirtual VSA

Part 4 – Configuring Site Recovery Manager (SRM) With HP StoreVirtual VSA

As promised, we are going to failover, reprotect and failback. Is it slightly wrong that I’m excited about this blog post?

Pre Failover

As we are good boy/girl scouts, we wouldn’t just jump straight in and try to failover, would we? No, never. Instead we are going to check everything is ‘tickety boo’ with our environment. This means going over the following checklist:

  • Check CMC to ensure no degraded volumes
  • Check CMC to ensure that remote copy is working correctly
  • Check vCenter to ensure that you have connectivity between sites
  • Check SRM Array Managers and refresh your Devices
  • Check Protection Groups
  • Check Recovery Plan

Once you have gone over the above list, the last thing to do is test and clean up.
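
Most of that checklist is eyeballing the CMC and SRM, but the connectivity item is easy to script. Here’s a rough sketch; the hostnames are hypothetical stand-ins for my lab kit, and the ports are the standard ones (3260 for iSCSI, 443 for vCenter):

```python
# Hypothetical pre-failover connectivity check. The hostnames/IPs are
# made-up stand-ins; the ports are the standard ones (iSCSI 3260,
# vCenter HTTPS 443).
import socket

CHECKS = [
    ('Production VSA (iSCSI)', '192.168.37.10', 3260),           # hypothetical
    ('Production vCenter',     'vmf-vc01.vmfocus.local', 443),   # hypothetical
    ('DR vCenter',             'vmf-vc02.vmfocus.local', 443),   # hypothetical
]

def port_open(host, port, timeout=3):
    """True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

for label, host, port in CHECKS:
    state = 'OK' if port_open(host, port) else 'FAILED'
    print(f'{label:<25} {host}:{port} {state}')
```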

Looks like we are cooking on gas.

Failover

We have two types of failover, planned and unplanned.

Planned Failover is when you know of impending works which will make your Production site inoperable for a period of time; this could be planned maintenance work or site relocation. Imagine you are building a new Head Office: you configure all of your network, storage and vSphere infrastructure and then just use SRM to failover over a weekend.

Unplanned Failover is when you earn your ‘bacon’ as a vSphere Administrator, as you have a man down situation and no Production site left.

In this instance we are going to do a planned failover. As you can see, VMF-TEST01 is running in our Production site.

VMF-TEST01 is in a good place, as it’s being replicated to our DR site

Let’s get it on, into SRM, then click on Recovery Plans, then onto Recovery Steps (so that we can see what’s going on) and then click on Recovery!

The Red Stop Sign cracks me up; it’s SRM’s way of saying ‘are you really sure you want to do this?’ We are sure, so we want to put a tick in ‘I understand that this process will permanently alter the virtual machines and infrastructure of both the protected and recovery datacenters.’

We are going to perform a ‘Planned Migration’ and then click Next

We are now at the point of no return, click Start

OK, what’s going on? Well, let’s have a closer look.

Step 1 SRM takes a snapshot of the replicated volume PR_SATA_TEST01 before it tries to failover; this is for safety.

Step 2 SRM shuts down the VMs at the Protected Site, in this case VMF-TEST01, to avoid any data loss

Step 3 SRM restores any hosts from standby at the DR Site

Step 4 SRM takes another snapshot and synchronizes the storage

Step 5 Epic Fail!

OK, what happened? Well, we have the error message ‘Error: Failed to promote replica devices. Failed to promote replica device ‘1266d2456f’’. This means that for some reason SRM wasn’t able to promote the DR volume DR_SATA_TEST01 from Read to Read/Write. To be perfectly honest, I have tried many times to get this to work and for some reason it always fails on this step. Strange really, as when we perform a test it takes a snapshot of the volume DR_SATA_TEST01 and promotes this to Read/Write without any issues. So in this situation we are going to need to give SRM a hand.

Go into the CMC and expand your Management Groups and Clusters until you get this view.

We are going to Right Click DR_SATA_TEST01 and Select Failover/Failback Volume

Click Next and then Select ‘to fail over the primary volume, PR_SATA_TEST01, to this remote volume, DR_SATA_TEST01’ and click Next

Good news that we haven’t got any iSCSI sessions in place, so we can click Next

Double check your provisioning is correct, and then click Finish

Awesome, we should now have the volume DR_SATA_TEST01 acting as a Primary Read/Write Volume, you can tell this as it should be in dark blue

I think we should try the Recovery again now, let’s hop back into SRM and click on Recovery.

Select the ‘I understand that this process will permanently alter the virtual machines and infrastructure of both the protected and recovery datacenters.’ tick box again and click Next and Start.

Hopefully you should see that SRM jumps straight to Step 8, Change Recovery Site Storage to Writeable and this time it has been a Success!

Time for a quick brew, whilst SRM finishes off bringing VMF-TEST01 up at our DR site.

Boom, the man from Del Monte, he say yes!

So let’s see what’s going on, shall we? First of all, at our Production site: as you can see, SRM now knows that VMF-TEST01 is not live.

At DR, VMF-TEST01 is up and running and its IP Address has been successfully changed.

The question is whether we can ping it by DNS, as this should have been updated too.

Boom, all working as expected.
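
If you’d rather script that check than flick between consoles, here’s a quick sketch (the FQDN is an assumed name for my lab VM):

```python
# Hypothetical post-failover check: has DNS been updated for VMF-TEST01,
# and does it answer a ping? The FQDN is an assumption: adjust for your lab.
import socket
import subprocess

HOSTNAME = 'vmf-test01.vmfocus.local'   # assumed FQDN

addr = socket.gethostbyname(HOSTNAME)   # raises socket.gaierror if DNS is stale
print(f'{HOSTNAME} resolves to {addr}')

# '-n 1' is Windows ping syntax; use '-c 1' on Linux.
result = subprocess.run(['ping', '-n', '1', HOSTNAME], capture_output=True)
print('ping OK' if result.returncode == 0 else 'ping FAILED')
```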

Last of all, let’s check CMC to see what’s going on with our HP StoreVirtual VSA.

Now you may be thinking it’s not really the best situation to be in, as we have two Primary Volumes, PR_SATA_TEST01 and DR_SATA_TEST01. But don’t fear: SRM has changed PR_SATA_TEST01 to read-only access for ESXi02

Also, if we check the Datastores on ESXi02, we see that PR_SATA_TEST01 has disappeared.

Cool, I think we are now in a position to Reprotect.

Reprotect

Reprotection reverses the process, so that the DR site becomes the protected site and Production becomes the DR site. Simples.

So let’s jump back into SRM and click Reprotect

Select ‘I understand that this operation cannot be undone.’ and click Next

Let’s click Start and watch the process in action.

OK, what’s going on then Craig?

Step 1 SRM realises it can’t have two Primary Volumes, demotes PR_SATA_TEST01 to a Remote Volume and then deletes it

Step 2 SRM takes a snapshot of DR_SATA_TEST01 before it starts the reverse protection, as a safety measure

Step 3 SRM takes a further snapshot and invokes the replication schedule

Step 4 SRM cleans up the storage to make sure everything is ‘tickety boo’

If everything was a success you should see that your Recovery Plan has gone back to normal.

From HP StoreVirtual VSA perspective everything looks good, DR is the Primary Volume and Production is the Remote Volume

Right then, I think we should think about failing back. Before we do so, we need to run over that checklist again.

  • Check CMC to ensure no degraded volumes
  • Check CMC to ensure that remote copy is working correctly
  • Check vCenter to ensure that you have connectivity between sites
  • Check SRM Array Managers and refresh your Devices
  • Check Protection Groups
  • Check Recovery Plan

Once you have gone over the above list, the last thing to do is test and clean up.

Good times, everything was a success; I think we are ready to failback.

Failback

Failback is actually just a Recovery as far as SRM is concerned, so I won’t bother waffling on about it again. Let’s hit Recovery

I wanted to show you that this time round, SRM was able to promote the Remote Volume to Primary Read/Write without any issues.

Nice one, we have another success and VMF-TEST01 is running back at Production.

Let’s do the obligatory ping test via DNS, again success.

Quick look at our DR site and you can see SRM now sees VMF-TEST01 as being protected

Lastly, a look at CMC to check on our HP StoreVirtual VSA. As you can see, we still have two Primary copies, but again DR_SATA_TEST01 is now read only

A couple of final thoughts for you.

  1. It’s quite normal to see a ‘ghost’ datastore at either your Production or DR site after you have failed over or back. Just perform a ‘Rescan’ and it will disappear (this is scriptable too, see the sketch below).
  2. Check your path policies for the Datastores, as these don’t always go back to your preferred choice.
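
For point 1, here’s a sketch of that rescan using the pyVmomi SDK (pip install pyvmomi); the vCenter name, credentials and host name are all lab assumptions, so swap in your own:

```python
# Hypothetical storage rescan after failover/failback using the pyVmomi SDK.
# vCenter name, credentials and host name are all lab assumptions.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()            # lab only: skip cert checks
si = SmartConnect(host='vmf-vc01.vmfocus.local',  # hypothetical vCenter
                  user='administrator@vsphere.local',
                  pwd='password',
                  sslContext=ctx)

content = si.RetrieveContent()
view = content.viewManager.CreateContainerView(content.rootFolder,
                                               [vim.HostSystem], True)
for host in view.view:
    if host.name.lower().startswith('esxi02'):    # hypothetical host name
        storage = host.configManager.storageSystem
        storage.RescanAllHba()                    # detect new/removed LUNs
        storage.RescanVmfs()                      # pick up VMFS changes
        print(f'Rescanned {host.name}')

view.Destroy()
Disconnect(si)
```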

Thanks for reading what probably feels like War and Peace on SRM; I hope you agree it’s an amazing product that makes our lives as IT administrators that much easier!

SRM & P4000 – Error: Failed To Promote Replica Devices

‘Error: Failed to promote replica devices. Failed to promote replica device ‘1266d2456f’’. This means that for some reason SRM wasn’t able to promote your replica volume from Read to Read/Write, which in P4000 terms is Remote to Primary. To be perfectly honest, I have tried many times to get this to work and for some reason it always fails on this step. Strange really, as when you perform a test failover on the same volume, it takes a snapshot of the Read (Remote) volume and promotes it to Read/Write (Primary) without any issues.

So in this situation we are going to need to give SRM a hand.

Go into the CMC and expand your Management Groups and Clusters until you get this view.

We are going to Right Click DR_SATA_TEST01 and Select Failover/Failback Volume

Click Next and then Select ‘to fail over the primary volume, PR_SATA_TEST01, to this remote volume, DR_SATA_TEST01’ and click Next

Good news that we haven’t got any iSCSI sessions in place, so we can click Next

Double check your provisioning is correct, and then click Finish

Awesome, we should now have the volume DR_SATA_TEST01 acting as a Primary Read/Write Volume, you can tell this as it should be in dark blue

I think we should try the Recovery again now, let’s hop back into SRM and click on Recovery.

Select the ‘I understand that this process will permanently alter the virtual machines and infrastructure of both the protected and recovery datacenters.’ tick box again and click Next and Start.

Hopefully you should see that SRM jumps straight to Step 8, Change Recovery Site Storage to Writeable and this time it has been a Success!

Boom, the man from Del Monte, he say yes!

P4000: An Error Occurred While Reading The Upgrade Configuration File

With any device, it is important to keep up to date with the latest firmware the vendor can offer.

I always check the manufacturers’ websites on a monthly basis to see if anything is new. With this in mind, I was trying to update my P4000 StoreVirtual VSA today and kept getting the following error message:

‘An error occurred while reading the upgrade configuration file.  If the file was from a web connection, click Try Download Again, otherwise recreate your media image’.

A quick check in Help > Preferences > Upgrades showed that the Download Directory location didn’t look quite right.

So I entered a trailing backslash (\) at the end of the Download Directory location

I clicked OK and started the download again; voila, this time it worked!

Part 3 – Automating HP StoreVirtual VSA Failover

In part two we installed and configured HP StoreVirtual VSA on vSphere 5.1; in this blog post we are going to look at automating failover.

I think a quick recap is in order. If you remember, we received a warning when adding SATAVSA01 and SATAVSA02 to the Management Group SATAMG01, which was:

‘to continue without installing a FOM, select the checkbox below acknowledging that a FOM is required to provide the highest level of data availability for a 2 storage system management group configuration. Then click next’.

This error message is about quorum, a term that I’m sure a lot of you are familiar with from working with Windows clusters. Each VSA runs what’s known as a ‘manager’, which is really a vote. When we have two VSAs we have two votes, which is a tie. Let’s say that one VSA has an issue and goes down; how does the remaining VSA know that? Well, it doesn’t. It could be that both VSAs are up and they have lost the network between them. This then results in a split-brain scenario.

This is where the Failover Manager comes into play. So what exactly is a Failover Manager? Well, it’s a specialized version of the SAN/iQ software which runs under ESXi, VMware Player or the elephant in the room (Hyper-V). Its purpose in life is to be a ‘manager’ and maintain quorum by introducing a third vote, ensuring access to volumes in the event of a StoreVirtual VSA failure. The Failover Manager is downloaded as an OVF, and the good news is we already have a copy which we have extracted.
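
To make the voting maths concrete, here’s a toy illustration of the majority logic (my own sketch, not HP’s code):

```python
# Toy illustration of the quorum arithmetic: a management group keeps
# serving data only while a strict majority of managers (votes) agree.
def has_quorum(votes_up: int, total_votes: int) -> bool:
    """A strict majority is required; a tie is a loss of quorum."""
    return votes_up > total_votes / 2

# Two VSAs, no FOM: lose one VSA (or just the link between them) and the
# survivor holds 1 of 2 votes, not a majority, so it cannot safely carry
# on alone. That's the split-brain problem.
print(has_quorum(1, 2))   # False

# Two VSAs plus a FOM: lose a VSA and the survivor plus the FOM still hold
# 2 of 3 votes, so quorum holds and the volumes stay online.
print(has_quorum(2, 3))   # True
```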

A few things to note about the Failover Manager.

  • Do not install the Failover Manager on a StoreVirtual VSA you want to protect, as if you have a failure the Failover Manager will lose connection.
  • Ideally it should be installed at a third physical site.
  • Bandwidth requirements to the Failover Manager should be 100 Mb/s
  • Round trip time to the Failover Manager should be no more than 50ms (see the quick check after this list)
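
That round trip requirement is worth checking before you commit to a third site. Here’s a rough sketch that times TCP connects to a host at the candidate site; the IP and port are assumptions, and any listening service there will do for a ballpark figure:

```python
# Rough check that a candidate FOM site is within the 50ms round trip
# requirement, timed via TCP connects. IP and port are assumptions.
import socket
import time

TARGET = '10.37.40.1'   # assumed: a host at the candidate FOM site
PORT = 22               # assumed: any open TCP port will do
SAMPLES = 5

rtts = []
for _ in range(SAMPLES):
    start = time.monotonic()
    try:
        with socket.create_connection((TARGET, PORT), timeout=2):
            rtts.append((time.monotonic() - start) * 1000)   # milliseconds
    except OSError:
        print('connection failed')

if rtts:
    avg = sum(rtts) / len(rtts)
    verdict = 'OK for a FOM' if avg <= 50 else 'too slow for a FOM'
    print(f'average RTT {avg:.1f} ms -> {verdict}')
```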

In this environment we will be installing the Failover Manager on the local storage of ESXi02 and placing it into a third logical subnet.  I think a diagram and a reminder of the subnets are in order.

Right then, let’s crack on shall we.

Installing Failover Manager

We are going to deploy SATAFOM onto ESXi02’s local hard drive, which is called ESXi02HDD (I should get an award for my naming conventions).

The Failover Manager, or FOM from now on, is an OVF, so we need to deploy it from the vSphere Client. To do this click File > Deploy OVF Template.

Browse to the location of your extracted HP StoreVirtual VSA files ending in FOM_OVF_9.5.00.1215FOM.ovf

Click Next on the OVF Template Details screen and Accept the EULA followed by Next. Give the OVF a Name, in this case SATAFOM, and click Next. When you get to the storage section you need to select the local storage on an ESXi Host which is NOT running your StoreVirtual VSA. In this case it is ESXi02HDD

Click next and select your Network Mapping and click Finish.

TOP TIP: Don’t worry if you cannot select the correct network mapping during deployment; edit the VM settings and change it manually before powering it on.

If all is going well you should see a ‘Deploying SATAFOM’ pop up box.

Whilst the FOM is deploying let’s talk networking for a minute.

On ESXi02, I have a subnet called FOM which is on VLAN 40. We are going to pop the vNICs of SATAFOM into this. The HP v1910 24G is the layer three default gateway between all the subnets and is configured with VLAN Access Lists to allow the traffic to pass (I will do a VLAN Access List blog in the future!)

Awesome, let’s power the bad boy on.

We need to use the same procedure to set the IP addresses on the FOM as we did on the VSA. Hopefully you should be cool with this, but if you need a helping hand refer back to How To Install & Configure HP StoreVirtual VSA On vSphere 5.1

The IP addresses I’m using are:

  • eth0 – 10.37.40.1
  • eth1 – 10.37.40.2

Failover Manager Configuration

Time to fire up the HP Centralized Management Console (CMC) and add the IP Address into Find Systems.

Log in to view SATAFOM and it should appear as follows.

Let’s Right Click SATAFOM and ‘Add to an Existing Management Group’ SATAMG01

Crap, Craig that didn’t work, I got a popup about a Virtual Manager. What’s that all about?

Now’s as good a time as any to talk about two other ways to failover the StoreVirtual VSA.

Virtual Manager: this is automatically added to a Management Group that contains an even number of StoreVirtual VSAs. In the event you have a VSA failure, you can start the Virtual Manager manually on the VSA which is still working. Does it work? Yes, like a treat, but you will have downtime until the Virtual Manager is started, and you need to stop it manually when the failed VSA is returned to action. Would I use it? If you know your networking ‘onions’, you should be able to configure the FOM in a third logical site to avoid this scenario.

Primary Site: in a two manager configuration you can designate one manager (StoreVirtual VSA) as the Primary Site, so if the secondary VSA goes offline you maintain quorum. The question is, why would you do this? Honestly, I don’t know, because unless you have some proper ninja skills, how do you know which VSA is going to fail? Also you need to manually recover quorum, which isn’t for the faint-hearted. My recommendation, simples: avoid.

OK, back on topic. We need to remove the Virtual Manager from SATAMG01, which is straightforward: Right Click > Delete Virtual Manager.

Let’s try adding SATAFOM back into Management Group SATAMG01. Voila, it works! You might get a ‘registration is required’ notice; we can ignore that as I’m assuming you have licensed your StoreVirtual VSA.

(I know I have some emails, they are to do with feature registration and Email settings)

Let’s Try & Break It!

Throughout this configuration we have used the following logic:

  • SATAHDD01 runs SATAVSA01
  • SATAHDD02 runs SATAVSA02
  • SATAVSA01 and SATAVSA02 are in Management Group SATAMG01
  • SATAVSA01 and SATAVSA02 have volumes called SATAVOL01 and SATAVOL02 in Network RAID 10

In my lab I have a VM called VMF-DC01 which, you guessed it, is my Domain Controller; it resides on SATAVOL02.

Power Off SATAVSA01

We are going to power off SATAVSA01, which will mimic it completely failing; no shutdown guest for us! Fingers crossed, we should still maintain access to VMF-DC01.

Crap, we lost connection to VMF-DC01 for about 10 seconds and then it returned. Why’s that, Craig, you ask?

Well, if you remember, all the connections go to a Virtual IP Address, in this case 10.37.10.1. This is just a mask: even though the connections hit the VIP, they are directed to one of the StoreVirtual VSAs, in this case SATAVSA01.

So when we powered off SATAVSA01, all the iSCSI connections had to be dropped and then re-presented via the VIP to SATAVSA02.
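
Curious how I put a number on those 10 seconds? Here’s a rough sketch that polls the VIP’s iSCSI port once a second and times the gap (the VIP is my lab’s; Ctrl+C to stop):

```python
# Rough measurement of the iSCSI outage during a VSA failover: poll the
# Virtual IP's iSCSI port once a second and time how long it stayed
# unreachable. Run it, power off a VSA, and watch.
import socket
import time

VIP = '10.37.10.1'
ISCSI_PORT = 3260

outage_start = None
while True:
    try:
        with socket.create_connection((VIP, ISCSI_PORT), timeout=1):
            if outage_start is not None:
                print(f'VIP back after {time.monotonic() - outage_start:.1f}s')
                outage_start = None
    except OSError:
        if outage_start is None:
            outage_start = time.monotonic()
            print('VIP unreachable...')
    time.sleep(1)
```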

Power Off SATAVSA02

To prove this, let’s power SATAVSA01 back on and wait for quorum to be recovered. OK, let’s power off SATAVSA02 this time and see what happens.

I was browsing through folders and received a momentary pause of about one second which to be fair on a home lab environment is pretty fantastic.

So what have we learned? We can have Network RAID 1 with Hardware RAID 0 and make our infrastructure fully resilient. To sum up, I refer back to my opening statement: the HP StoreVirtual VSA is sheer awesomeness!