vSphere 5.1 – My Take On What’s New/Key Features

With the release of vSphere 5.1, it’s been tough keeping up with all the tweets and information from VMworld 2012 San Francisco.

With the plethora of data, I thought it would be handy to blog about the key features that will have the biggest impact on my everyday life.

Licensing

vRAM – It’s gone; licensing is back to per physical processor.

vSphere Essentials Plus – Now includes vSphere Storage Appliance and vSphere Replication.

vSphere Standard – Now includes vSphere Storage Appliance, vSphere Replication, Fault Tolerance, Storage vMotion and vCenter Operations Manager Advanced.

Beneath The Hood

Monster Virtual Machines

Virtual machines can now have the following hardware maximums:

1TB RAM
64 vCPUs
> 1 Million IOPS per VM

I wonder if I will continue to have those ‘we need a physical SQL server’ conversations?

This is made possible by the new virtual machine format, hardware version 9.
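To use the new maximums the VM needs to be upgraded to hardware version 9, which you can do from the Web Client once the VM is powered off. As a rough sketch, it can also be done from the ESXi Shell; the VM ID of 42 below is just an example taken from the getallvms output, and the exact version string accepted may vary by build:

vim-cmd vmsvc/getallvms
vim-cmd vmsvc/upgrade 42 vmx-09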

vMotion

vMotion no longer requires shared storage.  This has been achieved by combining vMotion and Storage vMotion into a single operation, so when a VM is moved, its memory, processing threads and disk are moved over the network to the target.

Now what is really cool is that it maintains the same performance levels as the older vMotion with shared storage!

Note: I recommend that you use multiple NICs for vMotion, as per my post High Availability for vMotion.
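However you cable it, it is worth proving that each vMotion vmkernel port can reach its opposite number before you rely on it for a shared-nothing migration. A quick sketch from the ESXi Shell, assuming vmk1 is carrying vMotion and 192.168.10.12 is the other host’s vMotion address (both are made up for this example; on 5.1 you should be able to pin the outgoing interface with -I):

esxcli network ip interface list
vmkping -I vmk1 192.168.10.12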

vSphere Replication

Enables virtual machine data to be replicated over the LAN or WAN.  Previously, to achieve 15-minute asynchronous replication you needed sub-2 ms latency.

vSphere Replication integrates with Microsoft’s Volume Shadow Copy Service (VSS), ensuring that applications such as Exchange and SQL Server will be in a consistent state if DR is invoked.

vSphere Replication can be used for up to 500 virtual machines.

The initial seed can be done offline and taken to the destination to save bandwidth and time.

VMware Tools

No more downtime to upgrade VMware Tools.

vSphere Web Client

This is going to be the tool for administering vCenter.  It has some pretty cool features like vCenter Inventory Tagging, which means you can apply metadata to items and then search on it, e.g. group applications together for a particular department or vendor.

We now have the ability to customise the web client to give it ‘our look and feel’.

Always getting called away when you are halfway through adding a vNIC to a VM? Well, we can now pause this and it appears in ‘Work In Progress’, so we never forget to complete an action.

For the pub quiz fans, you can have 300 concurrent Web Client users.

Link Aggregation Control Protocol Support

Used to ‘bind’ several physical connections together for increased bandwidth and protection against link failure (think Cisco Port Channel Groups), this is now a supported feature in vSphere 5.1.

Memory Overhead Reduction

Every task undertaken by vSphere has an overhead; whether this is a vCPU or a vNIC, it requires some attached memory.  A new feature allows a vSphere host that is under pressure to claim up to 1GB of this memory back.

Latency Sensitivity Setting

vSphere 5.1 makes it easier to support low-latency applications (something which I have encountered with Microsoft Dynamics AX).  The ability to ‘tweak’ latency for an individual VM is great.

Storage

We now have 16Gb Fibre Channel support, and the iSCSI storage driver has been upgraded, with some very impressive increases in performance.

Thin provisioning has always been an issue unless your array supported T10 UNMAP.  With vSphere 5.1 a new virtual disk format has been introduced: the ‘sparse virtual disk’, AKA the SE sparse disk.  Its major function is to reclaim space that has previously been used and then freed in the guest OS.  This feature alone is worth the upgrade.
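If you want a play with the new format, vmkfstools appears to be able to create one from the ESXi Shell. This is just a sketch: the size, datastore and filename are made up, and it is worth checking whether your build supports creating SE sparse disks manually before relying on it:

vmkfstools -c 40G -d sesparse /vmfs/volumes/ESXi08HDD/Test/test-sesparse.vmdk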

Fabric Zoning Best Practices

After yesterday’s post on HBAs I was thinking about Fibre Channel, which leads in nicely to today’s post about fabric zoning best practices.

So, what is a ‘Single Initiator Zone’ and why do we implement them?

An initiator is a port on the HBA in your ESXi host; HBAs typically have two ports, or perhaps four, depending on your requirements.
Part of your VMware design would be to have at least two HBAs, each with two ports (initiators), for redundancy. These then connect to the storage processors on your SAN (the targets), which in this example have four ports, two on each disk controller.

We then have two fabric switches for redundancy, to ensure that our SAN continues to receive storage requests if a single fabric switch fails.

Following this through, our ESXi host has ports E1 & E2 on HBA1 and E3 & E4 on HBA2.  The SAN has S1 & S2 on disk controller 1 and S3 & S4 on disk controller 2.  Each HBA and each disk controller has one port cabled to each fabric: E1, E3, S1 & S3 go to Fabric Switch 1, while E2, E4, S2 & S4 go to Fabric Switch 2.

From this we will end up with eight zones, as each zone contains a single initiator and a single target: each of the four initiators is zoned to the two targets on its fabric (4 × 2 = 8).

E1 to S1 via Fabric Switch 1
E1 to S3 via Fabric Switch 1
E2 to S2 via Fabric Switch 2
E2 to S4 via Fabric Switch 2
E3 to S1 via Fabric Switch 1
E3 to S3 via Fabric Switch 1
E4 to S2 via Fabric Switch 2
E4 to S4 via Fabric Switch 2
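Just to make that concrete, here is a rough sketch of what one of the Fabric Switch 1 zones might look like on a Brocade switch using the standard alias, zone and config commands. The alias names and WWPNs below are made up for illustration, so substitute your own and follow the naming convention recommended in the Brocade white paper:

alicreate "ESXi08_E1", "10:00:00:00:c9:aa:bb:01"
alicreate "SAN_CTRL1_S1", "50:06:01:60:12:34:56:01"
zonecreate "Z_ESXi08_E1__SAN_S1", "ESXi08_E1; SAN_CTRL1_S1"
cfgcreate "FABRIC1_CFG", "Z_ESXi08_E1__SAN_S1"
cfgsave
cfgenable "FABRIC1_CFG"

The remaining zones on each fabric are created the same way and added to that fabric’s configuration with cfgadd before re-running cfgsave and cfgenable.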

If you’re like me, then looking at a picture makes a lot more sense.

Brocade produce a ‘Fabric Zoning Best Practices’ White Paper, which is the paper I tend to follow when implementing fabric zoning.

The white paper can be found here

Don’t forget that fabric zoning has nothing to do with LUN masking, which is used to control which servers are allowed to see which LUN.  For example, in a vCenter environment you would normally want all of your hosts to be able to see all of the LUNs for vMotion to work.  The only exception to this would be if you had multiple clusters, where you would LUN mask each cluster’s hosts.
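On the ESXi side, a quick way to sanity check what a host can actually see after any zoning or masking change is a rescan followed by a device listing from the ESXi Shell (just standard esxcli, nothing clever):

esxcli storage core adapter rescan --all
esxcli storage core device list | grep -i 'Display Name'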

Installing HBA Drivers On vSphere 5

Bit of a personal post for me to be honest as I have to keep looking this up!

Depending on the size of the vSphere cluster you are going to install, you might not always have the luxury of time (if you are like me) and therefore cannot create a customised ESXi 5 image with the drivers baked in.

I’m assuming that you have downloaded the ESXi 5 HBA drivers from the manufacturer’s website to your local machine and that you have the vSphere Client installed.  In this demo I will be using some Brocade drivers.

I generally rename any local HDD datastore to the same name as the ESXi host, for example ESXi08HDD.

We then use the vSphere Client to browse the datastore, then create a new folder called ‘Brocade’ to upload the driver file to.

We now need to ensure that we have enabled SSH, which can be done on the console of the ESXi host by logging in, going to ‘Troubleshooting Mode Options’ and then selecting ‘Enable SSH’ (you can enable the ESXi Shell from the same menu).

I use PuTTY as my SSH client, which is available from this link.  SSH onto the ESXi host and enter your credentials.  Note: all of the following commands are entered after the # prompt.

cd /vmfs/volumes/ESXi08HDD/Brocade
ls

We should be able to see the driver file we uploaded. Then run the following commands to copy it to /tmp, extract it, run the vendor install script and reboot the host:

cp brocade.tar.gz /tmp
cd /tmp
tar zxvf brocade.tar.gz
./brocade_install_esxi.sh -n
sync;reboot

Once the host is back online, you will see your HBAs installed and ready for use under Storage Adapters.
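If you prefer to confirm from the command line before you disconnect, something like this should do it (a sketch that assumes the Brocade ‘bfa’ driver package, so adjust the grep for your vendor’s VIB name):

esxcli software vib list | grep -i bfa
esxcli storage core adapter list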