VMware Certified Associate

VMware have announced a new track in the certification program.  Previously the first step was the VCP, which was ‘Professional’ level; this now becomes the second tier.

The first tier is now ‘VMware Certified Associate’.  The good news is that no prerequisites apply, i.e. you don’t need to attend an official VMware course to obtain certification.  However, it is recommended that you take the free self-paced e-learning classes to prepare.

Four exams are available in the VMware Certified Associate track, which are:

  1. VCA – Cloud (VCAC510)
  2. VCA – Data Center Virtualization (VCAD510)
  3. VCA – Workforce Mobility (VCAW510)
  4. VCA – Network Virtualization (soon to be released)


3PAR StoreServ Zoning Best Practice Guide

This is an excellent guide written by Gareth Hogarth, who recently implemented a 3PAR StoreServ and was concerned about the lack of information from HP in relation to zoning.  Being a ‘stand-up guy’, Gareth decided to do a lot of research and has put together the ‘3PAR StoreServ Zoning Best Practice Guide’ below.

This article focuses on zoning best practices for the StoreServ 7400 (a 4-node array), but they can also be applied to all StoreServ models, including the 8-node StoreServ 10800 monster.

3PAR StoreServ Zoning Best Practice Guide

Having worked on a few of these, I found that a single document on StoreServ zoning best practice doesn’t really exist. There also appear to be conflicting arguments on whether to use Single Initiator – Multiple Target zoning or Single Initiator – Single Target zoning. The information herein can be used as a guideline for all 3PAR-supported host presentation types (VMware, Windows, HP-UX, Oracle Linux, Solaris, etc.).

Disclaimer:  Please note that this is based on my own investigation and on engaging with HP Storage Architects and Implementation Engineers. Several support cases were opened in order to gain a better understanding of what is and isn’t supported. HP recommendations change all the time, so it’s always best to speak with HP or your fabric vendor to ensure you are following the latest guidelines, or if you need further clarification.

Right, let’s start off with Fabric Connectivity

In terms of host connectivity options the StoreServ 7000 (specifically the 7400) provides us with the following:

  • 4x built-in 8 Gb/s Fibre Channel ports per node pair.
  • Optional 8 Gb/s Quad Port Fibre Channel HBA (Host Bus Adapter) per node (we will be focusing on this configuration option).
  • Optional 10 Gb/s Dual Port FCoE (Fibre Channel over Ethernet) converged network adapter per node.

StoreServ target ports are identified in the following manner: Node:Slot:Port.

StoreServ target ports located on the on-board HBAs will always assume the slot identity of 1; StoreServ target ports located on the optional expansion slot will always assume the identity of slot 2.

StoreServ nodes are grouped in pairs; it’s important to pay particular attention to this when zoning host initiators (server HBA ports) to the StoreServ target ports.

[Image: StoreServ 7000 host ports]

Recommendations

  • Each HP 3PAR StoreServ node should be connected to two fabric switches.
  • Ports of the same pair of nodes with the same ID (value) should be connected to the same fabric.
  • General rule – odd ports should be connected to fabric 1 and even ports should be connected to fabric 2.

Figure 1a below illustrates physical cabling techniques that mitigate single points of failure, using a minimum of two fabric switches which are separated from each other.

The example below illustrates StoreServ nodes with supplementary quad-port HBAs:

[Figure 1a: StoreServ node-pair cabling]

Moving on to Port Persistence

As already covered by Craig in this blog post, a host port would be connected and zoned on the fabric switch via one initiator (host HBA port) to one HP 3PAR StoreServ target port (one-to-one zoning). The pre-designated HP 3PAR StoreServ backup port must be connected to the same fabric as its partner node port.

It is best practice that a given host port sees a single I/O path to the HP 3PAR StoreServ. As an option, a backup port can be zoned to the same host port as the primary port, which results in the host port seeing two I/O paths to the HP 3PAR StoreServ system. In this configuration, an HP 3PAR StoreServ port can serve as the primary port for a given host port(s) and as the backup port for host port(s) connected to its partner node port.

Persistent ports leverage SAN fabric NPIV (N_Port ID Virtualization) functionality for transparent migration of a host’s connection to a predefined partner port on the HP 3PAR StoreServ array during software upgrades or node failure.

This is accomplished by having a predefined host-facing partner port on the 3PAR StoreServ array, so that in the event of an upgrade (node shutdown) or a node-down status the surviving partner port assumes the identity of the failed port. The whole process is transparent to the host. When the node returns to normal, I/O is failed back to the original target port.

Although unconfirmed, I have heard that in future releases of InForm OS we will get this level of protection at the port level.

Essentially, for this to work Port Persistence requires that corresponding ‘native’ and ‘guest’ StoreServ ports on a node pair be connected to the same Fibre Channel fabric.

Requirements for 3PAR Port Persistence:

  • The same host ports on the host-facing HBAs in the nodes of a node pair must be connected to the same fabric switch.
  • The host facing ports must be set to target mode.
  • The host facing ports must be configured for point-to-point connections.
  • The Fibre Channel fabric must support NPIV and have NPIV enabled on the switch ports.

Checking and enabling NPIV

Brocade Fabric OS (ensure you have the appropriate license which enables NPIV)

admin> portcfgshow <port#>

If the NPIV capability is enabled, the output of the portcfgshow command will identify this, i.e. ‘NPIV capability ON’.

If the NPIV capability is not enabled, you can turn it on with the following command:

admin> portCfgNPIVPort <port#> 1   (1 = on, 0 = off)

Cisco MDS Series Switches

fabSwitch# conf t

fabSwitch(config)# feature npiv   (enables NPIV for all VSANs on the switch)
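To confirm the feature took effect on the MDS and persist it across a reload, the following sketch uses standard NX-OS commands (the prompt name follows the example above):

```
fabSwitch(config)# exit
fabSwitch# show npiv status
fabSwitch# copy running-config startup-config
```

The show npiv status command should report that NPIV is enabled; copying the running config ensures the setting survives a switch reboot.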

QLogic SANbox 3800, 5000 and 9000 Switches

These don’t require a license; NPIV is enabled by default (just ensure you are using firmware version 6.8.0.0.3 or above).

Now let’s cover Switch Zoning (Fibre Channel)

SAN zoning is used to logically group hosts and storage devices together in a physical SAN, so that devices can only communicate with each other if they are in the same SAN zone.

The function of zoning is to:

  • Restrict access so that hosts can only see the data they are authorised to see.
  • Prevent RSCN (Registered State Change Notification) broadcasts.

What are RSCNs? RSCNs are a feature of fabric switches: a service of the fabric that notifies devices of changes in the state of other attached devices, for example when a device is reset, removed or otherwise undergoes a significant change in status.

These broadcasts are made to all members of the configured SAN zone. As hosts and storage targets can be grouped in a zone, it’s best practice to reduce the impact of these types of broadcasts. (Note: an argument against RSCNs causing issues in zoning tables is that newer HBAs do a good job of limiting the impact of these broadcasts.)  Nevertheless, I prefer to keep the number of initiators and targets in a fabric zone to a minimum.

Zoning Types

  • Domain, Port zoning uses switch domain IDs and port numbers to define zones.
  • Port World Wide Name (pWWN) zoning uses port World Wide Names to define zones. Every port on an HBA has a unique pWWN. (A host HBA comprises an nWWN and a pWWN; the nWWN refers to the whole device, whereas the pWWN refers to the individual port.)

The preferred zoning unit for the 3PAR StoreServ is pWWN. If you are currently using Domain, Port zoning, migrating to pWWN is very easy: simply create new zones based on the pWWN of the host and the pWWN of the storage target, add these new zones to your fabric switches, and zone out the references to Domain, Port for that respective HBA port. Some fabric vendors support mixing both Domain, Port and pWWN in the same zone; I prefer using one or the other explicitly.

The following command outputs the StoreServ ports and partner ports, which can be used to identify the node pWWNs for zoning.

3PAR01 cli% showport 

HP 3PAR StoreServ supports the following zoning configurations:

  • Single initiator – Single Target per zone (recommended)
  • Single initiator – Multiple Targets per zone

Use Single Initiator – Single Target per zone over Single Initiator – Multiple Targets per zone to reduce RSCNs, as previously discussed.

At the time of writing, HP 3PAR OS implementation documentation references Single Initiator Multiple Targets as the recommended zoning type. However, when I queried this I was directed to use Single Initiator – Single Target Zoning.  HP support pointed me in the direction of this document which identifies Single Initiator – Single Target zoning as best practice: http://www8.hp.com/h20195/v2/GetDocument.aspx?docname=4AA4-4545ENW

HP will support Single Initiator – Multiple Target, but you should not have a single host initiator attached to more than two StoreServ target ports!

Host port WWNs should be zoned in partner pairs. For example, if a host is zoned to node port 0:2:1, then it should also be zoned to node port 1:2:1. (I’m speculating here, but I guess this is because controller nodes mirror cache I/O, so that in the event of node failure write operations in cache are not lost – hence we zone in node pairs and not across nodes from different pairs.)

After you have zoned the host pWWN to the StoreServ node pWWN, you can use the 3PAR CLI showhost command to ensure that each host initiator is zoned to the correct StoreServ target ports (ensuring initiators go to different targets over different fabrics).

Figure 1b represents a staggered approach where you would have odd numbered VMware hosts connecting to nodes 0 & 1, and even numbered hosts connecting to nodes 2 & 3 (Note: currently the StoreServ is designed to tolerate a single node failure only, this includes the 8-node StoreServ 10800 array).

The example depicts Single Initiator – Single Target zoning, so a host with two HBA ports connecting over two fabrics will have a total of four zones (two per fabric). In case you were wondering, the maximum allowed is eight (also known as the fan-in limitation, which is four per fabric).
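On Brocade Fabric OS, the zone pair for one host HBA port zoned to partner node ports 0:2:1 and 1:2:1 might look like the following sketch (all WWNs, zone names and the config name are illustrative, not from a real system):

```
(one zone per initiator/target pair – WWNs below are made up)
zonecreate "z_esx01_p1_3par_0_2_1", "10:00:00:05:1e:aa:bb:01; 20:21:00:02:ac:00:07:7f"
zonecreate "z_esx01_p1_3par_1_2_1", "10:00:00:05:1e:aa:bb:01; 21:21:00:02:ac:00:07:7f"
(add both zones to the fabric configuration and enable it)
cfgadd "fabric1_cfg", "z_esx01_p1_3par_0_2_1; z_esx01_p1_3par_1_2_1"
cfgsave
cfgenable "fabric1_cfg"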

[Figure 1b: host zoning]

Here are some additional points to be aware of

 Fan-in/Fan-out ratios:

  • Fan-in refers to a host server port connected to several HP 3PAR storage ports via a Fibre Channel switch.
  • Fan-out refers to an HP 3PAR StoreServ storage port connected to more than one host HBA port via a Fibre Channel switch.

Note: fan-in oversubscription represents the flow of data in terms of client initiators to StoreServ target ports. HP/3PAR documentation states that a maximum of four HP 3PAR storage system ports can fan in to a single host server port. If you are thinking ‘great, I’ll connect my VMware host to eight ports (four per fabric)’, think again: using this approach when you have hundreds of hosts can quickly reach the maximum StoreServ port connection limitation, which is 64. It’s just not necessary.

StoreServ Target Port Maximums (As per 3PAR InForm OS 3.1.1 please observe the following):

  • Maximum of 16 host initiators per 2Gb HP 3PAR StoreServ Storage Port
  • Maximum of 32 host initiators per 4Gb HP 3PAR StoreServ Storage Port
  • Maximum of 32 host initiators per 8Gb HP 3PAR StoreServ Storage Port
  • Maximum total of 1,024 host initiators per HP 3PAR StoreServ Storage System

HP documentation states that these recommendations are guidelines; adding more than the recommended number of hosts should only be attempted when the total expected workload has been calculated and shown not to overrun either the queue depth or the throughput of the StoreServ node port.

Note: StoreServ storage ports, irrespective of speed, will negotiate at the lowest speed of the supporting fabric switch (keep this in mind when calculating the number of host connections).

The following focuses on changing the target port queue depth on a VMware ESX environment.

The default setting for target port queue depth on the ESX host can be modified to ensure that the total workload of all servers will not overrun the total queue depth of the target HP StoreServ system port. The method endorsed by HP is to limit the queue depth on a per-target basis. This recommendation comes from limiting the number of outstanding commands on a target (HP 3PAR StoreServ system port), per ESX host.

The following values can be set on the HBA running VMware vSphere. These values limit the total number of outstanding commands the operating system routes to one target port:

  • For Emulex HBA target throttle = tgt_queue_depth
  • For Qlogic HBA target throttle = ql2xmaxqdepth
  • For Brocade HBA target throttle = bfa_lun_queue_depth

(Note: for instructions on how to change these values, follow VMware KB 1267; these values are also adjustable on Red Hat Linux and Solaris.)
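As an illustration of how the QLogic value is typically changed from the ESXi 5.x shell (the value 68 here is hypothetical; always confirm the module name and parameter for your ESXi version against VMware KB 1267):

```
# Set the QLogic per-target queue depth (illustrative value)
esxcli system module parameters set -m qla2xxx -p ql2xmaxqdepth=68
# Confirm the parameter is set (it takes effect after a host reboot)
esxcli system module parameters list -m qla2xxx | grep ql2xmaxqdepth
```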

The formula used to calculate these values is as follows:

(3PAR port queue depth [see below]) / (total number of ESX servers attached) = recommended value

The I/O queue depth for each HP 3PAR StoreServ storage system HBA model is shown below:

Note: the I/O queues are shared among the connected host server HBA ports on a first-come, first-served basis.

HP 3PAR StoreServ Storage HBA I/O queue depth values:

  • QLogic 2Gb – 497
  • LSI 2Gb – 510
  • Emulex 4Gb – 959
  • HP 3PAR HBA 4Gb – 1638
  • HP 3PAR HBA 8Gb – 3276
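As a worked example of the formula above (the host count of 48 is hypothetical), an 8Gb HP 3PAR HBA port shared by 48 ESX hosts would give:

```shell
# Hypothetical example: 48 ESX hosts sharing one 8Gb HP 3PAR HBA port
port_queue_depth=3276   # from the I/O queue depth values above
esx_host_count=48       # hypothetical number of attached ESX servers
# Integer division rounds down, keeping the total under the port's queue depth
echo $((port_queue_depth / esx_host_count))
```

Each host’s target throttle for that StoreServ port would then be set to 68 via the HBA parameters listed earlier.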

Well, hopefully you found the above information useful. Here is a high level summary of what we have discussed:

  • Identify and enable NPIV on your fabric switches (Fibre Channel only feature – NPIV-Port Persistence is not present in iSCSI environments)
  • Use Single Initiator -> Single Target zoning (HP will support Single Initiator – Multiple Target, but you should not have a single host initiator attached to more than two StoreServ target ports).
  • A maximum of four HP 3PAR Storage System ports can fan-in to a single host server port.
  • Zoning should be done using pWWN. You should not use switch port/Domain ID or nWWN.
  • A host (non-hypervisor) should be zoned with a minimum of two ports from the two nodes of the same pair. In addition, the ports from a host zoning should be mirrored across nodes.
  • Hosts need to be zoned to node pairs. For example, zoned to nodes 0 and 1 or to nodes 2 and 3. Hosts should NOT be zoned to non-mirrored nodes such as 0 and 3.
  • When using hypervisors, avoid connecting more than 16 initiators per 4 Gb/s port or more than 32 initiators per 8 Gb/s port.
  • Each HP 3PAR StoreServ system has a maximum number of supported initiators, which depends on the model and configuration.
  • A single HBA zoned with two FC ports will be counted as two initiators. A host with two HBAs, each zoned with two ports, will count as four initiators.
  • In order to keep the number of initiators below the maximum supported value, use the following recommendations:
    • Hypervisors: four paths maximum.
    • Other hosts (non-hypervisors): two paths to two different nodes of the same port pairs.
  • Hypervisors can be zoned to four different nodes, but the hypervisor HBAs must be zoned to the same host port on the HBAs in the nodes of each node pair.

Reference Documents

HP SAN Design Reference Zoning Recommendations

HP 3PAR InForm® OS 3.1.1 Concepts Guide

The HP 3PAR Architecture

HP UX 3PAR Implementation Guide

HP 3PAR Red Hat Enterprise Linux and Oracle Linux Implementation Guide

HP 3PAR VMware ESX Implementation Guide

HP 3PAR StoreServ Storage and VMware vSphere 5 best practices

HP 3PAR Windows Server 2012, Server 2008 Implementation Guide

HP Brocade Secure Zoning Best Practices

HP 3PAR Peer Persistence Whitepaper

An introduction to HP 3PAR StoreServ for the EVA Administrator

Building SANs with Brocade Fabric Switches by Syngress

What’s New? StoreVirtual VSA – LeftHand OS 11.0

It’s no secret that I’m a fan of the StoreVirtual, which you can see by the number of blog posts I have made about the subject.

HP have announced the next iteration of LeftHand OS, version 11.0, which has a number of enhancements covered by Kate Davis (@KateAtHP).  These include:

  • Smarter updates with Online Upgrade enhancements to identify updates per management group, plus you can choose to only download newer versions, hooray!
  • Faster performance for the command-line interface, improving response times for provisioning and decommissioning of storage, and retrieving info about management groups, volumes and clusters
  • Increased IO performance on VMware vSphere with support for ParaVirtualized SCSI Controller (PV SCSI) which provides more efficient CPU utilization on the host server
  • More control over application-managed snapshots for VMware and Microsoft administrators with quicker and simpler install and configuration process
  • Optimization of snapshot management to minimize the burden on the cluster when handling high-frequency snapshot schedules with long retention periods
  • Fibre Channel support for HP StoreVirtual Recovery Manager, so servers with FC connectivity to StoreVirtual clusters can be used to recover files and folders from snapshots.
  • LeftHand OS 11.0 will be certified with at least one 10GbE card for use with StoreVirtual VSA at launch.

What I’m most excited about is the new Adaptive Optimization feature introduced in LeftHand OS 11.0.  Last night Calvin Zito (@HPStorageGuy) hosted a live podcast covering AO in more depth.  So without further ado:

  • Adaptive Optimization will be completely automated, with a simple on or off.
  • Adaptive Optimization will work automatically, i.e. no schedule
  • Adaptive Optimization will use a ‘heat tier’ map to work out the hot areas and check the I/O and CPU levels; if these are high, AO will not move the blocks but will wait until I/O and CPU levels have dropped and then perform the region moves.
  • Adaptive Optimization will allow for support of two storage tiers and works at node level.
  • Adaptive Optimization will use a chunk size of 256K for region moves.
  • Adaptive Optimization will work on ‘thick’ and ‘thin’ volumes
  • Adaptive Optimization will work on all snapshots of a given volume.
  • Adaptive Optimization will be included for free for anyone who has a StoreVirtual VSA 10TB license already.
  • Adaptive Optimization will not be included for the new 4TB StoreVirtual VSA license
  • Adaptive Optimization will work with PCIe Flash, SSD, SAS and SATA drives.

During the podcast I asked a number of questions, one of which was about the potential to use HP StoreVirtual VSA with HP IO Accelerator cards in C7000 blades with local storage for VDI deployments.  The StoreVirtual representative (who was at LeftHand Networks before HP acquired them) mentioned this is one of the primary use cases for AO and that they are going to be performing some benchmarks.

The StoreVirtual representative was also able to field a number of other questions about the StoreVirtual road map:

  1. T10 UNMAP will be coming, just not in LeftHand OS 11.0
  2. Changes to LeftHand OS will be made to allow manual adjustments to gateway connections for vSphere Metro Storage Clusters (see this blog post).
  3. Adaptive Optimization is likely to be coming to the physical StoreVirtual.

We also spoke about performance. The StoreVirtual representative explained about all the lab tests they had performed; to get StoreVirtual working at its correct capacity, you should try to keep the number of nodes per management group to 32 and have a maximum of 16 clusters.

Moving On

Today is my last day at Mirus IT. The last four years with the business have been some of the best; yeah, I know it sounds corny, but it really has been.

Mirus is a dynamic business and as such has supported me in everything I wanted to learn, allowing me to go out and design and install some awesome customer solutions, such as:

  • Datacenter consolidation projects
  • Replicated HP 3PAR SANs
  • More Lefthands, oops I mean StoreVirtual than I care to remember
  • DR with Site Recovery Manager
  • Veeam Backup & DR Backup (essentially backups available in DR)
  • Exchange DAGs
  • SQL Clusters
  • VPLS/MPLS solutions
  • Clustered Cisco ASA Firewalls
  • Network transformation projects, extending VLANs across non-geographic locations
  • VMware Horizon View

The list goes on. As with any business, they have some great people, a few of whom I would consider friends, in engineering, sales and pre-sales.  These are the people who make going to work even more fun.

So why have I decided to end this chapter? Well, it’s time to depart on a ‘high note’. Four years is a long time, and I feel that Mirus have had the best from me and I have given my best to them.  When you are an employee, you need to recognize when you will potentially stagnate and either decide to accept this or move on to do something new.

An opportunity arose at SCC to join them as a Solution Architect.  If you aren’t aware, SCC are Europe’s largest independent technology solutions provider.

So what will I be doing at SCC? Well, I will be working with a group of Solution Architects, engaging in pre-sales activities and designing solutions for customers with a focus on vSphere/Horizon View/Workspace, and naturally the networking, storage and applications that come with them. Something I’m looking forward to getting stuck into.

SCC have already exceeded my expectations by arranging ‘welcome’ evening drinks so that I could meet the team before I started, something which, on reflection, isn’t generally done by most employers.  I’m sure most of you have had the same experience as me: on the first day you meet HR and go over the obligatory manual handling procedures, and then you meet colleagues over the coming days, weeks and months.

The new challenge starts next Monday the 19th; it’s going to be epic!

Pre Sales – Design Considerations

Following on from the previous blog post ‘Whats This Pre Sales Thing All About?‘, which was aimed at understanding what a Pre Sales Engineer does, I thought it would be relevant to put together a blog post on design considerations.

This isn’t meant to be a technical post; rather, it covers the infrastructure pieces you should be questioning so that your solution isn’t missing any essential pieces.  It isn’t going to be a complete cover-all, but hopefully it will send you down the right path and get you asking more questions about your design!

Business Considerations

Generally speaking, I normally lead with business considerations: trying to understand what the client is looking to achieve and anything that could influence the design.

  1. What is the business driver behind the work?
  2. Does the business have to comply with any legislation?
  3. Does the business comply with any governance such as infrastructure security risk policies?
  4. Does the business have plans for contraction or expansion over the next three to five years?
  5. Will the business be opening any new offices?
  6. Is the business considering any mergers or take overs?
  7. What growth is required from the infrastructure in terms of capacity and performance?
  8. Anything else you think we should be made aware of?

Applications/Software

These are often the reason you are sitting in front of the customer having a discussion about the infrastructure required for the new piece of software.

  1. List your applications in terms of priority.
  2. How long can these applications be out of action?
  3. Are you adding any new applications?
  4. What are the application interdependencies?
  5. What applications are you upgrading/changing?
  6. Are any applications latency sensitive?
  7. Does application clustering need to be considered?
  8. How is the application going to be packaged?
  9. How is the application going to be delivered to the user’s device?
  10. How is the application going to be managed on an ongoing basis?

Networking

The network is key; always consider optimal routing paths. For example, if you have a managed firewall at a colo but your DMZ sits in production, consider having a firewall in production for the DMZ so that traffic from WAN > DMZ > LAN doesn’t trombone across the VPLS/MPLS.

  1. What VLANs/subnets are used, and for what purpose?
  2. What is the bandwidth between sites?
  3. What is the latency between sites?
  4. Are links Layer 2 or Layer 3?
  5. What routing protocols are used?
  6. Is QoS being used?
  7. What are you using for DHCP at each site, are relays in place?
  8. Does remote access need to be considered? If so who requires it?
  9. Is clientless access a requirement for remote access?
  10. Is two factor authentication a requirement?
  11. Does a reverse proxy need to be included to facilitate software such as Lync?
  12. Do load balancers (local/global) need to be considered?
  13. Are HA firewalls required with no session loss?
  14. Is IDS required?
  15. Are diverse WAN links required at all sites?
  16. What encryption/authentication is required for VPNs?
  17. Does the encryption domain need to be NAT’d?
  18. Is LACP being used between Core and Edge switches?
  19. Would stretching VLANs help the design for backups, replication or WAN failure?
  20. Are enough network ports available?

Storage

Almost as key as networking, consider your performance and capacity requirements now and also for the future.

  1. What capacity is required?
  2. What are the back end/front end IOPS?
  3. What latency is required?
  4. What is the read/write ratio and the write penalty?
  5. Is snapshot/replication needed? If so, does it need to be ‘sync’ or ‘async’?
  6. Can the SAN grow to meet the capacity/performance requirements?
  7. What availability does the SAN need to provide e.g. does it need to be clustered?
  8. Does the customer have existing iSCSI/fabric switches that can be utilized?
  9. Does block size need to be adjusted?
  10. Is VAAI a requirement?
  11. Is Thin Provisioning supported and can the SAN stay thin using T10 UNMAP?
  12. Is de-duplication a consideration?
  13. Does an existing SAN need to be decommissioned? If so, how are the volumes/data going to be migrated?

vSphere

If the storage and networking are right, then the vSphere design should be a walk in the park.  Remember, if you are performing a capacity assessment on a Windows Server 2003 environment and the customer is moving to Windows Server 2012, you need to allow for extra memory/CPU/disk to accommodate this.

Note: any items already mentioned in previous sections should also be considered for the vSphere environment.

  1. What redundancy is required? N+1, N+2, etc.
  2. How many vCenters are needed?
  3. What database is going to be used for vCenter components?
  4. How many hosts are needed?
  5. How many virtual machines will be required?
  6. What is the memory overhead of the VMs?
  7. Are queue depths a consideration? (How many VMs will be placed on each datastore?)
  8. Moving from VMFS3 to VMFS5?
  9. Considering host evacuation, is scale-up or scale-out right?
  10. How are the hosts going to be patched?
  11. What permissions are required for vCenter?
  12. What service accounts are required to run all vCenter components?
  13. What networking is required at vSwitch level? LACP, Route based on virtual NIC load?
  14. Do we need to pass any devices through to VMs directly?
  15. Do any VMs require high performance/low latency guarantees?
  16. Are resource pools required?
  17. How is the vSphere environment going to be monitored?
  18. How many NICs are required for LAN, DMZ, WAN, iSCSI, NFS, vMotion, FT and Management?
  19. What identity sources are required for SSO?
  20. Do the default vCenter certificates need to be replaced?
  21. Which HA policy is most suitable?
  22. Do Storage DRS rules need to be considered?
  23. What Anti-Affinity and Affinity rules are required?
  24. What firewall rules are required?
  25. What VMs need to be restarted, and in what order, if a failure occurs?
  26. Does VM monitoring need to be implemented?
  27. How are alerts going to be generated?
  28. Where are any ISOs etc. going to be held?
  29. Is network traffic management or optimization required?
  30. Is boot from SAN a requirement?
  31. Is link state tracking required for downstream ports?
  32. Do MTUs need to be considered?
  33. Does EVC mode need to be enabled?
  34. How many VM templates are needed?
  35. What VMDK types will be needed, Thick Eager Zeroed, Lazy Zeroed, Thin?

Backups

You have this ‘shiny’ new infrastructure; how is it going to be backed up?

  1. What RPO/RTO is required?
  2. Does the ‘3 backup copies, 2 onsite, 1 offsite’ rule apply?
  3. What’s the backup window (if any)?
  4. What backup media is going to be used?
  5. What types of backups are required, full, incremental, differential, reverse etc?
  6. How are the backups going to get from source to destination?
  7. What backup throughput is needed?
  8. What impact can backups have on production servers during working hours?
  9. Do backups need to be available in DR?
  10. Does backup validation need to be considered (will the backups work if needed)?

DR

This is one of the broadest subjects that can be narrowed down quickly by asking the right questions.

  1. What is the impact to the business if you aren’t able to work for 24, 48 and 72 hours?
  2. Does all of the data need to be available in DR?
  3. Do all of the servers need to be able to run in DR?
  4. Do you need the ability to perform test failovers?
  5. What is the data change rate?
  6. What is the time frame allowed to have users up and working in DR?
  7. What percentage of users need to work in DR?
  8. What servers need to be running in DR on a permanent basis, e.g. SQL, vCenter, DC?
  9. Are you willing to accept a performance hit in DR?
  10. How are you going to failover services such as email/remote access?
  11. Will the servers’ subnets/IP addresses/default gateways/DNS need to change?