Azure AD: Transfer Subscriptions or Directory?

With the increased uptake of Azure across both public and private businesses, we are starting to see identity gaps across business divisions creating pockets of isolation.

In the diagram below we have a single Enterprise Enrollment which has two Azure Accounts, one for Online Services and another for Retail Stores. Underneath these we then have two Azure Subscriptions, one for Development and the other for Production.

Azure Accounts & Subscrptions v0.1.png

You might be wondering what the issue is. Well, in this scenario we have a single on-premises corporate directory that services both 'Online Services' and 'Retail Stores'.

  • 'Online Services' have set up their on-premises corporate directory to integrate with Azure AD, so that their starters and leavers process is controlled using their existing directory service.
  • 'Retail Stores', on the other hand, have no integration with the on-premises corporate directory and are using the default onmicrosoft.com accounts.

Both business divisions have rolled out Production & Development services, but we need to close the security gap to ensure that both divisions are using the corporate directory as part of their identity model.

To achieve this we have two choices available to us: transfer the directory, or transfer the subscription.

A subscription can only be associated with a single directory.
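If you want to confirm which directory each subscription currently trusts before deciding which route to take, the Azure CLI can show this (a quick sketch, assuming the Azure CLI is installed and you have signed in; the TenantId column identifies the directory):

az login
az account list --query "[].{Name:name, SubscriptionId:id, TenantId:tenantId}" --output table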

The next part of this blog post has been written by my colleague Graham Lindsay, Lead Architect and one of our identity experts.

Transfer Directory

This will not change the Account Admin or the billing; it purely changes which directory the subscription is linked to, and can be completed using portal.azure.com.

Create a guest B2B account in the receiving directory using the email address of the Service Admin of the subscription to be moved. This can be a standard non-admin user.

Transfer 01

From the service admin account accept the B2B invite.

Transfer 02.jpg

Once the service admin account has accepted the B2B invite, it will be able to see the receiving directory within the directory switcher.

Transfer 03.jpg

Staying within the directory that currently hosts the subscription (TestCorp), locate the subscription to be transferred and choose 'Change directory'.

Transfer 04.jpg

From the drop-down, choose the receiving directory (GrahamLab).

Transfer 05.jpg

Once the change has occurred, the subscription will no longer be accessible in the TestCorp directory.
Transfer 06.jpg

Using the directory switcher, switch to the receiving directory.

Transfer 07.jpg

 

Open Subscriptions and you will see that the subscription has now moved. You can now rebuild the RBAC on the subscription; a quick example follows the screenshot below.

Transfer 08.jpg
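As a rough illustration of rebuilding the RBAC, a role assignment can be re-created from the Azure CLI along the following lines (the user, role and subscription ID below are placeholders; substitute your own values and the roles you previously had in place):

az role assignment create --assignee graham@grahamlab.onmicrosoft.com --role "Contributor" --scope "/subscriptions/<subscription-id>"

Repeat for each user or group that previously held a role on the subscription.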

Transfer Subscription

First of all, it's worth noting that only the following subscription types can be transferred:

  • Enterprise Agreement (EA) MS-AZR-0017P
  • Microsoft Partner Network MS-AZR-0025P
  • MSDN Platforms MS-AZR-0062P
  • Pay-As-You-Go MS-AZR-0003P
  • Pay-As-You-Go Dev/Test MS-AZR-0023P
  • Visual Studio Enterprise MS-AZR-0063P
  • Visual Studio Enterprise: BizSpark MS-AZR-0064P
  • Visual Studio Professional MS-AZR-0059P
  • Visual Studio Test Professional MS-AZR-0060P

Subscriptions can only be transferred to someone in the same country

Transferring a subscription changes ownership of the entire subscription, including billing.

  • For Enterprise Agreements this is done in the EA portal
  • For Non-Enterprise Agreements this is done in the billing portal

Within the billing portal, locate the subscription to be transferred and choose 'Transfer subscription'.

Transfer 09.jpg

From here you can change just the Account Admin, or you can change the Account Admin and the directory the subscription is linked to. To transfer the whole thing and change the service administrator as well, untick 'retain this subscription with my Azure AD'.

Transfer 10.jpg

Enter the name of the account that will be taking over the subscription (I chose to switch the Azure AD directory too).

Transfer 11.jpg

The following screen is shown, confirming that the transfer has started.

Transfer 12.jpg

The receiving party will also receive an email with a link to initiate the transfer. Clicking this link presents the following screens.

Transfer 13.jpg

The subscription is now shown as transferred in the sending portal.

Transfer 14.jpg

The subscription is now showing as active in the receiving portal.

Transfer 16.jpg

As you can see, the service admin is updated too.

Transfer 20.png

3PAR StoreServ Zoning Best Practice Guide

This is an excellent guide written by Gareth Hogarth, who recently implemented a 3PAR StoreServ and was concerned about the lack of information from HP in relation to zoning. Being a 'stand-up guy', Gareth decided to perform a lot of research and has put together the '3PAR StoreServ Zoning Best Practice Guide' below.

This article focuses on zoning best practices for the StoreServ 7400 (4 node array), but can also be applied to all StoreServ models including the StoreServ 10800 8-node monster.

3PAR StoreServ Zoning Best Practice Guide

Having worked on a few of these, I found that a single document on StoreServ zoning best practice doesn't really exist. There also appear to be conflicting arguments on whether to use Single Initiator – Multiple Target zoning or Single Initiator – Single Target zoning. The information herein can be used as a guideline for all 3PAR supported host presentation types (VMware, Windows, HP-UX, Oracle Linux, Solaris etc.).

Disclaimer: Please note that this is based on my own investigation, engaging with HP Storage Architects and Implementation Engineers. Several support cases were opened in order to gain a better understanding of what is and isn't supported. HP recommendations change all the time, so it's always best to speak with HP or your fabric vendor to ensure you are following the latest guidelines or if you need further clarification.

Right, let’s start off with Fabric Connectivity

In terms of host connectivity options the StoreServ 7000 (specifically the 7400) provides us with the following:

  • 4x built-in 8 Gb/s Fibre Channel ports per node pair.
  • Optional 8 Gb/s Quad Port Fibre Channel HBA (Host Bus Adapter) per node (we will be focusing on this configuration option).
  • Optional 10 Gb/s Dual Port FCoE (Fibre Channel over Ethernet) converged network adapter per node.

StoreServ target ports are identified in the following manner: Node:Slot:Port.

StoreServ target ports located on the on-board HBAs will always assume the slot identity of 1, while target ports located on the optional expansion slot will always assume the identity of slot 2.

StoreServ nodes are grouped in pairs, and it's important to pay particular attention to this when zoning host initiators (server HBA ports) to the StoreServ target ports.

StoreServ7000-HostPorts

Recommendations

  • Each HP 3PAR StoreServ node should be connected to two fabric switches.
  • Ports of the same pair of nodes with the same ID (value) should be connected to the same fabric.
  • General rule – odd ports should be connected to fabric 1 and even ports should be connected to fabric 2.

Figure 1a below identifies physical cabling techniques, mitigating single points of failure by using a minimum of two fabric switches which are separated from each other.

The example below illustrates StoreServ nodes with supplementary quad port HBAs:

figure 1a_StoreServ_nPcabling

Moving on to Port Persistence

As already covered by Craig in this blog post, a host port would be connected and zoned on the fabric switch via one initiator (host HBA port) to one HP 3PAR StoreServ target port (one-to-one zoning). The pre-designated HP 3PAR StoreServ backup port must be connected to the same fabric as its partner node port.

It is best practice that a given host port sees a single I/O path to HP 3PAR StoreServ. As an option, a backup port can be zoned to the same host port as the primary port, which would result in the host port seeing two I/O paths to the HP 3PAR StoreServ system. This would also result in the configuration where a HP 3PAR StoreServ port can serve as the primary port for a given host port(s) and backup port for host port(s) connected to its partner node port.

Persistent ports leverage SAN fabric NPIV functionality (N_Port ID Virtualization) for transparent migration of a host's connection to a predefined partner port on the HP 3PAR StoreServ array during software upgrades or node failure.

One of the ways this is accomplished is by having a predefined host-facing partner port on the 3PAR StoreServ array, so that in the event of an upgrade (node shutdown) or a node-down event, the partner port assumes the identity of the failed port. The whole process is transparent to the host. When the node returns to normal, I/O is failed back to the original target port.

Although unconfirmed, I have heard that in future releases of InForm OS we will get this level of protection at the port level.

Essentially, for this to work Port Persistence requires that corresponding 'native' and 'guest' StoreServ ports on a node pair be connected to the same Fibre Channel fabric.

Requirements for 3PAR Port Persistence:

  • The same host ports on the host-facing HBAs in the nodes in a node pair must be connected to the same fabric switch.
  • The host facing ports must be set to target mode.
  • The host facing ports must be configured for point-to-point connections.
  • The Fibre Channel fabric must support NPIV and have NPIV enabled on the switch ports.

Checking and enabling NPIV

Brocade Fabric OS (ensure you have the appropriate license which enables NPIV)

admin> portcfgshow 'port#'

If the NPIV capability is enabled, the results of the portcfgshow command will identify this, i.e. NPIV capability ON.

If the NPIV capability is not enabled, you can turn it on with the following command:

admin> portCfgNPIVPort 'port#' 1   (1 = on, 0 = off)

 Cisco MDS Series Switches

fabSwitch # conf t

fabSwitch(config) # feature npiv (Enables NPIV for all VSANs on the switch)
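To double-check that NPIV is now active, something like the following should do the trick on NX-OS (treat this as a sketch; consult the MDS documentation for your release):

fabSwitch # show feature | include npiv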

QLogic SANbox 3800, 5000 and 9000 Switches

These don't require a license; NPIV is enabled by default (just ensure you are using firmware version 6.8.0.0.3 or above).

Now let’s cover Switch Zoning (Fibre Channel)

SAN zoning is used to logically group hosts and storage devices together in a physical SAN, so that authorised devices can only communicate with each other if they are in the same SAN zone.

The function of zoning is to:

  • Restrict access so that hosts can only see the data they are authorised to see.
  • Prevent RSCN (Registered State Change Notification) broadcasts.

What are RSCNs? RSCNs are a feature of fabric switches. It's a service of the fabric that notifies devices of changes in the state of other attached devices, for example if a device is reset, removed or otherwise undergoes a significant change in status.

These broadcasts are made to all members in the configured SAN zone. As hosts and storage targets can be grouped in a zone, it's best practice to reduce the impact of these types of broadcasts (Note: an argument against RSCNs causing issues in zoning tables is that newer HBAs do a good job of limiting the impact of these types of broadcasts). Nevertheless, I prefer limiting the number of initiators and targets in a fabric zone to a minimum.

Zoning Types

  • Domain, Port zoning uses switch domain IDs and port numbers to define zones.
  • Port World Wide Name or pWWN zoning uses port World Wide Names to define zones. Every port on an HBA has a unique pWWN. (A host HBA comprises an nWWN and a pWWN; the nWWN refers to the whole device, whereas the pWWN refers to the individual port.)

The preferred zoning unit for the 3PAR StoreServ is pWWN. If you are currently using Domain, Port, migrating to pWWN is very easy: simply create new zones based on the pWWN of the host and the pWWN of the storage target, add these new zones to your fabric switches, and zone out the references to Domain, Port for that respective HBA port. Some fabric vendors support mixing both Domain, Port and pWWN in the same zone; I prefer using one or the other explicitly.

The following command outputs the StoreServ ports and partner ports, which can be used to identify the node pWWNs for zoning.

3PAR01 cli% showport 

HP 3PAR StoreServ supports the following zoning configurations:

  • Single initiator – Single Target per zone (recommended)
  • Single initiator – Multiple Targets per zone

Use Single Initiator – Single Target per zone over Single Initiator – Multiple Targets per zone to reduce RSCNs, as previously discussed.

At the time of writing, HP 3PAR OS implementation documentation references Single Initiator Multiple Targets as the recommended zoning type. However, when I queried this I was directed to use Single Initiator – Single Target Zoning.  HP support pointed me in the direction of this document which identifies Single Initiator – Single Target zoning as best practice: http://www8.hp.com/h20195/v2/GetDocument.aspx?docname=4AA4-4545ENW

HP will support Single Initiator – Multiple Target, but you should not have a single host initiator attached to more than two StoreServ target ports!

Host port WWNs should be zoned in partner pairs. For example, if a host is zoned to node port 0:2:1, then it should also be zoned to node port 1:2:1 (I'm speculating here, but I guess this is because controller nodes mirror cache I/O, so that in the event of node failure write operations in cache are not lost – hence we zone in node pairs and not across nodes from different pairs).
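To make this concrete, here is roughly what the fabric 1 zoning for a single host HBA port might look like on a Brocade switch, using Single Initiator – Single Target zones to the partner pair 0:2:1 and 1:2:1 (the alias names and WWNs below are made up for illustration, and I'm assuming an existing zoning configuration called FABRIC1_CFG; substitute your own values and repeat the equivalent on fabric 2):

admin> alicreate "ESX01_HBA1", "21:00:00:24:ff:40:1a:b2"
admin> alicreate "3PAR01_0_2_1", "20:21:00:02:ac:00:1b:d6"
admin> alicreate "3PAR01_1_2_1", "21:21:00:02:ac:00:1b:d6"
admin> zonecreate "ESX01_HBA1_3PAR01_021", "ESX01_HBA1; 3PAR01_0_2_1"
admin> zonecreate "ESX01_HBA1_3PAR01_121", "ESX01_HBA1; 3PAR01_1_2_1"
admin> cfgadd "FABRIC1_CFG", "ESX01_HBA1_3PAR01_021; ESX01_HBA1_3PAR01_121"
admin> cfgsave
admin> cfgenable "FABRIC1_CFG"

This gives the host HBA two zones on this fabric (one per target port), which lines up with the four-zone total per host described in the example further down.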

After you have zoned the host pWWN to the StoreServ node pWWN, you can use the 3PAR CLI showhost command to ensure that each host initiator is zoned to the correct StoreServ target ports (ensuring initiators go to different targets over different fabrics).

Figure 1b represents a staggered approach where you would have odd numbered VMware hosts connecting to nodes 0 & 1, and even numbered hosts connecting to nodes 2 & 3 (Note: currently the StoreServ is designed to tolerate a single node failure only; this includes the 8-node StoreServ 10800 array).

The example depicts Single Initiator – Single Target zoning, so a host with two HBA ports connecting over two fabrics will have a total of four zones (two per fabric). In case you were wondering, the maximum allowed is eight (also known as the fan-in limitation, which is four per fabric).

figure 1b_host_zoning

Here are some additional points to be aware of

 Fan-in/Fan-out ratios:

  • Fan-in refers to a host server port connected to several HP 3PAR storage ports via Fibre Channel switch.
  • Fan-out refers to the HP 3PAR StoreServ storage port that is connected to more than one host HBA port via Fibre Channel switch.

Note: Fan-in oversubscription represents the flow of data in terms of client initiators to StoreServ target ports. HP/3PAR documentation states that a maximum of four HP 3PAR storage system ports can fan-in to a single host server port (if you are thinking 'great, I'll connect my VMware host to eight ports, four per fabric', think again: using this approach when you have hundreds of hosts can quickly reach the maximum StoreServ port connection limitation, which is 64). It's just not necessary.

StoreServ Target Port Maximums (As per 3PAR InForm OS 3.1.1 please observe the following):

  • Maximum of 16 host initiators per 2Gb HP 3PAR StoreServ Storage Port
  • Maximum of 32 host initiators per 4Gb HP 3PAR StoreServ Storage Port
  • Maximum of 32 host initiators per 8Gb HP 3PAR StoreServ Storage Port
  • Maximum total of 1,024 host initiators per HP 3PAR StoreServ Storage System

HP documentation states that these recommendations are guidelines; adding more than the recommended number of hosts should only be attempted when the total expected workload has been calculated and shown not to overrun either the queue depth or the throughput of the StoreServ node port.

Note: StoreServ storage ports, irrespective of speed, will negotiate at the lowest speed of the supporting fabric switch (keep this in mind when calculating the number of host connections).

The following focuses on changing the target port queue depth in a VMware ESX environment.

The default setting for target port queue depth on the ESX host can be modified to ensure that the total workload of all servers will not overrun the total queue depth of the target HP StoreServ system port. The method endorsed by HP is to limit the queue depth on a per-target basis. This recommendation comes from limiting the number of outstanding commands on a target (HP 3PAR StoreServ system port), per ESX host.

The following values can be set on the HBA running VMware vSphere. These values limit the total number of outstanding commands the operating system routes to one target port:

  • For Emulex HBA target throttle = tgt_queue_depth
  • For Qlogic HBA target throttle = ql2xmaxqdepth
  • For Brocade HBA target throttle = bfa_lun_queue_depth

(Note: for instructions on how to change these values, follow VMware KB 1267; these values are also adjustable on Red Hat Linux and Solaris.)

The formula used to calculate these values is as follows:

(3PAR port queue depth [see below]) / (total number of ESX servers attached) = recommended value

The I/O queue depth for each HP 3PAR StoreServ storage system HBA mode is shown below:

Note: The I/O queues are shared among the connected host server HBA ports on a first come first serve basis.

HP 3PAR StoreServ Storage HBA I/O queue depth values:

  • QLogic 2Gb: 497
  • LSI 2Gb: 510
  • Emulex 4Gb: 959
  • HP 3PAR HBA 4Gb: 1638
  • HP 3PAR HBA 8Gb: 3276
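As a worked example, 16 ESX hosts all sharing an HP 3PAR 8Gb HBA port would give 3276 / 16 ≈ 204, so each host's per-target throttle would be set to around 204 or lower. On a QLogic HBA that might look something like the following from the ESXi shell (a sketch only; the module and option names vary by driver version, so follow VMware KB 1267 for your specific HBA, and note that a reboot is required):

~ # esxcfg-module -s "ql2xmaxqdepth=204" qla2xxx
~ # esxcfg-module -g qla2xxx   (confirms the option string has been set)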

Well, hopefully you found the above information useful. Here is a high level summary of what we have discussed:

  • Identify and enable NPIV on your fabric switches (Fibre Channel only feature – NPIV-Port Persistence is not present in iSCSI environments)
  • Use Single Initiator -> Single Target zoning (HP will support Single Initiator – Multiple Target, but you should not have a single host initiator attached to more than two StoreServ target ports).
  • A maximum of four HP 3PAR Storage System ports can fan-in to a single host server port.
  • Zoning should be done using pWWN. You should not use switch port/Domain ID or nWWN.
  • A host (non-hypervisor) should be zoned with a minimum of two ports from the two nodes of the same pair. In addition, the ports from a host zoning should be mirrored across nodes.
  • Hosts need to be zoned to node pairs. For example, zoned to nodes 0 and 1 or to nodes 2 and 3. Hosts should NOT be zoned to non-mirrored nodes such as 0 and 3.
  • When using hypervisors, avoid connecting more than 16 initiators per 4 Gb/s port or more than 32 initiators per 8 Gb/s port.
  • Each HP 3PAR StoreServ system has a maximum number of initiators supported, that depends on the model and configuration.
  • A single HBA zoned with two FC ports will be counted as two initiators. A host with two HBAs, each zoned with two ports, will count as four initiators.
  • In order to keep the number of initiators below the maximum supported value, use the following recommendations:
    • Hypervisors: four paths maximum.
    • Other hosts (non-hypervisors): two paths to two different nodes of the same port pairs.
  • Hypervisors can be zoned to four different nodes but the hypervisor HBAs must be zoned to the same Host Port on HBAs in the nodes for each Node Pair.

Reference Documents

HP SAN Design Reference Zoning Recommendations

HP 3PAR InForm® OS 3.1.1 Concepts Guide

The HP 3PAR Architecture

HP UX 3PAR Implementation Guide

HP 3PAR Red Hat Enterprise Linux and Oracle Linux Implementation Guide

HP 3PAR VMware ESX Implementation Guide

HP 3PAR StoreServ Storage and VMware vSphere 5 best practices

HP 3PAR Windows Server 2012, Server 2008 Implementation Guide

HP Brocade Secure Zoning Best Practises

HP 3PAR Peer Persistence Whitepaper

An introduction to HP 3PAR StoreServ for the EVA Administrator

Building SANs with Brocade Fabric Switches by Syngress

Virtual Machine Restart Priority

We are all guilty of doing this: we design and install a beautifully crafted vSphere 5 environment following best practices for HA and host isolation responses, and we set up admission control to meet the client's requirements. We then pass the VMware environment back to the client to manage and maintain themselves.

The client has a hardware failure and the VMs are restarted on an alternative host; excellent, we say. However, the client is far from happy, as we didn't mention or configure 'virtual machine restart priority' and they encountered complications when the VMs came up in the wrong order.

In essence, virtual machine restart priority enables selected virtual machines to start before other virtual machines, overriding the cluster's default settings. To configure virtual machine restart priority:

– Right Click Cluster
– Edit Settings
– Virtual Machine Options
– Virtual Machine Settings > VM Restart Priority

Let's look at the following scenarios.

Scenario A

Client has VMware Standard licensing, which means they don't have DRS. They have two Exchange 2010 email servers, one running the CAS/Hub role and the other running the Mailbox role. They reside on the same host, as someone thought this would be a 'good idea'.

The physical host fails and it's a free-for-all for the VMs to restart; the CAS/Hub server comes up before the Mailbox server. As a result, Outlook client connectivity, OWA and ActiveSync take longer than anticipated to connect, resulting in extended downtime.

Scenario B

Same client has configured virtual machine restart priority with the following settings:

Mailbox server – High
CAS/Hub server – Medium

The VMs restart in the right order and the client has less downtime.

Best Practices

Naturally every environment is different, but as a general rule of thumb, I recommend using the following guidelines.

Exchange

– CAS/Hub – High Priority
– Mailbox – Medium Priority

Domain Controllers

– If FSMO role holder – High Priority
– If Global Catalogue – High Priority

SQL

– SQL Server – High Priority
– Applications relying on SQL e.g. BES – Medium Priority

Citrix

– Data Collector – High Priority
– Web Server – Medium Priority
– License Server – Medium Priority
– Farm Members – Low Priority (as you want everything else to be up and running before users login).

High Availability for vMotion Across Two NICs

When designing your vCenter environment, good practice is to associate two physical network adapters (uplinks) with your vMotion network for redundancy.

The question is: does VMware use both uplinks in aggregation to give you 2 Gbps of throughput in an Active/Active configuration? The answer to this is no.

In the above configuration we have two uplinks, both Active. Using the load balancing policy 'route based on originating virtual port ID' means that the VMkernel will use one of the two uplinks for vMotion traffic. The secondary active adapter will only be used if the uplink vmnic4 is no longer available.

You might say this is OK and that you're happy with this configuration; I say, how can we make it more efficient?

At the moment you will have a single Port Group in your vSwitch which is providing vMotion functionality (in my case it’s also doing Fault Tolerance Logging)

And the vSwitch has two Active Adapters.

What we are going to do is rename the Port Group vMotionFT to vMotionFT1, go into the Port Groups properties and change the NIC Teaming setting to the following:

So what have we changed and why? First of all, we have overridden the switch failover order: we have specified that vmnic4 is now unused and that we are not going to 'failback' in the event of uplink failure.

You may be thinking 'hold on Craig, why have you done this? Now we have no HA for our uplinks'. Well, the next step is to add another Port Group as follows:

  • VMkernel: Select
  • Network Label: vMotionFT2
  • Use this port group for vMotion: Select
  • Use this port group for Fault Tolerance logging: Do Not Select
  • IP Address: 192.168.231.8, Subnet Mask: 255.255.255.0

Once completed, we are now going to edit the Port Group vMotionFT2: go back into NIC Teaming, override the switch failover order and set vmnic1 to unused and failback to 'No'.
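If you would rather script these overrides than click through the GUI, the same thing can be done from the ESXi 5.x shell roughly as follows (a sketch only; the flag names can differ between builds, so check 'esxcli network vswitch standard portgroup policy failover set --help' first, and note that uplinks not listed as active or standby are treated as unused):

~ # esxcli network vswitch standard portgroup policy failover set -p vMotionFT1 -a vmnic1 -f false
~ # esxcli network vswitch standard portgroup policy failover set -p vMotionFT2 -a vmnic4 -f false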

So what have we achieved?

1. vSwitch1 has two active uplinks
2. vMotionFT1 Port Group is active and uses vmnic1 for vMotion & Fault Tolerance Logging
3. vMotionFT2 Port Group is active and uses vmnic4 for vMotion
4. We can perform two vMotions simultaneously using 1 Gbps of bandwidth each
5. If we have an uplink hardware issue, vMotion continues to work

ESXi 5 Host Isolation

What is a ‘host isolation’?

It's the term that VMware use to describe when an ESXi host is no longer able to communicate with specific IP addresses and is therefore deemed to be isolated from the rest of the cluster.

By default the ESXi host's default gateway (the VMkernel gateway) is used. Depending on your infrastructure, this is normally a Layer 3 switch, router or firewall.

What's the problem with that, you ask? Well, what happens if you have an outage of your Layer 3 switch, firewall or router? vCenter will think that your ESXi hosts are isolated and, depending on your 'host isolation response', perform one of the following actions:

– Leave powered on
– Power off
– Shut down

The recommended action for vCenter 5 is 'Leave powered on'.

We therefore need to provide more external devices for vCenter to communicate with before it invokes a host isolation response. To do this we go into Cluster Settings > vSphere HA > Advanced Options.

We then add the additional IP addresses that we want vCenter to communicate with, in the following format:

das.isolationaddress1 10.0.0.1
das.isolationaddress2 192.168.1.1

We then end the list of IP addresses by setting 'das.usedefaultisolationaddress' to 'false', so that the default gateway is no longer used as an isolation address.

What IP addresses would I recommend you use in a production environment?

– vMotion/FT switches
– SAN Controller Management IP addresses
– Layer 2 Switch
– Layer 3 Switch
– Firewall