This is a rather self-indulgent blog post, as I was recently asked to contribute to a TechNet news article on VMware patching.
Have a read; it might give you some tips.
NIC Teaming
The load balancing on NIC teams in ESXi is based on the number of connections (think round robin), much like NIC teaming on Windows Server 2003/2008.
Load balancing only occurs on outbound connections, i.e. traffic from VM > vSwitch > LAN.
ESXi has a number of load balancing options which are:
Route Based on the Originating Virtual Port ID
ESXi runs an algorithm to evenly balance the number of connections across multiple uplinks, e.g. 10 virtual machines residing on one vSwitch which contains two uplinks would mean that each uplink has 5 virtual machines using it.
Once a VM is assigned to an uplink by the VMkernel it continues to use it until it is vMotioned to another ESXi host or an uplink failure occurs.
Route based on the originating virtual port ID is the default setting in ESXi.
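To make the idea concrete, here is a minimal sketch (not VMware's actual code; the names and the modulo mapping are illustrative) of how assigning VMs to uplinks by virtual port number spreads them evenly:

```python
# Illustrative sketch: map each VM's virtual port number to an uplink
# with a simple modulo, so 10 VMs on 2 uplinks end up 5 per uplink.
from collections import Counter

def uplink_for_port(port_id: int, uplinks: list) -> str:
    return uplinks[port_id % len(uplinks)]

uplinks = ["vmnic0", "vmnic1"]
assignments = {f"vm{p}": uplink_for_port(p, uplinks) for p in range(10)}

counts = Counter(assignments.values())
print(counts)  # each uplink carries 5 VMs
```

The key property is that the mapping is sticky: a VM keeps its uplink until a failover or vMotion, exactly as described above.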
Route Based on Source MAC Hash
This is much like ‘route based on originating virtual port ID’, as the MAC address of a VM does not change, so it will continue to use the same connection path over and over. The only way around this is to have multiple virtual NICs (vNICs) within the VM, which produces multiple MAC addresses.
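A hedged sketch of the idea (the actual hash ESXi uses is internal; this just shows why one MAC always lands on the same uplink):

```python
# Illustrative sketch: hash a vNIC's MAC address to pick an uplink.
# One MAC -> always the same uplink; a second vNIC (second MAC) in
# the VM may land on a different uplink.

def uplink_for_mac(mac: str, uplinks: list) -> str:
    # Sum the octets as a simple deterministic "hash".
    digest = sum(int(octet, 16) for octet in mac.split(":"))
    return uplinks[digest % len(uplinks)]

uplinks = ["vmnic0", "vmnic1"]
print(uplink_for_mac("00:50:56:aa:bb:01", uplinks))
print(uplink_for_mac("00:50:56:aa:bb:01", uplinks))  # same MAC, same uplink
```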
Route Based on IP Hash
This uses the source IP and destination IP to create a hash, so connections between different source/destination pairs can take different uplink paths. Naturally, if you are transferring a large amount of data between the same two endpoints, the same path will be used until the transfer has finished.
When enabling ‘Route Based on IP Hash’ you will get an information bubble:
You need to ensure that all uplinks are connected to the same physical switch and that all port groups on the same vSwitch are configured to use ‘route based on IP hash’.
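A minimal sketch of the hashing idea (the real algorithm is ESXi-internal; the XOR-and-modulo here is purely illustrative):

```python
# Illustrative sketch: combine source and destination IP into a hash.
# Each src/dst pair sticks to one uplink, while different pairs can
# spread across uplinks.
import ipaddress

def uplink_for_flow(src: str, dst: str, uplinks: list) -> str:
    key = int(ipaddress.ip_address(src)) ^ int(ipaddress.ip_address(dst))
    return uplinks[key % len(uplinks)]

uplinks = ["vmnic0", "vmnic1"]
# The same VM talking to two different destinations may use both uplinks:
print(uplink_for_flow("10.0.0.5", "10.0.0.10", uplinks))
print(uplink_for_flow("10.0.0.5", "10.0.0.11", uplinks))
```

This is also why the physical side must be configured as a single link aggregate: the switch has to accept the same source MAC arriving on any of the teamed ports.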
Use Explicit Failover Order
This isn’t really load balancing, as the secondary active uplink (vmnic1) will only come into play if vmnic4 fails.
If you have an active and standby adapter, the same procedure applies.
All load balancing policies are set by default to notify switches. What does this actually mean? It means the ESXi host sends a notification so the physical switches can update their MAC tables when:
– a vMotion occurs
– a MAC address is changed
– a NIC team failover or failback has occurred
– a VM is powered on
Virtual Switch Security
Virtual switch security has three different elements which are:
– Promiscuous Mode, which controls whether a vNIC on the vSwitch and/or port group can see traffic which is not addressed to itself.
– MAC Address Changes, which controls whether the vSwitch and/or port group accepts incoming traffic for a guest that has altered its effective MAC address.
– Forged Transmits, which controls whether the vSwitch and/or port group accepts outgoing traffic whose source MAC address has been altered.
In all of the above configurations you have a choice to either Reject or Accept the traffic.
VMware recommends that all of these are set to reject.
However, Network Load Balancing and devices with ‘virtual IP addresses’, such as hardware load balancers, often use an algorithm that produces a shared MAC address which is different from the original source or destination MAC address, so setting Reject can cause their traffic not to pass.
If in doubt you can set all three to Accept, however I would recommend letting the Server Team know first!
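A hedged sketch of how the Forged Transmits check works (the function and names are illustrative, not VMware code): with the policy on Reject, an outbound frame is dropped if its source MAC doesn't match the MAC assigned to the vNIC.

```python
# Illustrative sketch of the "forged transmits" decision: Reject drops
# any outbound frame whose source MAC differs from the vNIC's MAC.

def allow_outbound(frame_src_mac: str, vnic_mac: str, policy: str) -> bool:
    if policy == "Accept":
        return True
    return frame_src_mac.lower() == vnic_mac.lower()

vnic_mac = "00:50:56:aa:bb:01"
print(allow_outbound(vnic_mac, vnic_mac, "Reject"))             # True
print(allow_outbound("00:50:56:ff:ff:99", vnic_mac, "Reject"))  # False
```

This is exactly why an NLB cluster's shared MAC trips the check: frames leave the vNIC with a source MAC the vSwitch didn't assign.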
Read more here
VMkernel Ports
I mentioned in ESXi Networking Part 1 that the VMkernel network carries traffic for:
– vMotion
– iSCSI
– NFS
– Fault Tolerance Logging
VMkernel ports require an IP address. You can have more than one VMkernel network if you feel this level of redundancy is appropriate in your network, or you could have a single VMkernel network carrying Management Traffic, Fault Tolerance Logging and vMotion (however I would recommend against this).
VM Ports
Virtual Machine port groups are quite different to VMkernel ports as they do not require an IP address or an uplink (physical NIC) to work. They work in exactly the same way as an unmanaged physical switch: you plug it in and off you go!
VLAN
Using VLANs within ESXi is generally a must unless you have an abundance of physical NICs (the limit is 32 per ESXi host). VLANs provide secure traffic segmentation and reduce broadcast traffic across networks.
We can have multiple port groups per uplink if required. VLANs can be configured in one of three ways:
– VM Port Group, when adding a new port group you can specify the VLAN ID in the properties of the port group (most common).
– Physical Switch, you can ‘untag’ the uplink that the VM Port Group resides on which forces it into the VLAN ID specified on the physical switch (common).
– Virtual Guest Tagging, this is when the virtual machine is responsible for VLAN tagging. From an ESXi perspective you need to use VLAN ID 4095 (uncommon).
The uplink that is connected to the physical switch must be configured as a ‘trunk port’ to enable the switch port to carry traffic from multiple VLAN’s at the same time.
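The three tagging modes can be sketched as a single decision (a simplification with illustrative names, not ESXi code): with a normal VLAN ID the vSwitch applies the tag (VST), with VLAN ID 0 tagging is left to the physical switch (EST), and with VLAN ID 4095 the guest's own tag passes through untouched (VGT).

```python
# Illustrative sketch: which VLAN tag ends up on an egress frame,
# depending on the port group's VLAN ID.

def egress_vlan_tag(port_group_vlan, guest_tag=None):
    if port_group_vlan == 4095:   # VGT: guest OS is responsible for tagging
        return guest_tag
    if port_group_vlan == 0:      # EST: the physical switch tags the traffic
        return None
    return port_group_vlan        # VST: the vSwitch applies the tag

print(egress_vlan_tag(29))         # VST port group on VLAN 29 -> tag 29
print(egress_vlan_tag(4095, 101))  # VGT passes the guest's tag through
```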
Below is an example standard vSwitch0 from my home lab; it has one uplink and three different VLANs in play.
VLAN 1 which is the default VLAN and is used by the VMKernel for Management Network purposes and also my Server2012 RC.
VLAN 2 holds my nested ESXi hosts and vCenter Virtual Appliance.
VLAN 3 holds my iSCSI Storage Area Networks.
NIC Teaming
NIC teaming is used to connect multiple uplinks to a single vSwitch commonly for redundancy and load balancing purposes.
I have seen many NIC teams created with no thought for redundancy on the network card.
Incorrect NIC Teaming
In this configuration we have no resilience for network card failure.
Correct NIC Teaming
In this configuration we have resilience for network card failure.
One of the items that becomes apparent when using VMware is that you need to have a strong understanding of routing and switching.
This blog post is a bit self-indulgent: as I’m preparing for the VCP 5 exam, I thought it would be good for me to put together a few posts on the architecture of the switches.
All of the switches within ESXi are software based and operate within the VMkernel. They are called virtual switches (vSwitches) and are Layer 2 devices, capable of passing VLAN-tagged (802.1q) traffic. A common myth is that two vSwitches can be trunked together; they cannot, and because one vSwitch can never be connected to another, vSwitches have no need for Spanning Tree Protocol.
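A minimal sketch of why this works (illustrative names, not VMware's implementation): unlike a physical switch, a vSwitch never has to learn MAC addresses from the wire, because it already knows the MAC of every vNIC attached to it. Frames for a known local MAC are delivered to that vNIC; everything else goes out the uplink, so loops cannot form.

```python
# Illustrative sketch of vSwitch forwarding: known local vNIC MACs
# are delivered locally, unknown destinations go out the uplink.

local_ports = {
    "00:50:56:aa:bb:01": "vm1",
    "00:50:56:aa:bb:02": "vm2",
}

def forward(dst_mac: str) -> str:
    return local_ports.get(dst_mac.lower(), "uplink")

print(forward("00:50:56:AA:BB:02"))  # vm2 (local delivery)
print(forward("00:0c:29:12:34:56"))  # uplink (unknown -> physical LAN)
```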
Standard Switch (vSwitch)
These are created when we first install ESXi onto our server hardware. By default this is called vSwitch0 and contains 120 visible ports (it actually holds 128 ports; 8 are reserved by the VMkernel), the first virtual machine port group called ‘VM Network’ and a Management Network which is used by the VMkernel.
Distributed Switch (dvSwitch)
These are standard switches which are logically grouped across all ESXi hosts that share a common distributed switch configuration. They are only available with Enterprise Plus licensing.
Port Groups
These reside within a vSwitch. Port groups contain two different configurations:
– VMkernel ports carry vMotion, Fault Tolerance Logging, iSCSI and NFS traffic between ESXi hosts, as well as allowing management of the ESXi host they reside on.
– VM ports allow virtual machines to access other virtual machines or network-based resources.
The key thing to remember with port groups is that they must be named exactly the same across all ESXi hosts to allow traffic to flow.
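A trivial consistency check makes the point (the host and port group names here are invented for illustration): port group names are matched as exact strings, so even a case difference breaks vMotion.

```python
# Illustrative sketch: flag port group names that don't exist
# identically on every host. Names are matched exactly, so
# "vMotion" and "vmotion" are different port groups.

hosts = {
    "esxi01": {"Management Network", "VM Network", "vMotion"},
    "esxi02": {"Management Network", "VM Network", "vmotion"},  # case mismatch!
}

common = set.intersection(*hosts.values())
mismatched = set.union(*hosts.values()) - common
print(mismatched)  # the two differing names - vMotion between these hosts will fail
```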
Note, it is possible to have a vSwitch without any Port Groups, however this would be like having a physical switch without any physical ports!
Uplinks (pNIC)
An uplink is the physical network adapter that the vSwitch is connected to. Without one, the virtual machines that reside on the vSwitch would be isolated and unable to communicate with the rest of the network.
In the picture below we have a standard switch called vSwitch1 whose physical uplink (pNIC) is vmnic4. It contains two different port groups, one for vMotion and Fault Tolerance Logging and the other for VMs on VLAN29.
Even though we have two different port groups, it is important to remember that each port group is a boundary for communications, broadcasts and security policies.