VMTurbo are giving away three chances to go to VMworld Europe.
All you need to do is enter for free here and check the three dates on which the tickets will be drawn:
- 21st August
- 28th August
- 4th September
Good luck, see you in Barcelona.
This is another quick blog post to help anyone using the Cisco UCS Platform Emulator who is configuring a chassis or blade and runs into the error message ‘slot ID must be a range in the format [a-b]’.
The [a-b] is misleading; however, you do need to enter the details in a specific format for them to be accepted by the Cisco UCS Platform Emulator.
The easiest way to do this is to jump onto Cisco UCS Manager and expand the component you are trying to add and check the properties of the item.
In the above case, I’m trying to add fans to a 5108 Chassis. So I have drilled down to the component and checked the format of the ID and Tray.
The format to enter a single fan is 1-1, or if you want to populate all 8 at once, enter 1-8.
To give another example, I want to configure a Cisco UCS B200 M4 Blade and add Intel(R) Xeon(R) E5-2650 v3 2.30 GHz 105W 10C/25MB Cache/DDR4 2133MHz processors. The format for this is 1-2, which represents the CPU slots.
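The accepted format boils down to a pair of slot numbers separated by a dash. A minimal sketch of that validation logic, for anyone scripting against the emulator (the bounds check against a maximum slot count is my assumption for illustration, not Cisco's actual parser):

```python
import re

def parse_slot_range(value, max_slots):
    """Parse a slot ID entered in the 'a-b' range format (e.g. '1-1' or '1-8').

    Illustrative only: the max_slots bounds check is an assumption,
    not Cisco's actual validation logic.
    """
    match = re.fullmatch(r"(\d+)-(\d+)", value.strip())
    if not match:
        raise ValueError(f"slot ID must be a range in the format [a-b], got {value!r}")
    a, b = int(match.group(1)), int(match.group(2))
    if not (1 <= a <= b <= max_slots):
        raise ValueError(f"range {a}-{b} outside valid slots 1-{max_slots}")
    return list(range(a, b + 1))

# A single fan goes in as '1-1'; all eight fan slots as '1-8'.
print(parse_slot_range("1-1", 8))  # [1]
print(parse_slot_range("1-8", 8))  # [1, 2, 3, 4, 5, 6, 7, 8]
```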
This is a quick blog post to help anyone who is setting up and configuring Cisco UCS Platform Emulator for the first time and is staring blankly at a screen wondering why they have no ability to launch UCS Manager.
Connecting to the VIP address of the Cisco UCS Platform Emulator, you are unable to launch UCS Manager.
The Cisco UCS Platform Emulator serves a mixture of secure and insecure content, which Firefox blocks by default.
Select the ‘shield icon’ in Firefox
Select ‘Disable Protection Now’
Refresh and you can now Launch Cisco UCS Manager
The purpose of this post is to explain my understanding of the different networking options within Azure. It is meant to be an overview, not a deep dive into each area. If you notice anything incorrect, feel free to leave a comment and I will update this post.
Endpoints are the most basic configuration offering when it comes to Azure networking. Each virtual machine is externally accessible over the internet using RDP and Remote PowerShell, with port forwarding used to reach the VM. For example, azure.vmfocus.com resolves to 126.96.36.199; a connection to the VIP on port 6510 is then forwarded to an internal VM on 10.0.0.1:3389
- Public IP Address (VIP) is mapped to the Cloud Service Name e.g. azure.vmfocus.com
- The port forward can be changed if required and additional services can be opened or the defaults of RDP and Remote PowerShell can be closed
- It is important to note that the public IP is completely open and the only security offered is password authentication into the virtual machine
- Each virtual machine has to have an exclusive public port mapped (see diagram below)
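The VIP-to-VM mapping described above can be sketched as a simple lookup table: one shared public address, with each VM claiming its own unique public port that forwards to the internal RDP port. The ports and internal IPs below are hypothetical examples, not real Azure defaults:

```python
# Hypothetical endpoint table for one cloud service: a single shared VIP,
# each VM reachable via its own exclusive public port forwarded to RDP.
ENDPOINTS = {
    # public port on the VIP -> (internal VM IP, internal port)
    56510: ("10.0.0.1", 3389),  # VM1 RDP
    56511: ("10.0.0.2", 3389),  # VM2 RDP
}

def resolve_endpoint(public_port):
    """Return the internal (ip, port) a connection to the VIP is forwarded to."""
    try:
        return ENDPOINTS[public_port]
    except KeyError:
        raise KeyError(f"no endpoint published on public port {public_port}")

print(resolve_endpoint(56510))  # ('10.0.0.1', 3389)
```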
Endpoint Access Control Lists
To provide some mitigation against having virtual machines completely exposed to the internet, you can define a basic access control list (ACL). The ACL is based on the source public IP address, with a permit or deny to a virtual machine.
- Maximum of 50 rules per virtual machine
- Processing order is from top down
- Suggested configuration would be to white list on-premises external public IP address
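The top-down, first-match behaviour of these ACLs can be sketched as follows. The rule tuples and the fall-through default are assumptions for illustration, not Azure's exact schema:

```python
import ipaddress

# Illustrative ACL: rules are processed top down and the first matching
# source prefix wins, mirroring the suggested white-list configuration.
ACL = [
    ("permit", "203.0.113.0/24"),   # white-list the on-premises public range
    ("deny",   "0.0.0.0/0"),        # deny everything else
]

def is_allowed(source_ip, acl=ACL):
    addr = ipaddress.ip_address(source_ip)
    for action, prefix in acl[:50]:          # maximum of 50 rules per VM
        if addr in ipaddress.ip_network(prefix):
            return action == "permit"
    return True  # assumption: with no matching rule the endpoint stays open

print(is_allowed("203.0.113.10"))  # True  (on-premises range)
print(is_allowed("198.51.100.7"))  # False (caught by the deny-all rule)
```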
Load Balanced Endpoints
Multiple virtual machines are given the same public port, for example 80. Azure load balancing then distributes traffic using round robin.
- Health probes can be used every 15 seconds on a private internal port to ensure the service is running.
- The health probe uses a TCP ACK for TCP queries
- The health probe can use an HTTP 200 response for HTTP queries
- If a probe fails twice, traffic to the virtual machine stops. However, the probe continues to ‘beacon’ the virtual machine, and once a response is received it is re-entered into round-robin load balancing
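The probe behaviour above can be sketched as a two-strike counter per VM. This is a minimal illustration of the logic as described, with the probe transport (TCP ACK / HTTP 200) abstracted into a boolean:

```python
class LoadBalancedEndpoint:
    """Sketch of the described probe behaviour: a VM is pulled out of the
    round-robin rotation after two consecutive probe failures and re-enters
    as soon as a probe succeeds again."""

    FAILURE_THRESHOLD = 2

    def __init__(self, vms):
        self.failures = {vm: 0 for vm in vms}

    def record_probe(self, vm, healthy):
        # The probe keeps 'beaconing' unhealthy VMs, so a success resets them.
        self.failures[vm] = 0 if healthy else self.failures[vm] + 1

    def healthy_vms(self):
        return [vm for vm, f in self.failures.items() if f < self.FAILURE_THRESHOLD]

lb = LoadBalancedEndpoint(["vm1", "vm2", "vm3"])
lb.record_probe("vm2", False)
lb.record_probe("vm2", False)   # second consecutive failure: removed
print(lb.healthy_vms())         # ['vm1', 'vm3']
lb.record_probe("vm2", True)    # responds again: re-enters rotation
print(lb.healthy_vms())         # ['vm1', 'vm2', 'vm3']
```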
Virtual networks enable you to create secure, isolated networks within Azure and maintain persistent IP addresses; they are used for virtual machines which require static IP addresses.
- Enables you to extend your trust boundary to federate services whether this is Active Directory Replication using AD Connect or Hybrid Cloud connections
- Can perform internal load balancing within a virtual network, using the same principle as load-balanced endpoints.
This is probably the most interesting part for me, as this provides the connectivity from your on-premises infrastructure to Azure.
Point to Site
Point to site uses certificate-based authentication to create a VPN tunnel from a client machine to Azure.
- Maximum of 128 client machines per Azure Gateway
- Maximum bandwidth of 80 Mbps
- Data is sent over an encrypted tunnel via certificate authentication on each individual client machine
- No performance commitment from Microsoft (makes sense as they don’t control the internet)
- Once created certificates could be deployed to domain joined client devices using group policy
- Machine authentication not user authentication
Site to Site
Site to site sends data over an encrypted IPSec tunnel.
- Requires a public IP address as the source tunnel endpoint and a physical or virtual device that supports IPSec with the following:
- IKE v1 or v2
- AES 128 or 256
- SHA1 or SHA2
- Microsoft keeps a known compatible device list located here
- Requires manual addition of new virtual networks and on-premises networks
- Again, no performance commitment from Microsoft
- Maximum bandwidth of 80 Mbps
- The gateway roles in Azure have two instances active/passive for redundancy and an SLA of 99.9%
- Can use RRAS if you feel that way inclined to create the IPSec tunnel
- Certain devices have automatic configuration scripts generated in Azure
Express Route
A dedicated route is created either via an exchange provider or a network service provider using a private dedicated network.
- Bandwidth options range from 10 Mbps to 10 Gbps
- Committed bandwidth and SLA of 99.99%
- Predictable network performance
- BGP is the routing protocol used with ‘private peering’
- Not limited to VM traffic; Azure public services can also be sent across Express Route
- Exchange Providers
- Provide datacenters in which they connect your rack to Azure
- Provide unlimited inbound data transfer as part of the exchange provider package
- Outbound data transfer is included in the monthly exchange provider package but will be limited
- Network Service Provider
- Customers who use MPLS providers such as BT & AT&T can add Azure as another ‘site’ on their MPLS circuit
- Unlimited data transfer in and out of Azure
Traffic Manager is a DNS-based load balancer that offers three load-balancing algorithms:
- Performance
- Traffic Manager makes the decision on the best route for the client to the service it is trying to access, based on hops and latency
- Round Robin
- Alternates between a number of different locations
- Failover
- Traffic always hits your chosen datacentre unless there is a failover scenario
Traffic Manager relies on mapping your DNS domain to x.trafficmanager.net with a CNAME, e.g. vmfocus.com to vmfocustm.trafficmanager.net. Cloud service URLs in global datacentres are then mapped to the Traffic Manager profile, e.g. east.vmfocus.com, west.vmfocus.com, north.vmfocus.com
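The DNS indirection above can be sketched as two lookups: follow the CNAME to the Traffic Manager name, then let the routing method pick an endpoint from the profile. The latency figures are made-up numbers standing in for the performance algorithm's measurements:

```python
# Sketch of Traffic Manager's DNS indirection. The profile entries and
# latency values below are hypothetical examples.
CNAME = {"vmfocus.com": "vmfocustm.trafficmanager.net"}

# Endpoints registered in the Traffic Manager profile, with measured
# latency (ms) standing in for the 'performance' routing decision.
PROFILE = {
    "east.vmfocus.com":  45,
    "west.vmfocus.com":  120,
    "north.vmfocus.com": 80,
}

def resolve(domain, profile=PROFILE):
    """Follow the CNAME, then pick the lowest-latency endpoint (performance
    routing); round robin and failover would choose differently."""
    tm_name = CNAME[domain]
    endpoint = min(profile, key=profile.get)
    return tm_name, endpoint

print(resolve("vmfocus.com"))  # ('vmfocustm.trafficmanager.net', 'east.vmfocus.com')
```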
HP released two offerings of the HP ConvergedSystem 200-HC StoreVirtual System last year. Essentially, they have taken ESXi, HP StoreVirtual VSA and OneView for vCenter, and automated the setup process using OneView Instant On.
Two models are available which are:
- HP CS 240-HC StoreVirtual System, this has 4 nodes each with:
- 2 x Intel E5-2640v2 2.2GHz 8 Core Processor
- 128GB RAM
- 2GB Flash Backed HP Smart Array P430 Controller
- 2 x 10GbE Network Connectivity
- 1 x iLO4 Management
- 6 x SAS 1.2TB 10K SFF Hard Drives
- Around 11TB of usable capacity
- HP CS 242-HC StoreVirtual System, this has 4 nodes each with:
- 2 x Intel E5-2648v2 2.2GHz 10 Core Processor
- 256GB RAM
- 2GB Flash Backed HP Smart Array P430 Controller
- 2 x 10GbE Network Connectivity
- 1 x iLO4 Management
- 4 x SAS 1.2TB 10K SFF Hard Drives
- 2 x 400GB Mainstream Endurance SSD
- Around 7.5TB of usable capacity
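As a rough cross-check of the quoted usable figures, the arithmetic below assumes a local RAID 5 layout per node (one drive of parity on the P430) mirrored across the cluster with StoreVirtual Network RAID 10. These layout assumptions are mine, not HP's published sizing method, so treat the result as an approximation:

```python
def usable_tb(nodes, drives_per_node, drive_tb):
    """Approximate usable capacity: RAID 5 per node (assumed), then the
    StoreVirtual Network RAID 10 mirror halves the cluster total."""
    per_node_raid5 = (drives_per_node - 1) * drive_tb  # one drive of parity
    raw_cluster = nodes * per_node_raid5
    return raw_cluster / 2                             # Network RAID 10 mirror

print(round(usable_tb(4, 6, 1.2), 1))  # CS 240-HC: 12.0, near the ~11TB quoted
print(round(usable_tb(4, 4, 1.2), 1))  # CS 242-HC: 7.2, near the ~7.5TB quoted
```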
These are marketed with the ability to provision virtual machines within 30 minutes.
What Does Provision Virtual Machines Within 30 Minutes Really Mean?
To answer this question you need to understand what HP have saved you from doing, which is:
- Installing ESXi across 4 x Hosts
- Installing vCenter to a basic configuration
- Installing HP StoreVirtual VSA to a basic configuration across 4 x Hosts
- Pre-installed Management VM running Windows Server 2012 Standard that has OneView for vCenter and CMC for StoreVirtual Management
So after completing the initial setup, you do have the ability to upload an ISO and start deploying an OS image.
What About The Stuff Which Marketing Don’t Mention? AKA Questions Answered?
- SQL Express is used as the database (a local instance on the Management VM). I have real concerns around the database if logging levels are increased to troubleshoot issues and/or the customer doesn’t perform any kind of database maintenance
- I’m waiting on confirmation from HP as to whether you can migrate the SQL database instance to a full-blown version
- Grey area; these can be used, however HP would rather you stay with the base configuration of the nodes (much like the networking, see below).
- The solution is only supported using Enterprise or Enterprise Plus VMware licenses, with the preference being HP OEM.
- Windows Server 2012 Standard is supplied as the Management VM. Initially, this runs from a local partition and is then Storage vMotioned onto the HP Converged Cluster. Windows licensing dictates that when an OS is moved across hosts using Standard Edition, you cannot move the OS back for 90 days, or you need to license each node for the potential number of VMs that could be run.
- HP have confirmed that you receive 2 x Windows Server 2012 Standard licenses and DRS Groups Manager rules are configured to only allow the Management VM to migrate between these two ESXi Hosts.
- You are able to upgrade the Management Server VM in terms of RAM, CPU and Disk Space and be supported.
- You cannot add additional components to the Management Server VM and be supported e.g. VUM, vCenter SysLog Service
- I’m waiting on confirmation from HP around what is and isn’t supported; I would err on the side of caution and not install anything extra
- The 1GbE connections are not used apart from the initial configuration of the Management Server. My understanding is that these are not supported for any other use.
- HP prefer you to stay with the standard network configuration, which causes me concern: a 10GbE network providing Management, iSCSI, Virtual Machine and vMotion traffic. How do you control vMotion bandwidth usage on a Standard vSwitch? You can’t; a Distributed vSwitch is a much better option, but if you need to reconfigure a node, you will need to perform a vSS to vDS migration
- You can upgrade individual components separately, however you must stay within the HP Storage SPOCK for the Converged System 200-HC StoreVirtual (Note a HP Passport login is required)
At the time of this post, the latest supported versions are as follows:
- vSphere 5.5 U2, no vSphere 6
- vCenter 5.5 U2
- HP StoreVirtual VSA 11.5 or 12.0
- HP OneView for vCenter Storage/Server Modules 7.4.2 or 7.4.4
- HP OneView Instant On 1.0 or 1.0.1
- PowerCLI 5.8 R1
HP have put together a slick product which automates the initial installation of ESXi and gives you a basic configuration of vCenter. What it doesn’t give you is a design confirming that your workloads are suitable for the environment, or a solution that meets a client’s requirements.