I wanted to spend a little bit of time going over some 3PAR concepts, as this blog post won’t make a huge amount of sense without knowing them.
The basis for any storage system is a set of physical disks which provide the raw capacity. These physical disks are then placed into an enclosure (cage) and are subdivided into Chunklets. Chunklets break down each physical disk into 1GB portions, so a 146GB hard drive gets broken down into 146 Chunklets.
The Chunklets then form the basis of the Logical Disk. A Logical Disk is created from Chunklets drawn from different physical disks. The Logical Disks are then pooled together to create a Common Provisioning Group (CPG). It’s at the CPG level where you set your RAID type, which is either:
- RAID 0 (this is explicitly locked out unless you enable it)
- RAID 1
- RAID 5 (explicitly locked out on NL drives unless enabled)
- RAID 6
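As a rough illustration of how this looks from the 3PAR CLI, a CPG with a given RAID type and availability level can be created along these lines (a sketch only; the exact flags can vary by InForm OS version, so check the CLI reference):

```
# Sketch: create a CPG with RAID 6 and cage-level (HA Enclosure) availability.
# -t sets the RAID type (r1, r5, r6); -ha sets the availability level.
# The CPG name CPG_FC_R6 is just an example.
createcpg -t r6 -ha cage CPG_FC_R6
```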
Virtual Volumes are then created, which draw space from the CPG. If a Virtual Volume is thin provisioned, space can be reclaimed and returned to the CPG using tools such as sdelete at the Windows level or vmkfstools -y 60 on an ESXi host.
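The reclaim commands mentioned above look like this in practice (a sketch; sdelete is a Windows Sysinternals tool, and vmkfstools -y applies to classic ESXi versions, so verify against your platform documentation):

```
# Windows guest: zero out free space so the array can reclaim it
sdelete -z C:

# ESXi host: reclaim space on a thin-provisioned VMFS datastore,
# run from inside the datastore directory (datastore1 is an example name)
cd /vmfs/volumes/datastore1
vmkfstools -y 60
```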
Sometimes a picture speaks a thousand words.
With 3PAR StoreServ 7000 you have two availability options the first being High Availability Drive, this is the cheaper configuration option as you are protecting yourself from drive failure. The other choice is High Availability Enclosure which strips the chunklets across enclosures, so that you are protected from enclosure failure, in the same way as StoreVirtual Replicated RAID 10.
Depending on your build, the HA Enclosure option isn’t always massively more expensive, especially if you are starting with the same drive type. An example of this was when I was building a StoreServ 7200 config, which had the following requirements:
- 10TB usable space
- 7,200 IOPS
To achieve this, I used a basic config which consisted of 48 x 300GB 15K SAS HDDs, which gave 7,940 IOPS and 9.9TB of usable space. The interesting thing with this configuration was that it was only 19% more expensive to use HA Enclosure.
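A rough back-of-the-envelope check of that configuration (illustrative numbers only; the real sizing came from the HP tool, and the per-drive IOPS figure here is an assumption, not an HP spec):

```python
# Rough sizing sketch for the 48 x 300GB 15K SAS example above.
drives = 48
drive_capacity_gb = 300
iops_per_drive = 165  # assumed figure for 15K SAS, back-calculated from the quoted total

raw_tb = drives * drive_capacity_gb / 1000
total_iops = drives * iops_per_drive

print(raw_tb)      # 14.4TB raw; usable (~9.9TB) is lower after RAID overhead and sparing
print(total_iops)  # 7920, in line with the ~7,940 quoted above
```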
Now that we have covered the above, I feel that we are now in a position to cover tunesys.
So we now know that 3PAR StoreServ works on Chunklets which are striped across physical disks to make Logical Disks, but what happens if a disk fails or you lose an enclosure? How are the Chunklets reintroduced?
This is where tunesys comes in; it essentially rebalances an entire 3PAR StoreServ with a single command (a Dynamic Optimization license is required). There are three phases to tunesys:
Phase 1 – tunevv: this rebalances inter-node when a new enclosure with disks is added. The steps are:
- A new Logical Disk is created.
- Region moves are started to move the Virtual Volume’s regions to the new Logical Disk.
- The old Virtual Volume is blocked.
- Regions are switched and the Virtual Volume is now mapped to the new Logical Disk.
- The block is removed on the Virtual Volume.
- The original Logical Disk is deleted.
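The tunevv phase can also be driven manually per volume. A sketch of what that looks like (the exact syntax here is an assumption on my part; check the InForm CLI reference before use):

```
# Sketch: move a Virtual Volume's user space to another CPG,
# which triggers the region moves described in the steps above.
# CPG_FC_R5 and myvolume are example names.
tunevv usr_cpg CPG_FC_R5 myvolume
```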
Phase 2 – tunenodech: this runs when new disks are added to an existing enclosure pair. Tuning is performed per disk type, e.g. NL, SAS, SSD.
Phase 3 – tuneld: this re-lays out Logical Disks whose Chunklet layout differs from the current CPG characteristics.
The good news is that tunesys does not interfere with AO (Adaptive Optimization). A few things to note with tunesys:
- No administration is required after starting
- Can perform a dry run to see what the tuning will do to the current configuration
- Default settings should be fine for nearly all systems
- If you add more enclosures or disks, only newly created Virtual Volumes will use the new capacity/IOPS
- Tunesys can take a long time to run
- IO pauses are common during some phases
To start tunesys from the CLI you can use the command:
tunesys -nodepct <percent> -chunkpct <percent> -diskpct <percent>
- -nodepct default is 3%
- -chunkpct default is 5%
- -diskpct default is 10%
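Putting the defaults together, a typical invocation looks something like this (the -dr dry-run flag is an assumption based on the dry-run capability noted above, so verify it against your CLI reference):

```
# Dry run first to preview what tuning would do to the current configuration
tunesys -dr

# Then run for real; these values match the defaults listed above
tunesys -nodepct 3 -chunkpct 5 -diskpct 10
```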
From the 3PAR InForm Management Console, go to Provisioning > Tune System.
How does tunesys work? Well, what it does is pretty straightforward really. First of all, tunesys calculates the percentage utilization for each disk type per node. It then checks the average utilization across all nodes. If any of the nodes are more than 3% out (the default), then each Virtual Volume is checked to see if it is well balanced across nodes. If it isn’t, then tunesys does its magic and rebalances.
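The node-balance check described above can be sketched in a few lines of Python (a conceptual model of the logic, not actual 3PAR code):

```python
# Conceptual model of the tunesys node-balance check (not actual 3PAR code).
def nodes_out_of_balance(node_util_pct, threshold_pct=3.0):
    """Return the nodes whose disk utilization deviates from the
    cross-node average by more than the threshold (default 3%)."""
    avg = sum(node_util_pct.values()) / len(node_util_pct)
    return [node for node, pct in node_util_pct.items()
            if abs(pct - avg) > threshold_pct]

# Example: node1 sits 6% above the average utilization of its peers,
# so it would be flagged for rebalancing.
utilization = {"node0": 50.0, "node1": 58.0, "node2": 50.0, "node3": 50.0}
print(nodes_out_of_balance(utilization))  # ['node1']
```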
Thanks to Sheldon Smith, an HP Technical Consultant, who pointed out a couple of extra items.