Upgrading Cache in Nimble Array

One great feature of Nimble arrays is being able to upgrade resources without downtime.  In one of our arrays the cache hit ratio was consistently below 80%, and Nimble's InfoSight statistics showed we needed at least 10% more cache than we currently had to keep up with all activity.  Fortunately, we had budgeted for this upgrade to make the unit a CS220G-X2 array (upgrading from 320 GB of SSD to 640 GB).

The upgrade process is very simple.  Once the upgrade kit has been purchased and arrives on site, the drives need to be replaced one at a time.  I'm usually full speed ahead on stuff like this, but I had my reservations about the process.  I know all of the data held in cache is written to the SATA drives as well, but there's always a part of me thinking the system will come crashing down.

While watching the console of the array I removed the SSD from slot 7 and replaced it with a new 160 GB drive.  After refreshing the Management/Array page a couple of times and not seeing the new drive, I took a look at the Events page.  It took just over 2 minutes for the system to see the new drive and then display “The CS-Model has changed. The model is now CS220G-X2.”  The Array page now showed the correct drive in slot 7 and the additional 80 GB of cache for the system.  I waited 3 minutes before replacing the drive in slot 8, and it showed up within a minute on the Array page.

In doing research on Nimble arrays and passing their SE Certification exam, I knew data goes to cache only on reads/writes, with only a pre-fetch of the blocks needed to complete current read requests.  This means the new drives didn't automatically “rebuild” or re-populate the data that was on the old drives.  It also means there's a substantial performance hit if all 4 drives are replaced within a short period of time.  I browsed to the Performance tab to see how bad it was.  The first drive was replaced at 3:00 PM (which is a slow time for this array).  You can see how much data had to be pulled from SATA (a lower cache hit ratio means more data pulled from SATA).  I took the following screenshots to show the next 5 hours and then later in the morning when the cache hit ratio finally went up.

[Screenshots: CacheHit01, CacheHit02, CacheHit03 (cache hit ratio over the following hours)]

After seeing the very low levels, and watching the activity lights on each drive, I decided to wait until the next morning to replace the last two drives.  This gave a couple of disk-intensive operations that run overnight a chance to land in cache, and anything that wasn't already cached would (hopefully) be added to the new drives.  I then replaced the last two drives the next morning and saw the same cache hit behavior as above.  Also, since these drives are not in any form of RAID with each other, they don't have to be replaced at the same time.

Depending on the load of the system, it can take a few days for the cache hit ratio to stay above the desired 80% threshold.

Quickly Set Up a New Nimble Array via CLI

I’ve been very impressed with Nimble Storage since using a CS220 for a project last May and running performance tests against it.  We purchased a CS240G shortly after.  If you haven’t checked out what they’re doing with SSD, I highly recommend you do.

We just purchased a CS220G to complement our CS240G, and I decided to set it up via the CLI this time, though the GUI is straightforward as well.  This can all be found in the Command Line Reference document on Nimble’s support portal.

When setting up the CS240G it only took 10 minutes to get through the base GUI setup, log into the console, finish all the other settings (SMTP, notifications, etc.), create CHAP accounts, and create a couple of volumes.  When I was ready to migrate to this array, though, I needed to create 26 new volumes and wanted to do it quickly.  Looking this process up in the CLI document led me to create the following setup script for the new array.

If you want the full script, it can be downloaded here: NimbleSetupScript

After powering on the unit there is no IP address set, so connect via the serial cable.  The default account at the login prompt is admin/admin.

To change the admin password:

useradmin --passwd changeMe

Basic setup of the array.  Each consecutive "--subnet", "--subnet_type" pair applies to the next ETH/TG port:

setup --name CS220G-01 --domainname localdomain.local --dnsserver dns.localdomain.local --ntpserver pool.ntp.org --timezone America/Chicago --subnet 192.168.1.1/255.255.255.0 --subnet_type management --subnet 192.168.1.1/255.255.255.0 --subnet_type management --subnet 192.168.1.1/255.255.255.0 --subnet_type data --subnet 192.168.1.1/255.255.255.0 --subnet_type data --array_ipaddr 192.168.1.10 --support_ipaddr 192.168.1.11 --support_ipaddr 192.168.1.12 --default_gateway 192.168.1.1 --discovery_ipaddr 192.168.10.13 --data_ipaddr 192.168.10.14 --data_ipaddr 192.168.10.15

The rest of the settings:

array --edit --autosupport yes --support_tunnel no --proxyserver proxy.localdomain.local --proxyport 1234 --proxyuser User123 --proxypasswd changeMe --smtpserver smtpRelay.localdomain.local --fromeaddr array@localdomain.local --toeaddr storageAdmin@localdomain.local --sendcopy no --snmp_trap_enabled yes --snmp_trap_host snmpServer.localdomain.local --snmp_trap_port 1234

Register with vCenter Server (--serial must be the serial number of the array):

vmwplugin --register --username ssoVCenterAdmin --password ssoVCpassword --server 192.168.1.50 --serial AA-100000

Set up replication (this must be done on the replication partner as well):

partner --create CS240G-01 --description "vCloud Array 01" --password changeMe --hostname 192.168.10.20
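
On the replication partner, the matching command points back at this new array.  A rough sketch only (the description is a placeholder I made up, the hostname reuses the discovery IP from the setup command above, and the password is presumably the shared secret that has to match on both sides):

partner --create CS220G-01 --description "vCloud Array 02" --password changeMe --hostname 192.168.10.13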

Create a volume collection (if doing snapshots or replication) and a snapshot schedule:

volcoll --create VMwareVolumes --app_sync none
volcoll --addsched VMwareVolumes --schedule VMwareVolumesSnapSchedule --repeat 1 --repeat_unit days --retain 12 --snap_verify no

Create CHAP accounts:

chapuser --create ChapUser1 --password changeMe

Create a volume and associate it with the volume collection:

vol --create Volume01 --size 819200 --description "VM Volume" --perfpolicy "VMware ESX 5" --chapuser ChapUser1 --multi_initiator yes
vol --addacl Volume01 --apply_acl_to both --chapuser ChapUser2
vol --assoc Volume01 --volcoll VMwareVolumes

One gotcha I came across: if names or passwords contain special characters, you must wrap them in single quotes (').
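
For example, creating a CHAP user whose password contains special characters would look something like this (the user name and password here are just placeholders to show the quoting):

chapuser --create ChapUser2 --password 'P@ssw0rd!23'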

Since this is all done in a Linux shell, you can use for loops to create multiple volumes that follow the same pattern, which keeps the script small.  Either way this is a lot faster than using the GUI, especially if you do this often or want to automate the creation of new volumes!
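
As a rough sketch (assuming the array's shell behaves like standard Linux and has seq available; the volume names, size, and CHAP user are the same placeholders used above), a loop to create and associate 26 volumes could look like:

# Placeholder loop: creates Volume01 through Volume26 with identical settings
# and adds each one to the VMwareVolumes collection
for i in $(seq -w 1 26); do
  vol --create Volume$i --size 819200 --description "VM Volume" --perfpolicy "VMware ESX 5" --chapuser ChapUser1 --multi_initiator yes
  vol --assoc Volume$i --volcoll VMwareVolumes
done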
