Lab Manager to vCloud Director Import Script

One major challenge when migrating from Lab Manager to vCloud Director is getting all of your VM Templates, Library entries, and needed configurations migrated across.  When I first looked at this, there was a partially scripted process using Perl code that was long and very hard to understand.  At that point I found it better to do the process manually, as I'm no good with Perl and was still trying to get VCD down.

Once PowerCLI released some VCD cmdlets we revisited the process and, through lots of searching, digging through extension data, multiple posts on the communities, etc., we were able to automate the import process.  It was long and ugly, and it failed a lot of the time, so we didn't use it.

We have migrated a lot of departments to VCD but still have a few more to go, so I thought I'd refresh my script to see if it would work better.  I was thrilled to see the improvements in the PowerCLI cmdlets for VCD.  I was able to cut almost 100 lines of code from the script, and it now works properly, so I thought I'd share a few changes.

The first change is thin provisioning the VMs.  This script was used to find the datastore with the most free space:

$datastores = Get-Datastore
$largestFree = 0
$dsView = $null
foreach ($datastore in $datastores) {
    if ($datastore.FreeSpaceMB -gt $largestFree) {
        $largestFree = $datastore.FreeSpaceMB
        # Keep the datastore object itself, not just its name,
        # so its managed object reference is available later
        $dsView = $datastore
    }
}

The updated code saves eight lines:

$dsView = Get-Datastore PD* | Sort-Object FreeSpaceGB -Descending | Select-Object -First 1

Once you have the datastore ($dsView) you want to Storage vMotion to:

Previous script:

Get-VM $vm2Import | % {
    $vmview = $_ | Get-View -Property Name
    $spec = New-Object VMware.Vim.VirtualMachineRelocateSpec
    # The relocate spec needs the datastore's managed object reference
    $spec.Datastore = $dsView.ExtensionData.MoRef
    $spec.Transform = "flat"
    $vmview.RelocateVM($spec, $null)
}

With PowerCLI 5.1:

Get-VM $vm2Import | Move-VM -Datastore $dsView -DiskStorageFormat Thin -Confirm:$false

As you can see, this saved six lines of code. That doesn't sound like a lot, but it all adds up.

One of the major changes is the internal vApp network creation. The vApps all use internal NAT-routed vApp networks.  Using some code from Clint Kitson's Unofficial VMware vCD Cmdlets, the following code was used to create the internal routed network:

$orgNetworkName = "importOrgNetwork"
$orgNetwork = (Get-Org "importOrg").extensiondata.networks.network | where {$_.name -eq $orgNetworkName}
$mynetwork = new-object vmware.vimautomation.cloud.views.vappnetworkconfiguration
$mynetwork.networkName = "vAppInternalNetwork"
$mynetwork.configuration = new-object vmware.vimautomation.cloud.views.networkconfiguration
$mynetwork.configuration.fencemode = "natRouted"
$mynetwork.Configuration.ParentNetwork = New-Object vmware.vimautomation.cloud.views.reference
$mynetwork.Configuration.ParentNetwork.Href = $orgNetwork.href
$mynetwork.Configuration.IpScope = new-object vmware.vimautomation.cloud.views.ipscope
$mynetwork.Configuration.IpScope.gateway = "192.168.10.1"
$mynetwork.Configuration.IpScope.Netmask = "255.255.255.0"
$mynetwork.Configuration.IpScope.Dns1 = "192.168.1.10"
$mynetwork.Configuration.IpScope.ipranges = new-object vmware.vimautomation.cloud.views.ipranges
$mynetwork.Configuration.IpScope.IpRanges.IpRange = new-object vmware.vimautomation.cloud.views.iprange
$mynetwork.Configuration.IpScope.IpRanges.IpRange[0].startaddress = "192.168.10.10"
$mynetwork.Configuration.IpScope.IpRanges.IpRange[0].endaddress = "192.168.10.15"
$mynetwork.Configuration.features += new-object vmware.vimautomation.cloud.views.firewallservice
$mynetwork.Configuration.features[0].isenabled = $false
$networkConfigSection = (Get-Org "importOrg" | get-civapp "importVappName").ExtensionData.GetNetworkConfigSection()
$networkConfigSection.networkconfig += $mynetwork
$networkConfigSection.updateserverdata()

With PowerCLI this was cut down to one line:

New-CIVAppNetwork -VApp $newVappName -ParentOrgNetwork ($orgInto+"-OrgNetwork") -Name 'vAppInternalNetwork' -PrimaryDns '192.168.1.10' -Routed -Gateway '192.168.10.1' -Netmask '255.255.255.0' -DisableFirewall -StaticIPPool "192.168.10.10 - 192.168.10.15"

That's a 20-line savings!!!
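Put together, the core of the refreshed import flow is only a few lines. This is just a sketch, assuming active Connect-VIServer and Connect-CIServer sessions and using the variables from the snippets above ($vm2Import, $newVappName, $orgInto):

```powershell
# Sketch of the shortened import flow; assumes you are already connected
# to vCenter (Connect-VIServer) and vCloud Director (Connect-CIServer).

# Pick the PD* datastore with the most free space
$dsView = Get-Datastore PD* | Sort-Object FreeSpaceGB -Descending | Select-Object -First 1

# Storage vMotion the imported VM there, thin provisioned
Get-VM $vm2Import | Move-VM -Datastore $dsView -DiskStorageFormat Thin -Confirm:$false

# Create the internal NAT-routed vApp network
New-CIVAppNetwork -VApp $newVappName -ParentOrgNetwork ($orgInto + "-OrgNetwork") `
    -Name 'vAppInternalNetwork' -PrimaryDns '192.168.1.10' -Routed `
    -Gateway '192.168.10.1' -Netmask '255.255.255.0' -DisableFirewall `
    -StaticIPPool "192.168.10.10 - 192.168.10.15"
```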

Once a script works I don't usually revisit it unless there's an issue with it, but these three examples saved 34 lines of code, and even more if you count comment lines that are no longer needed. I may have to go back through some other scripts and see what I can do with them.

Quickly Set Up a New Nimble Array via CLI

I've been very impressed with Nimble Storage since using a CS220 for a project last May and running performance tests against it.  We purchased a CS240G shortly after.  If you haven't checked out what they're doing with SSD, I highly recommend you do.

We just purchased a CS220G to complement our CS240G, and I decided to set it up via CLI this time, though the GUI is straightforward as well.  This can all be found in the Command Line Reference document on Nimble's support portal.

When setting up the CS240G it only took 10 minutes to get through the base GUI setup, log into the console, finish all the other settings (SMTP, notifications, etc.), create CHAP accounts, and create a couple of volumes.  When I was ready to migrate to this array, though, I needed to create 26 new volumes and wanted to do it quickly.  Looking this process up in the CLI document led me to create the following setup script for the new array.

If you want the full script it can be downloaded here: NimbleSetupScript

After powering on the unit there is no IP set, so connect via the serial cable.  The default account at the login prompt is admin/admin.

To change the admin password:

useradmin --passwd changeMe

Basic setup of the array.  Each consecutive --subnet / --subnet_type pair increments to the next ETH/TG port.

setup --name CS220G-01 --domainname localdomain.local --dnsserver dns.localdomain.local --ntpserver pool.ntp.org --timezone America/Chicago --subnet 192.168.1.1/255.255.255.0 --subnet_type management --subnet 192.168.1.1/255.255.255.0 --subnet_type management --subnet 192.168.1.1/255.255.255.0 --subnet_type data --subnet 192.168.1.1/255.255.255.0 --subnet_type data --array_ipaddr 192.168.1.10 --support_ipaddr 192.168.1.11 --support_ipaddr 192.168.1.12 --default_gateway 192.168.1.1 --discovery_ipaddr 192.168.10.13 --data_ipaddr 192.168.10.14 --data_ipaddr 192.168.10.15

The rest of the settings:

array --edit --autosupport yes --support_tunnel no --proxyserver proxy.localdomain.local --proxyport 1234 --proxyuser User123 --proxypasswd changeMe --smtpserver smtpRelay.localdomain.local --fromeaddr array@localdomain.local --toeaddr storageAdmin@localdomain.local --sendcopy no --snmp_trap_enabled yes --snmp_trap_host snmpServer.localdomain.local --snmp_trap_port 1234

Register with vCenter Server.  Note: --serial must be the serial number of the array.

vmwplugin --register --username ssoVCenterAdmin --password ssoVCpassword --server 192.168.1.50 --serial AA-100000

Set up replication (this must be done on the replication partner as well):

partner --create CS240G-01 --description "vCloud Array 01" --password changeMe --hostname 192.168.10.20

Create a volume collection (if doing snapshots or replication) and a snapshot schedule:

volcoll --create VMwareVolumes --app_sync none
volcoll --addsched VMwareVolumes --schedule VMwareVolumesSnapSchedule --repeat 1 --repeat_unit days --retain 12 --snap_verify no

Create CHAP accounts:

chapuser --create ChapUser1 --password changeMe

Create a volume and associate it with the volume collection (ChapUser2 below is a second CHAP account, created the same way as above):

vol --create Volume01 --size 819200 --description "VM Volume" --perfpolicy "VMware ESX 5" --chapuser ChapUser1 --multi_initiator yes
vol --addacl Volume01 --apply_acl_to both --chapuser ChapUser2
vol --assoc Volume01 --volcoll VMwareVolumes

One gotcha I came across is special characters in names or passwords.  You must wrap those values in single quotes, e.g. --password 'change&Me!'.

Since the array CLI runs on Linux, you can use for loops to create multiple volumes that follow the same pattern, making for a smaller script.  Either way, this is a lot faster than using the GUI, especially if you do this often or want to automate creation of new volumes!!!
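As a sketch of that idea, the loop below generates the vol --create and vol --assoc commands for a batch of volumes following the pattern used earlier. The names, count, and size here are placeholders; the commands are only printed so you can review them before pasting them into the array CLI:

```shell
# Build the volume-creation commands for three volumes (Volume01..Volume03).
# Adjust the loop range, size, and options to match your environment.
cmds=""
for i in 1 2 3; do
    name=$(printf 'Volume%02d' "$i")
    cmds="$cmds
vol --create $name --size 819200 --description 'VM Volume' --perfpolicy 'VMware ESX 5' --chapuser ChapUser1 --multi_initiator yes
vol --assoc $name --volcoll VMwareVolumes"
done
# Print the generated commands for review instead of running them
printf '%s\n' "$cmds"
```

Run on a workstation, this just prints the command list; on the array itself you could drop the variable and run the commands directly inside the loop.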