2014 vExpert Applications

Applications for the 2014 vExpert awards are now being accepted.  For those unfamiliar with the program, it is VMware's recognition of the work people have done within the VMware community: blogging, presenting, training, or coming up with new ways to use VMware technology.

A few people have complained that the award is watered down now that the number of recipients has grown to over 500, and that it's not worth applying for.  I disagree for multiple reasons.  The VMware portfolio has grown, and the recipient list should grow with it.  Also, a lot of people didn't even know about the program until recent years, myself included.  I had no idea what it was even though I was already doing enough to earn the award.  I finally found out about it once I got more involved with the Kansas City VMware User Group.  All of the leaders were vExperts, as were a couple of the other attendees.  I finally had to ask, and then decided to apply.

Receiving the award isn't about the benefits; it's about what you do for the community.  The benefits don't hurt though, especially if you have a home lab and no “access” to license keys for it.  Yes, there are cool glasses, Raspberry Pis, shirts, etc. provided by hardware and software vendors, but getting license keys to run the latest releases in my lab is incredibly helpful.  You also get an early look at some of what will be released, and can sometimes even participate in a beta.

For all of you who think you can't be a vExpert: you never will be if you don't apply.

Read more about the process and apply here.


VCD & Storage Clusters with Fast Provisioning Enabled

There are many different design options available when deploying vCloud Director, which makes it both flexible and confusing at the same time.  One topic I wanted to touch on is configuring storage for an Organization with Fast Provisioning enabled.  In setting up an environment for software development, the provisioning speed and repetitive build requirements made fast provisioning a must.  I tested multiple setups; each had issues, but one was a clear winner.

More background on system/process requirements:

–  1 Organization
–  1 Org. Virtual Datacenter with Fast Provisioning & Thin Provisioning enabled
–  VCD 5.1.1
–  vCenter 5.1.0 (shared with other non-VCD clusters)
–  4 ESXi 5.1 Hosts
–  4 datastores on same array
–  2 vApp Templates – each is chain length 1 and on a different datastore.
–  PowerCLI script used to deploy 15 vApps from each vApp Template

One main concept to remember: automated continuous-improvement builds and software tests run multiple times each day based off of these parent templates.  Some builds are captured back to catalogs for a couple of days when done; others are deleted right away.  The goal is to balance storage usage and performance against time to deploy, while eliminating as much infrastructure administration as possible.
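The PowerCLI deployment mentioned in the requirements above was essentially a loop over the two templates.  A minimal sketch, assuming an existing Connect-CIServer session (the Org VDC name, template names, and vApp naming scheme below are placeholders, not the actual environment):

```powershell
# Hypothetical sketch: deploy 15 fast-provisioned vApps from each of two templates.
# Assumes Connect-CIServer has already been run; all names are placeholders.
$orgVdc    = Get-OrgVdc -Name "importOrgVdc"
$templates = Get-CIVAppTemplate -Name "Template-A", "Template-B"

foreach ($template in $templates) {
    1..15 | ForEach-Object {
        # With Fast Provisioning enabled on the Org VDC, these copies are
        # linked clones rather than full copies of the parent VMDK.
        New-CIVApp -Name ("{0}-build{1:D2}" -f $template.Name, $_) `
                   -VAppTemplate $template -OrgVdc $orgVdc | Start-CIVApp
    }
}
```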

Setup #1 – *(Any) storage profile, no storage cluster.  This is the “out-of-box” setup and works well if you only have one compute cluster and all hosts use all datastores.

Pros:  When importing or creating vApps, VCD places VMs on datastores as it sees fit and does a pretty good job.  Fast-provisioned vApps use the same datastore as the parent VMDK by default and will create shadow copies when running out of space.

Cons:  When this isn't the only cluster within your vCenter, VCD will see and monitor all other datastores, even ones not visible to the cluster in use (including ESXi local datastores), and will send alerts on datastore usage thresholds for them as well.

Setup #2 – Two storage profiles, one storage cluster per profile.  The thought was to tie each storage cluster and profile to specific hosts within the compute cluster and license only part of the cluster for MS Datacenter 2012, in turn saving a lot of MS licensing costs.

Pros:  Storage DRS does a great job of load-balancing VM placement upon creation, and it is VERY handy when evacuating a datastore.  Multiple storage profiles allow you to place the VMs within a vApp on different datastores.  This helps reduce software costs, as you can place Windows VMs on one set of hosts and Linux VMs on another so you don't have to buy MS Datacenter licenses for all hosts within the cluster.

Cons:  All VMs get deployed to the DEFAULT storage profile within the Organization, no matter which storage profile the parent is on.  A shadow copy of the VM is created for this to happen, which takes much longer than a standard linked clone does.  Also, with storage clusters a parent VM ends up with a shadow copy on every datastore in the cluster as it gets used more often, due to the SDRS placement algorithm.  This is great for non-linked-clone VMs but defeats the purpose of the Fast Provisioning feature.  We tried to script around this but ran into issues, and many VCD UI users wouldn't know how to follow the process properly.

The VM disk layout for this setup is just like that of setup #3 below; the screenshots shown there were taken with VMware's lctree fling.

Setup #3 – One storage profile, one storage cluster.

Pros:  Same as setup #2, but all datastores are presented to all hosts and are in the same storage cluster.

Cons:  As time goes by, a shadow VM is created on every datastore within the cluster (other than the one where the parent resides) for each vApp Template.

Here are the datastore layouts after 15 copies of the vApp Template based on the first datastore are created.  Notice the shadow VMs on datastores 02 and 03.

[Screenshots: datastore layouts 1SP-1SC_01, 1SP-1SC_02, 1SP-1SC_03]

After creating 15 copies of the vApp Template on datastore 02, you now have shadow VMs for it on datastores 01 and 03.

[Screenshots: datastore layouts 1SP-1SC_11, 1SP-1SC_12, 1SP-1SC_13]

Looking at the vApp Templates within the Catalogs view, two shadow VMs are listed for each vApp Template.


Setup #4 – One storage profile, no storage cluster.

Pros:  Same as setup #1 – when importing or creating vApps, VCD places VMs on datastores as it sees fit and does a pretty good job.  Fast-provisioned vApps use the same datastore as the parent VMDK by default and will create shadow copies when running out of space.

Cons:  Can’t use SDRS or change the Storage Profile of a VM to evacuate a datastore.

This is the desired layout once 15 vApps are deployed from each vApp Template.

[Screenshots: datastore layouts 1SP-noSC_01, 1SP-noSC_02]

The last setup works best for this environment.  Within Lab Manager I would put the Library entry on multiple datastores, check all datastores to see which had the most free space, and use the corresponding entry for that build.  With setup #4, VCD takes care of this for me during new builds.  I can still run out of space with Fast Provisioning, though, if I don't have alerts set up.
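Since running out of space is the remaining risk with setup #4, a simple scheduled PowerCLI check can provide the alerts.  A rough sketch, assuming an existing Connect-VIServer session (the 15% threshold and the “PD*” datastore name pattern are placeholders):

```powershell
# Hypothetical sketch: warn when any datastore drops below a free-space threshold.
# Assumes Connect-VIServer has already been run; threshold and name pattern
# are placeholders for this environment.
$thresholdPct = 15

Get-Datastore -Name "PD*" | ForEach-Object {
    $freePct = [math]::Round(($_.FreeSpaceGB / $_.CapacityGB) * 100, 1)
    if ($freePct -lt $thresholdPct) {
        # Swap Write-Warning for Send-MailMessage or similar in a scheduled task.
        Write-Warning ("Datastore {0} is at {1}% free space" -f $_.Name, $freePct)
    }
}
```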

If you have any questions about this, please ask them below.  Again, the needs driving this setup are quite different from those of most VCD environments.

Set vApp Templates to Never Expire within Organizations with VAT Storage Leases

In migrating from Lab Manager to vCloud Director, one of many system-administration features lost is the ability to set lease times longer than the policy allows.  This is useful when system administrators create library entries that shouldn't expire, while any library entry a standard user creates takes on the default policy.  vCloud Director allows different lease times for different Organizations, but you cannot extend lease times beyond the Organization's policy via the GUI or any APIs, even as a system administrator.

I did some digging in the DB and found differences in the “vm_container” table between vApp Templates with lease times set and those set to never expire.  The values for “auto_delete_date” and “auto_delete_ticks” are both NULL for templates set to never expire.  I manually changed the DB for a vApp Template within an Organization that had a maximum lease time set, to see if it would get mad, and it did not.  The template remained functional for as long as it existed.
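The manual change itself is a one-row UPDATE.  A hedged sketch, with 'templateName' as a placeholder (as with any direct vCloud DB edit, back up the database first):

```sql
-- Hypothetical example: set one vApp Template to never expire.
-- 'templateName' is a placeholder; sg_type = 2 limits the match to vApp Templates.
UPDATE vm_container
SET auto_delete_date = NULL,
    auto_delete_ticks = NULL
WHERE name = 'templateName'
  AND sg_type = 2
```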

I wrote the following DB query to find all vApp Templates, and some other info to distinguish them for quicker search:

SELECT VMC.name AS vAppTemplate, ausr.username AS UserID, org.display_name AS Organization, VMC.auto_delete_date AS DateExpire
FROM vm_container AS VMC
INNER JOIN usr AS ausr ON VMC.user_id = ausr.user_id
INNER JOIN organization AS org ON VMC.org_id = org.org_id
WHERE VMC.sg_type = 2
ORDER BY VMC.name

With hundreds of vApp Templates in an environment, it would be nice to set these quickly without digging through the DB each time; there had to be a better way.  I first created a webpage displaying the info above, then added a form so I could filter by Organization and by whether templates were already set to never expire.  I then changed the table so I could select a checkbox for each template I wanted to set to never expire.


Clicking the “Set to Never Expire” button loads another page that updates the DB, then queries it again for just the modified templates and displays the changes as a confirmation page.


I searched the web to find pieces and parts to do most of this.  The page uses PHP, jQuery, tablesorter, and JavaScript.  I'm not the best web developer, so it can probably be condensed and cleaned up, but if you copy the following package and update the DB connection information it should work for you as well.  Let me know if you have any questions.

Tips When Choosing a Cloud Provider

I just finished a short-term project where I had VMs hosted in a VCD-backed cloud and ran into a few issues during the process that might help others when choosing a cloud provider.

I started the project out by calculating resource needs. I then talked with numerous vendors about their clouds and if it would work with our process needs. All of them were interested in our project and wanted to help but I soon realized not all vendors are created equal.

After selecting a vendor, I discovered some topics that should have been discussed before the selection was made. Here are some questions and features to cover with your potential cloud provider before signing any contracts.

  1. Base VMs – How VMs will be created within vApps: use templates from the vendor or build your own. Uploading your own media can count against your network bandwidth quota.
  2. VM resources – You may not need as much CPU/memory as you think if resources aren't all guaranteed.  Performance-test in-house with different reservation and resource-pool limits.
  3. Application/workflow – In our instance we needed Fast Provisioning enabled. The provider enabled it but didn't restrict the LUNs to 8 hosts, and they were on vSphere 5.0, which limits a VMFS datastore backing linked clones to 8 hosts.  Once we had 9-10 instances running we couldn't power on any more, and we had to ship hardware overnight in case the vendor couldn't get it working for us.
  4. Length of contract – Make sure you have enough time (and network bandwidth) to get any VMs or data off before your contract expires. I came close to not getting back a couple of things I really needed.
  5. Thin Provisioning – This wasn't needed in our environment, as we really needed linked clones (Fast Provisioning), but you might be able to use it to save disk space.
  6. Normal update outage times – In our case the provider updated VCD on Thursday nights. I didn't pay attention to this and was testing my application during that window. The test failed, and it took me hours to realize the issue.

There are other things to pay attention to, and I’m sure you can find a ton of posts on them, but these are observations from my journey to the cloud for a specific project.

Lab Manager to vCloud Director Import Script

One major challenge when migrating from Lab Manager to vCloud Director is getting all of your VM Templates, Library entries, and needed configurations migrated across.  When I first looked at this, there was a partially scripted process using Perl code that was long and very hard to understand.  At that point it was easier to do the process manually, as I'm not any good with Perl and I was still trying to get VCD down.

Once PowerCLI released some VCD cmdlets, we revisited the process, and through lots of searching, digging through ExtensionData, multiple posts on the communities, etc., we were able to automate the import.  It was long and ugly, and failed a lot of the time, so we didn't use it.

We have migrated a lot of departments to VCD but still have a few more to go so I thought I’d refresh my script to see if it would work better.  I was thrilled to see the improvements in PowerCLI cmdlets for VCD.  I was able to cut out almost 100 lines of code from the script, and it now works properly, so I thought I’d share a few changes.

The first change is thin provisioning the VM.  This script was used to find the datastore with the most free space:

$datastores = Get-Datastore
$LargestFree = 0
$dsView = $null
foreach ($datastore in $datastores) {
    if ($datastore.FreeSpaceMB -gt $LargestFree) {
        $LargestFree = $datastore.FreeSpaceMB
        $dsView = $datastore | Get-View   # keep the view so $dsView.MoRef works below
    }
}

The updated code, saving eight lines:

$dsView = Get-Datastore PD* | Sort FreeSpaceGB -Descending | Select-Object -First 1

Once you get the datastore ($dsView) you want to svMotion to:
Previous script:

Get-VM $vm2Import | % {
    $vmview = $_ | Get-View -Property Name
    $spec = New-Object VMware.Vim.VirtualMachineRelocateSpec
    $spec.datastore = $dsView.MoRef
    $spec.transform = "flat"
    $vmview.RelocateVM($spec, $null)
}

With PowerCLI 5.1:

Get-VM $vm2Import | Move-VM -Datastore $dsView -DiskStorageFormat Thin -confirm:$false

As you can see this saved six lines of code. That doesn’t sound like a lot but it all adds up.

One of the major changes is the internal vApp network creation.  The vApps all use internal vApp NAT-routed networks.  Using some code from Clint Kitson's Unofficial VMware vCD Cmdlets, the following was used to create the internal routed network:

$orgNetworkName = "importOrgNetwork"
$orgNetwork = (Get-Org "importOrg").extensiondata.networks.network | where {$_.name -eq $orgNetworkName}
$mynetwork = new-object vmware.vimautomation.cloud.views.vappnetworkconfiguration
$mynetwork.networkName = "vAppInternalNetwork"
$mynetwork.configuration = new-object vmware.vimautomation.cloud.views.networkconfiguration
$mynetwork.configuration.fencemode = "natRouted"
$mynetwork.Configuration.ParentNetwork = New-Object vmware.vimautomation.cloud.views.reference
$mynetwork.Configuration.ParentNetwork.Href = $orgNetwork.href
$mynetwork.Configuration.IpScope = new-object vmware.vimautomation.cloud.views.ipscope
$mynetwork.Configuration.IpScope.gateway = ""
$mynetwork.Configuration.IpScope.Netmask = ""
$mynetwork.Configuration.IpScope.Dns1 = ""
$mynetwork.Configuration.IpScope.ipranges = new-object vmware.vimautomation.cloud.views.ipranges
$mynetwork.Configuration.IpScope.IpRanges.IpRange = new-object vmware.vimautomation.cloud.views.iprange
$mynetwork.Configuration.IpScope.IpRanges.IpRange[0].startaddress = ""
$mynetwork.Configuration.IpScope.IpRanges.IpRange[0].endaddress = ""
$mynetwork.Configuration.features += new-object vmware.vimautomation.cloud.views.firewallservice
$mynetwork.Configuration.features[0].isenabled = $false
$networkConfigSection = (Get-Org "importOrg" | get-civapp "importVappName").ExtensionData.GetNetworkConfigSection()
$networkConfigSection.networkconfig += $mynetwork
$networkConfigSection.UpdateServerData()   # push the change to VCD

With PowerCLI this was cut down to one line:

New-CIVAppNetwork -VApp $newVappName -ParentOrgNetwork ($orgInto+"-OrgNetwork") -Name 'vAppInternalNetwork' -PrimaryDns '' -Routed -Gateway '' -Netmask '' -DisableFirewall -StaticIPPool " -"

That’s a 20 line savings!!!

Once a script works I don't usually revisit it unless there's an issue with it, but these three examples saved 34 lines of code, and even more if you count comment lines no longer needed. I may have to go back through some other scripts and see what I can do with them.

vCD Expired VM Cleanup Fails

While renewing a vApp from Expired Items in an Organization, I decided to clean up a couple of vApps that had expired over 4 months earlier. I deleted them and didn't think much more about it until I was working in vCenter an hour later and noticed recurring tasks trying to power off the VMs that belonged to these vApps.

Looking at the VMs, they were suspended.  For this to happen, the owners must have ignored the running lease, at which point the VMs were suspended and the vApps stopped.  When the storage lease expired, the vApps went to Expired Items while the VMs were still suspended.

Now that I had deleted the vApps, the process that removes them from vCenter must only check whether the VM is powered on.  Since the VM is suspended, the power-off task fails and vCD keeps retrying.  All I had to do to correct this was power on the VM; within a couple of minutes vCD powered it off and cleaned everything up.

I was thinking I could write a PowerCLI script or vCO workflow to check for this and fix the VMs.  You wouldn't want to resume all suspended VMs, though, as a user might have suspended them on purpose.
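A rough sketch of what that script might look like, under the assumption that stranded VMs can be identified by recent failed power-off tasks (the 24-hour lookback is a placeholder, and the candidate list should be reviewed before resuming anything):

```powershell
# Hypothetical sketch: find suspended VMs with recent failed power-off tasks
# and power them on so vCD's cleanup can finish. Lookback window is a placeholder.
$since = (Get-Date).AddHours(-24)

$failedPowerOffs = Get-VIEvent -Start $since -MaxSamples 5000 |
    Where-Object { $_ -is [VMware.Vim.TaskEvent] -and
                   $_.Info.DescriptionId -eq "VirtualMachine.powerOff" -and
                   $_.Info.State -eq "error" } |
    ForEach-Object { $_.Vm.Name } | Select-Object -Unique

foreach ($vmName in $failedPowerOffs) {
    $vm = Get-VM -Name $vmName -ErrorAction SilentlyContinue
    if ($vm -and $vm.PowerState -eq "Suspended") {
        # vCD should power it off and finish its cleanup shortly afterward.
        Start-VM -VM $vm -Confirm:$false
    }
}
```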

“Fence” vApps or use Internal vApp Networks

Over a year ago we started looking at vCloud Director, as it was slated to replace Lab Manager.  The beauty of Lab Manager for us was that we could clone the same VMs thousands of times and have them “fenced” from each other for software development, training, automated testing, and supporting clients. We could do this very rapidly and save a lot on disk usage with linked clones as well.

We ran into an issue with some of our software packages that didn't like the IP address or hostname changing during the guest customization process, though.  Another issue was the confusion created by the VMs using IPs from the physical network pool and the virtual routers using the same pool for NAT'd access to the VMs.  We worked around all of this by using network templates for the VMs and connecting those to the physical networks upon deployment.  This way the VMs keep the same IP (192.168.x.x) and MAC address and are still “fenced” from other VMs on the external network (10.x.x.x).

It took a while to figure this out in vCloud Director when we were first testing it.  After a couple days of reading posts and numerous tests, I figured out how to mimic this setup.  I thought I'd share, as I've answered this question numerous times on the communities pages.

Whether modifying a current vApp or manually creating a new one, you'll select “Add Network” to create a new network for this vApp.


You can use the default network if you wish and just add DNS information or you can change the settings to your liking.


Name the new network and finish its settings.


The network setup will now look like this. You have the option to automatically use IPs from the pool you created, manually assign an IP from the subnet, or use DHCP from the vShield system. Click Next when done, as we still need to connect this internal network to an Organization Network.


Click “None” in the connection column and select the proper Organization Network. Decide whether you want to use NAT only or add firewall settings as well.  You can also have this vApp keep the same IP it gets from the Organization Network if you'd like; we don't do this so IPs are not reserved when not in use.


At this point we can capture this vApp to a catalog, but there's a major gotcha: for the internal network to be saved, you must select “Make identical copy” on the capture screen. When someone adds this vApp Template to their cloud they won't have to go through this process, though they may need to change the connection between the internal network and the Organization Network depending on their needs and whether it is deployed across Organizations.
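The same internal routed network can also be created from PowerCLI rather than the UI, using the New-CIVAppNetwork cmdlet.  A hedged sketch (the vApp name, network names, and addresses below are placeholders):

```powershell
# Hypothetical sketch: build the NAT-routed internal vApp network from PowerCLI.
# All names and addresses are placeholders for your environment.
$vapp = Get-CIVApp -Name "DevBuild-vApp"
New-CIVAppNetwork -VApp $vapp -Routed -Name "vAppInternalNetwork" `
    -ParentOrgNetwork "MyOrg-OrgNetwork" `
    -Gateway "192.168.1.1" -Netmask "255.255.255.0" -PrimaryDns "192.168.1.1" `
    -StaticIPPool "192.168.1.100 - 192.168.1.199" `
    -DisableFirewall
```

Capturing to the catalog with “Make identical copy” is still required either way for the network to survive in the template.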


vApp Template not capturing NIC’s “IP POOL” address setting

*** UPDATE ***   The fix has been added to the 5.1.2 release, which is now GA.  Read the release notes for other fixes here: https://www.vmware.com/support/vcd/doc/rel_notes_vcloud_director_512.html

While setting up and testing a couple of new vApp Templates for a new Organization, I noticed an issue when capturing to the catalog and using IP Pool addressing for the VMs.  vApps deployed from the new vApp Templates were not pulling an address from the IP Pool set up within our External Network.  I messed around with the VM inside the vApp I used to capture the vApp Template.  I thought it had something to do with the new udev stuff in RHEL 6.  It looked correct, so I tested by converting it to a template within vCenter and doing a Guest Customization deployment from it.  The VM worked correctly.  We are currently on version

I finally looked closer at the vApp Template and noticed the NIC for the VM was set to DHCP instead of IP Pool.


I checked some vApp Templates I'd captured while on vCD 1.5.0 and they were OK.  I checked another from when we were on 1.5.1 and it was set to DHCP, so something changed at that point.  We don't use these templates very often and weren't on vCD 1.5.1 very long, so no one complained.

Doing a quick search, I came across this post on the communities page.  It has a SQL query to find the NIC of the VM inside the vApp Template based on catalog and vApp Template name.  I had 11 templates to update, so I modified it to use variables for the catalog and template names.

DECLARE @Catalog varchar(80)
DECLARE @vAppTemplate varchar(80)

SET @Catalog = 'catalogName'
SET @vAppTemplate = 'templateName' -- May need % at the end in case vApp Template was copied/moved. May get multiple returns though so be careful.

SELECT nic.nic_id, nic.mac_address, nic.ip_addressing_mode
FROM network_interface AS nic
INNER JOIN vapp_vm AS vm ON nic.netvm_id = vm.nvm_id
WHERE svm_id = (SELECT svm_id FROM vapp_vm
                WHERE vapp_id = (SELECT entity_id FROM catalog_item
                                 WHERE name LIKE @vAppTemplate
                                   AND catalog_id = (SELECT id FROM catalog WHERE name = @Catalog)))

When running the above query you should get the following output:


Change the query to the following UPDATE statement and run it again.

UPDATE network_interface
SET ip_addressing_mode = 1
WHERE nic_id = (SELECT nic.nic_id
                FROM network_interface AS nic
                INNER JOIN vapp_vm AS vm ON nic.netvm_id = vm.nvm_id
                WHERE svm_id = (SELECT svm_id FROM vapp_vm
                                WHERE vapp_id = (SELECT entity_id FROM catalog_item
                                                 WHERE name LIKE @vAppTemplate
                                                   AND catalog_id = (SELECT id FROM catalog WHERE name = @Catalog))))

It should update 1 row.  If you run the original query again and look at the NIC properties of the VM within the vApp you will now see the following:



Deployments will now pull from IP Pool as intended.

Per the communities post above, there is a hotfix coming out soon for those who submit an SR. Otherwise it will be fixed in the next maintenance release.
