Archive for the ‘VMware’ Category


Wow, first blog article for quite some time! Time has been stretched over the last year or so with family and new work commitments, so something had to slip! So, hopefully this is the start of me finding more time to blog! I’ve been working on plenty of scripts and other bits and pieces that’ll make some good articles, so fingers crossed they’ll be blogged soon!

I’ve been delving more and more into the world of performance monitoring in relation to VMware vSphere, and CPU Ready time has always been a topic of heated conversation at work… people overcommit CPU resources as if they’re free, but don’t realise the consequences.

To prove a point I’ve made an example of an Exchange server. It runs for a business of about 20 users, running Exchange 2010. They also use Lync and SharePoint, so there’s some integration going on too. It’s a fairly busy machine, and was configured with 4 virtual CPUs and a load of RAM (12GB). I’d argued against the configuration of machines like this for some time, trying to explain that more vCPUs may mean less CPU time for the VM, but it was falling on deaf ears. So, I decided it was time to make a change, and prove a point :)

Now, for a very simple overview…

In case you don’t know how CPU scheduling works: regardless of the number of vCPUs granted, or their workloads, ALL of a VM’s vCPUs must be scheduled to run on pCPUs at the same time, even if a vCPU would be idle. So, if you have 4 pCPUs and 3 VMs with a single vCPU each, all is OK; each virtual machine can always get CPU resource, as there will always be 3 pCPUs available. Add in a virtual machine with 2 vCPUs, and immediately you’d need 5 pCPUs for all machines to always get pCPU time. Luckily, the VMware scheduler will deal with this and queue pCPU requests. As our new machine will always need time on 2 pCPUs, it’s “easier” for VMware to schedule pCPU time to the VMs with 1 vCPU, so they’ll end up getting more CPU time than the 2 vCPU VM. This waiting time is what’s known as CPU Ready time, and when this gets too high, you’ll find your VMs with more vCPUs will get slower…
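If you’d rather pull these numbers with PowerCLI than click through the performance charts, something like this should do it (a minimal sketch using the standard Get-Stat cmdlet; “EX02” is just the example VM from this post). Realtime stats come in 20-second samples, so an hour is 180 samples, and the millisecond summation converts to a percentage as ready ms / 20,000ms:

#Grabs roughly the last hour of realtime CPU Ready samples for the example VM
$samples = Get-Stat -Entity (Get-VM "EX02") -Stat cpu.ready.summation -Realtime -MaxSamples 180 |
           Where-Object {$_.Instance -eq ""}    #"" is the aggregate across all vCPUs

#Converts each 20-second (20,000ms) sample from milliseconds to a percentage
$samples | Select-Object Timestamp,Value,
           @{N="ReadyPercent";E={[math]::Round(($_.Value / 20000) * 100,2)}}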

Here’s an example:

This is the previously mentioned Exchange server, with 4 vCPUs. It’s a one-hour capture of both CPU Usage and CPU Ready time:

[Chart: EX02 with 4 vCPUs, CPU Usage and CPU Ready over one hour]

As you can see, CPU Ready time was anywhere between 180ms and 1455ms, averaging 565ms. This led to slow CPU response for the machine.

So, looking at the average CPU usage over a couple of months, it was at ~30%. That’s 30% of 4 vCPUs… just over a single CPU’s worth. So, 2 vCPUs needed to be removed… and this is the result:

[Chart: EX02 with 2 vCPUs, CPU Usage and CPU Ready over one hour]

So, the result? CPU Ready time was between 28ms and 578ms, a vast improvement, and averaged just 86ms, far better than 565ms! CPU usage was higher, but then it’s now using more of the CPUs it’s been granted, so this was to be expected.

Now, CPU Ready time on this machine still isn’t great, but I’ve a lot more VMs to sort through, reducing vCPU allocations, and hopefully it’ll just keep getting better!
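As an aside, if you want to check a VM’s average CPU usage over the last couple of months before pulling vCPUs out, a rough sketch like this works (standard PowerCLI cmdlets; “EX02” is this post’s example VM):

#Pulls two months of historical CPU usage and averages it
$stats = Get-Stat -Entity (Get-VM "EX02") -Stat cpu.usage.average -Start (Get-Date).AddMonths(-2)
$avg = ($stats | Measure-Object -Property Value -Average).Average
write-host ("Average CPU usage: {0:N1}%" -f $avg)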


Jun 25

A short while ago I had the need to automate moving about 50 ISO files from one datastore to another during a storage array migration, so I wanted to share the script with you all in case you ever find the need for something similar.

It’s rather simple, and you just need to edit this with the names of your datastores and folder structure (top folder only):

#Sets Old Datastore
$oldds = get-datastore "Old Datastore Name"

#Sets New Datastore
$newds = get-datastore "New Datastore Name"

#Sets ISO Folder Location
$ISOloc = "Subfolder_Name\"

#Map Drives
new-psdrive -Location $oldds -Name olddrive -PSProvider VimDatastore -Root "\"
new-psdrive -Location $newds -Name newdrive -PSProvider VimDatastore -Root "\"
#Copies Files from Old to New
copy-datastoreitem -recurse -item olddrive:\$ISOloc* newdrive:\$ISOloc

Line 2: Change the script to have the name of the datastore you are moving the files FROM.
Line 5: Change the script to have the name of the datastore you are moving the files TO.
Line 8: Change the script to have the name of your ISO subdirectory. Do not remove the “\” unless you have no subfolder.
Lines 11 & 12: Maps PowerShell drives to those datastores.
Line 14: Copies the files.
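Once the copy has finished, you can sanity-check the result and tidy up with something like this (a hedged extra, not part of the original script):

#Lists the files that landed on the new datastore
Get-ChildItem newdrive:\$ISOloc

#Removes the temporary PowerShell drives
Remove-PSDrive olddrive,newdrive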


Jan 19

Today I needed a way to gather a list of the IPs for all of our VMs, so I came up with this little one-liner and thought I’d share it with you:

get-vm | select Name,@{N="IP Address";E={@($_.guest.IPAddress[0])}} |
         out-file c:\VM_IP_Addresses.csv

It’ll get all of the VMs in the environment, and then list the first IP address for each one. If you have multiple IPs on some VMs, then remove the “[0]” in the above and it’ll list all of them. Note that the output will be tab-delimited text rather than comma-separated, despite the .csv extension.
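If you’d rather have genuine comma-separated output that Excel opens cleanly, swap Out-File for Export-Csv; a minor variation on the above (the -join just flattens multiple IPs into one cell):

get-vm | select Name,@{N="IP Address";E={$_.guest.IPAddress -join ";"}} |
         Export-Csv -NoTypeInformation c:\VM_IP_Addresses.csv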



At last!!! VMware Labs have released a package to add VDS functions into PowerCLI!

It is a Fling, though, only released yesterday, so it’s not going to have any official support from VMware, and it currently only supports Windows XP, 2003 and 2008 (no mention of 2008 R2 here). You also need to be running PowerCLI 4.1.1 or later.

You can import the snap-ins like this:

Add-PSSnapin VMware.VimAutomation.VdsComponent

And list the cmdlets like this:

Get-Command -Module VMware.VimAutomation.VdsComponent

You can download them from here:

VMware Labs PowerCLI VDS Download

And you can get some more information from Virtu-Al.net here:

Virtu-Al.net



Full Error:

File <unspecified filename> is larger than the maximum size supported by datastore ‘<unspecified datastore>’

I’ve been coming up against this issue for the last few days whilst installing some backup software for one of our customers. It’s highly frustrating, and I couldn’t figure out why it was even happening. The datastores that this particular VM was running on had plenty of free space, and none of the disks exceeded the maximum file size for the block size on those datastores.

What I didn’t know was, quite simply, that a VM cannot snapshot if its configuration file is stored on a datastore with a smaller block size than one of its virtual hard disks’ datastores. Now, I presume that this is only the case if the virtual disk size is larger than the maximum supported file size on the configuration file’s datastore.

So, if you come across this problem, just Storage vMotion the configuration file to a datastore with a larger block size, or at least to a datastore with the same block size as your largest virtual disk’s datastore. Run another snapshot, and “Hey Presto!”, it should snapshot correctly.
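If you want to spot at-risk VMs before a backup job trips over this, a diagnostic along these lines should work. This is a sketch only; it assumes VMFS datastores (where block size caps file size, so it’ll be noisy on NFS), and the bracket-splitting is just a quick way to pull the datastore name out of a “[datastore] path” string:

#Flags VMs whose config file sits on a datastore with a smaller block size than any of their disks' datastores
foreach ($vm in Get-VM) {
	$configDS = (($vm.ExtensionData.Config.Files.VmPathName -split '[\[\]]')[1]).Trim()
	$configBlock = (Get-Datastore $configDS).ExtensionData.Info.Vmfs.BlockSizeMb
	foreach ($disk in Get-HardDisk -VM $vm) {
		$diskDS = (($disk.Filename -split '[\[\]]')[1]).Trim()
		$diskBlock = (Get-Datastore $diskDS).ExtensionData.Info.Vmfs.BlockSizeMb
		if ($diskBlock -gt $configBlock) {
			write-host "$($vm.Name): config on $configDS (${configBlock}MB blocks), disk on $diskDS (${diskBlock}MB blocks)"
		}
	}
}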


vSphere 5 goes GA!!

posted by Dan Hayward
Aug 25

At last! After being announced last month, vSphere 5 has finally gone GA! Available to download from late last night (UK time), the latest release of VMware’s datacentre hypervisor comes with over 140 new features, and improves on the existing features from vSphere 4.

If you already own vSphere 4 and have a current support agreement, then your licenses will be available to upgrade by the end of the week. Given that a lot of people will probably wait a couple of months before upgrading any production systems, I don’t see this being a problem for most, and there’s the usual evaluation period in the meantime anyway.

I’ve downloaded my copy, have you got yours? If not, head over here:

http://downloads.vmware.com/d/info/datacenter_cloud_infrastructure/vmware_vsphere/5_0



Another issue I keep coming up against is the need to see how many VMs I have that are either Powered On, Powered Off or Suspended. Today I decided it was time to do two things:

1. Write a script for this so I don’t have to work this out “the hard way”.
2. Make it a function with parameters so that it has help information included for other people and can be modular. This was mainly due to Jonathan Medd and his talk at the last LonVMUG.

So, I decided I needed options to limit the scope of the script. Cluster level seemed like a good start to me, and I also added an option to connect to a particular host/vCenter instance (this assumes you are running PowerShell as a user with sufficient access permissions).

So, I came up with a script. Then made it into a function. And then fixed it after I broke it! I also realised that outputting large lists of VMs can go over the PowerShell console history length, so I added an output-to-file option to relieve this issue.

If you want to be able to get a list of all VMs dependent on their current power state, take a copy of this script, save it as a “.psm1” file and import it as a module (Import-Module). This way you can just run Get-VMByPowerState and you’ll get a full list.

Here’s the function:

function Get-VMByPowerState
{
<#
    .SYNOPSIS
        Gets a list of VMs dependent on power state.
    .DESCRIPTION
        Gets a list of VMs dependent on power state, optionally limited to a
        single cluster, and optionally written out to a file.
    .PARAMETER Cluster
        Name of the cluster to retrieve VMs from. Supports wildcards.
    .PARAMETER PowerState
        REQUIRED. The power state of the VMs that you are looking for.
        Valid state options are:
            PoweredOn
            PoweredOff
            Suspended
    .PARAMETER Server
        The name or IP of the vCenter or ESX(i) host to connect to. This assumes that you have sufficient access rights as the logged on user.
    .PARAMETER Outfile
        Name & path of an output file.
    .EXAMPLE
        Get-VMByPowerState -Cluster ClusterName -PowerState PoweredOn -Server 127.0.0.1 -Outfile filename.txt
#>
[CmdletBinding()]
    param(
        [Parameter(Mandatory=$false, Position=0)]
        [Alias("ClusterName")]
        [string]
        $Cluster = '*'
    ,
        [Parameter(Mandatory=$true)]
        [ValidateSet("PoweredOn","PoweredOff","Suspended")]
        [string]
        $PowerState
    ,
        [Parameter(Mandatory=$false)]
        [Alias("vCenterHost")]
        [string]
        $Server
    ,
        [Parameter(Mandatory=$false)]
        [string]
        $Outfile
    )
Process
{
    #Only connect if a server was actually specified
    if ($Server) {
        connect-viserver $Server
        clear-host
    }

    #Limits the search to the given cluster (defaults to all clusters)
    $vms = Get-Cluster $Cluster | get-vm |
        where {$_.PowerState -eq $PowerState} | select Name,PowerState

    #Wraps in @() so .count works even when only one VM is returned
    $vmcount = @($vms).count

    if ($vmcount -gt 0) {
        if (-not $Outfile) {
            ""
            write-host "List of all $PowerState VMs:"
            $vms
            ""
            write-host "There are $vmcount $PowerState VMs"
        }
        else {
            ""
            write-host "Outputting to file as requested."
            out-file -FilePath $Outfile -InputObject $vms
        }
    }

    if ($Server) {
        disconnect-viserver $Server -force:$true -confirm:$false
    }
}
}
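Once it’s imported, usage looks like this (just an illustration; the file name is whatever you saved the module as):

Import-Module .\Get-VMByPowerState.psm1
Get-VMByPowerState -PowerState PoweredOff -Outfile C:\poweredoff_vms.txt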


Today I found that I needed to know which of my datastores had the most free disk space, so that I could temporarily add a new virtual hard disk for a data upload. Knowing that I needed to exclude certain datastores, I had to figure out how to get PowerCLI/PowerShell to check the name of each datastore against an “exclusion list”. So, with some help from @jonhtyler and @leveitan on Twitter, and using the PowerShell comparison operators, I came up with the following script:

#Connects to vCenter/ESX(i)
connect-viserver $servername 

#Gets a list of datastores, excluding any whose name matches "Name1" etc. Only keeps the datastore Name and free space
$datastores = get-datastore | where {$_.Name -notmatch "Name1|Name2|Name3"} | select Name,FreeSpaceMB

#Sets the starting values
$LargestFreeSpace = 0
$LargestDatastore = $null

#Works out which datastore has the most free space
foreach ($datastore in $datastores) {
	if ($datastore.FreeSpaceMB -gt $LargestFreeSpace) {
		$LargestFreeSpace = $datastore.FreeSpaceMB
		$LargestDatastore = $datastore.Name
	}
}

#Writes out the result to the PowerShell Console		
write-host "$LargestDatastore is the largest store with $LargestFreeSpace MB Free"

#Disconnects from all connected vCenter/ESX(i) hosts.
disconnect-viserver * -force:$true -confirm:$false

Now, all you need to do is replace $servername with the name of your vCenter server or ESX(i) host, and change “Name1”, “Name2” and “Name3” to the expressions that you want to exclude. If you know that all of the datastores you want to exclude contain a single word such as “LOCAL”, then just replace all 3 (and remove all the “|” characters); if you want to exclude “LOCAL” and “TEMP” then you’ll need “LOCAL|TEMP” – with me?
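As an aside, the middle of the script can be boiled down to a single pipeline if you prefer (same result, just letting Sort-Object do the work instead of the foreach loop):

get-datastore | where {$_.Name -notmatch "Name1|Name2|Name3"} |
	sort FreeSpaceMB -Descending | select -First 1 Name,FreeSpaceMB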

Hope this helps more people than just me :)



Today VMware announced changes to the new licensing model introduced with the latest edition of their hypervisor platform, vSphere 5.

All editions of vSphere 5 are to have larger vRAM entitlements than originally stated, with Enterprise Plus now getting 96GB vRAM per CPU license, and other editions having their vRAM entitlements increased too. Large VMs will be capped at an entitlement maximum of 96GB (even if your VM has 1TB of RAM). This won’t be available at GA, but will be afterwards, with another tool being created so that users can keep track of vRAM entitlement usage easily in the meantime.

More details can be found here:

http://blogs.vmware.com/rethinkit/

I have to say that I think what VMware has done here is amazing. They’ve realised they needed to change the licensing model, made a choice, and listened to customer feedback. And after all that, they changed the model so that existing users, and new customers, can take more advantage of their hypervisor based on current trends for VM memory sizing. It’s not often that you see this kind of dedication to keeping customers happy, especially from large companies. Hats off to you, VMware.

UPDATE:

Also just found out that the ESXi vRAM limit is being increased from 8GB to 32GB – much better!


My 1st VMUG experience!

posted by Dan Hayward
Jul 15

Today I actually managed to get to my first local VMUG meeting (London VMUG). I’d heard some great things about these events and today lived up to my expectations.

There were several vendor-led presentations in the morning, from the likes of Arista, Embotics and Vision Solutions, each presenting their product and giving us demos of how they work and fit into cloud/virtualisation environments.

First up was Arista, a networking solutions company, with a showcase of their switches and networking infrastructure equipment, including some very impressive technology for looking deeper into the virtualised side of the network layer using their application, VM Tracer. This is some impressive kit. It’ll even automatically create VLANs on switch ports when VMware DRS starts moving VMs, to ensure networking isn’t compromised at the remote end. They’ve even got an open-source Linux kernel running the switch as a “server” rather than a traditional switch. Definitely one to look into when next deploying a large-scale VM infrastructure…

Second to the stand was Embotics, a provider of a private cloud management application called V-Commander. This too was very impressive, with a self-service portal, change tracking and lifecycle management all included. On top of that, the interface was web based, extremely slick, and really did stand out as a very polished and refined product. It even has an option for “expiry” of VMs, forcing the user to request continued access to the VM, and has cost/chargeback included. Highly impressive, and I made sure to have a chat with them and get a USB stick with a demo install pre-loaded so I can take a deeper look for myself.

After a quick break it was over to Vision Solutions for their Double-Take Availability product. I had some preconceptions about this product, as I’ve used Double-Take applications in the past and wasn’t that impressed with them, but this is a replication product that copies machines, with the aid of a “helper” VM, to a secondary destination, and it does seem a lot better than the version I used (which, to be honest, was about 4-5 years ago). It can also perform all sorts of migrations (P2V, V2V & V2P) to aid in virtualisation migration projects. Although the interface wasn’t all that great, it was a vast improvement on the consoles I remember seeing, and this product may well be of use for migrations and for geographically diverse replication requirements. It can perform continuous replication, and can also have its bandwidth restricted in order to deal with slow WAN links, at the sacrifice of continual replication. Still, it looks like a good product, though the interface needs some work, and I really don’t understand why it isn’t web based yet.

After a nice lunch break, with food provided by the VMUG team it was on to presentations from fellow vGeeks. There were two tracks to choose from, though I admit I was skipping between the two.

The first presentation I attended was an update on the new features of vSphere 5. Some VERY impressive changes are on their way, including VMFS-5 allowing larger than 2TB datastores (though VMs are still limited to 2TB disks for now), and vSphere 5 introduces a pre-built Linux-based vCenter appliance, standardising deployment. The “traditional” vCenter service is still available, and the appliance will only support Oracle as an external database source, but it ships with an internal PostgreSQL database capable of managing several hosts and a few hundred VMs. Also introduced is the new Web Client, primarily created for managing VMs. It’s got a cut-down feature set compared to the full vSphere Client application, but should do for performing basic tasks.
Another good release is the vSphere Storage Appliance… I’m really interested in seeing this in action. It’s going to take local storage in each ESXi host and allow you to use it as shared storage, so you don’t need an expensive SAN solution in place. It’ll also replicate this data across two ESXi hosts so that you have redundancy and can easily perform maintenance on hosts without affecting VMs. It sounds great, and it’ll certainly help SMBs enter the virtualisation space, opening more opportunities for resellers.
There are a lot more changes in vSphere 5 that I won’t delve into here, but I will mention that you can now have a VM with 1TB of RAM… just bear in mind how many CPU licenses you’ll need to run it under the new vRAM-based licensing model…!!!

The second presentation I skipped in favour of taking a look at vCenter Operations Manager. This is in essence a monitoring tool for VMware environments, licensed on a per-VM basis. It’ll monitor hosts as well as VMs and provide root-cause diagnosis to show you exactly where the problem in your environment lies. Unfortunately, due to issues with my laptop, I spent much of the lab trying to get the View client installed and didn’t manage to get a decent look at it, though from what I was shown it does look like an awesome product, with a comprehensive yet intuitive interface. I’ll have to look at this in further detail when I get 5 minutes, as I think it could be really useful for my client base.

The final presentation was discussing PowerCLI and helping you to complete tasks sooner using automation. This was held by one of the authors of the PowerCLI Reference book, Jonathan Medd. Having only ever spoken to Jonathan over Twitter (which began when I won the first PowerCLI Book competition), it was great to finally meet him. He’s helped me several times with some PowerCLI script issues, so it was also good to be able to thank him in person. His presentation showed how to create PowerShell functions, and then how to create modules filled with them. This all made sense to me… having written plenty of PowerCLI and PowerShell scripts using the same code in many of them, using functions suddenly made sense, as I could then just call these directly. Adding them all into a module file means it’s even easier to gain access to multiple functions, just by importing one module, saving more time and code per script. He also showed some basic help points for those not too familiar, including the “Get-Help” cmdlet that will give you the comment-based help for any cmdlet in PowerShell, including the “-Examples” switch, which simply outputs example uses of a cmdlet. Overall, a great presentation, filled with laughs, and one now very famous quote, on Twitter at least: “If you can pee, then you can PowerShell!”.
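To give you an idea of the pattern (a made-up example of mine, not one from the presentation itself): drop a function into a “.psm1” file, import it once, and it’s available in every session or script thereafter:

#Contents of MyTools.psm1 - any function in here becomes available on import
function Get-PoweredOffVM {
	get-vm | where {$_.PowerState -eq "PoweredOff"}
}

#Then, from any session or script:
Import-Module .\MyTools.psm1
Get-PoweredOffVM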

Following the day with a final 10-minute discussion about VMware’s new licensing model, it does seem like the community is split in two… some people are OK and don’t have a problem when it comes to their upgrade, but others are going to need a massive increase in CPU licenses just to cover systems that they already own, let alone any future expansion. Whether VMware will change the licensing model before final release to smooth out these issues, or force customers wanting to upgrade to purchase additional licenses, remains to be seen.

And of course… after all that was a trip to the local pub for some vBeers, which was very much enjoyed by all!

Overall, a great experience, and I really hope to make more of these sessions in the future. They’re well worth attending if you use VMware products, for finding complementary products and extending your knowledge, and it’s a fab networking opportunity to boot.

Finally, I want to thank the LonVMUG committee again for organising the events from today. If it wasn’t for these volunteers, and of course the vendors, these events simply wouldn’t happen, and it’s fantastic to see a community making such an effort to help each other and promote a product that we all love. I’ll be trying to get some “odd” pictures of the #LonVMUG beer mats soon :)

