Posts Tagged ‘VMware’


Wow, first blog article for quite some time! Time has been stretched over the last year or so with family and new work commitments, so something had to slip! So, hopefully this is the start of me finding more time to blog! I’ve been working on plenty of scripts and other bits and pieces that’ll make some good articles, so fingers crossed they’ll be blogged soon!

I’ve been delving more and more into the world of performance monitoring in relation to VMware vSphere, and CPU Ready time has always been a topic of heated conversation at work… people over-commit CPU resource as if it’s free, without realising the consequences.

To prove a point, I’ve made an example of an Exchange server. It runs for a business of about 20 users, running Exchange 2010. They also use Lync and SharePoint, so there’s some integration going on too. It’s a fairly busy machine, and was configured with 4 virtual CPUs and a load of RAM (12GB). I’d argued against the configuration of machines like this for some time, trying to explain that more vCPUs can mean less CPU time for the VM, but it was falling on deaf ears. So, I decided it was time to make a change, and prove a point :)

Now, for a very simple overview…

In case you don’t know how CPU scheduling works: regardless of the number of vCPUs granted, or their workloads, ALL of a VM’s vCPUs must be scheduled to run on pCPUs at the same time, even if a vCPU would be idle. So, if you have 4 pCPUs and 3 VMs with a single vCPU each, all is OK; each virtual machine can always get CPU resource, as there will always be 3 pCPUs available. Add in a virtual machine with 2 vCPUs, and immediately you’d need 5 pCPUs for all machines to always get pCPU time. Luckily, the VMware scheduler will deal with this and queue pCPU requests. As our new machine always needs time on 2 pCPUs at once, it’s “easier” for VMware to schedule pCPU time to the VMs with 1 vCPU, so they’ll end up getting more CPU time than the 2 vCPU VM. This waiting time is what’s known as CPU Ready time, and when it gets too high, you’ll find your VMs with more vCPUs get slower…

Here’s an example:

This is the previously mentioned Exchange server, with 4 vCPU’s. It’s a one hour capture of both CPU Usage, and CPU Ready time:

[Chart: EX02 with 4 vCPUs, one hour of CPU Usage and CPU Ready]

As you can see, CPU Ready time was anywhere between 180ms and 1455ms, averaging 565ms. (As a rough rule of thumb, ready time in milliseconds per 20-second realtime sample divided by 20,000 gives a percentage, so 565ms is around 2.8% ready time.) This led to slow CPU response for the machine.

Looking at the average CPU usage over a couple of months, it was at ~30%. That’s 30% of 4 CPUs: just over a single CPU. So, 2 vCPUs needed to be removed… and this is the result:

[Chart: EX02 with 2 vCPUs, one hour of CPU Usage and CPU Ready]

So, the result? CPU Ready time was between 28ms and 578ms, and averaged just 86ms, far better than 565ms! CPU usage was higher, but then it’s now using more of the CPUs it’s been granted, so this was to be expected.
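If you want to pull the same numbers yourself, here’s a minimal PowerCLI sketch (it assumes you’re already connected with Connect-VIServer, and “EX02” just stands in for your VM name). 180 realtime samples at 20 seconds each covers the same one-hour window:

#cpu.ready.summation is reported in milliseconds per 20-second realtime sample
$vm = Get-VM "EX02"
Get-Stat -Entity $vm -Stat cpu.ready.summation -Realtime -MaxSamples 180 |
    Where-Object { $_.Instance -eq "" } |   #blank instance = aggregate of all vCPUs
    Measure-Object -Property Value -Minimum -Maximum -Average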

Now, CPU Ready time on this machine still isn’t great, but I’ve a lot more VMs to sort through, reducing vCPU allocations, and hopefully it’ll just keep getting better!


Jun 25

I needed to automate moving about 50 ISO files from one datastore to another during a storage array migration a short while ago, so I wanted to share the script with you all in case you ever find the need for this or something similar.

It’s rather simple: it assumes you’re already connected to vCenter, and you just need to edit it with the names of your datastores and folder structure (top-level folder only):

#Sets Old Datastore
$oldds = Get-Datastore "Old Datastore Name"

#Sets New Datastore
$newds = Get-Datastore "New Datastore Name"

#Sets ISO Folder Location
$ISOloc = "Subfolder_Name\"

#Map Drives
New-PSDrive -Location $oldds -Name olddrive -PSProvider VimDatastore -Root "\"
New-PSDrive -Location $newds -Name newdrive -PSProvider VimDatastore -Root "\"
#Copies Files from Old to New
Copy-DatastoreItem -Item olddrive:\$ISOloc* -Destination newdrive:\$ISOloc -Recurse

Line 2: Change the script to have the name of the datastore you are moving the files FROM.
Line 5: Change the script to have the name of the datastore you are moving the files TO.
Line 8: Change the script to have the name of your ISO subdirectory. Do not remove the “\” unless you have no subfolder.
Lines 11 & 12: Maps PowerShell drives to those datastores.
Line 14: Copies the files.
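
One prerequisite worth stating: the script assumes an existing vCenter connection. From a fresh PowerCLI session, connect first (the server name here is just an example):

Connect-VIServer -Server vcenter01.example.local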


Jan 19

Today I needed to find a way to gather a list of the IPs for all of our VMs, so I came up with this little one-liner and thought I’d share it with you:

Get-VM | Select-Object Name,@{N="IP Address";E={@($_.Guest.IPAddress[0])}} |
         Out-File c:\VM_IP_Addresses.csv

It’ll get all of the VMs in the environment, and then list out the first IP address for each one. If you have multiple IPs on some VMs, then remove the “[0]” section in the above and it’ll list all of them. Note that Out-File writes PowerShell’s formatted table, so despite the .csv extension the output is column-aligned text rather than true comma-separated values.
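If you want a genuine CSV instead, swapping Out-File for Export-Csv should do the trick; a quick sketch (joining multiple IPs with semicolons is just my preference):

Get-VM | Select-Object Name,@{N="IP Address";E={$_.Guest.IPAddress -join ";"}} |
         Export-Csv -Path c:\VM_IP_Addresses.csv -NoTypeInformation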



At last!!! VMware Labs have released a package to add VDS functions to PowerCLI!

It is a Fling, though, and was only released yesterday, so it won’t have any official support from VMware, and it currently only supports Windows XP, 2003 and 2008 (no mention of 2008 R2 here). You also need to be running PowerCLI 4.1.1 or later.

You can import the snap-in like this:

Add-PSSnapin VMware.VimAutomation.VdsComponent

And list the cmdlets like this:

Get-Command -Module VMware.VimAutomation.VdsComponent

You can download them from here:

VMware Labs PowerCLI VDS Download

And you can get some more information from Virtu-Al.net here:

Virtu-Al.net



Full Error:

File <unspecified filename> is larger than the maximum size supported by datastore ‘<unspecified datastore>’

I’ve been coming up against this issue for the last few days whilst installing some backup software for one of our customers. It’s highly frustrating, and I couldn’t figure out why it was even happening. The datastores that this particular VM was running on had plenty of free disk space, and none of the disks exceeded the maximum file size for the block size on those datastores.

What I didn’t know was, quite simply, that a VM cannot snapshot if its configuration file is stored on a datastore with a smaller block size than that of one of its virtual hard disks’ datastores. This makes sense once you know that the snapshot delta files are created alongside the configuration file, and can grow as large as the base disk. I presume, then, that this is only a problem if a virtual disk is larger than the maximum supported file size on the configuration file’s datastore.

So, if you come across this problem, just Storage vMotion the configuration file to a datastore with a larger block size, or at least to a datastore with the same block size as your largest virtual disk’s datastore. Run another snapshot, and “Hey presto!”, it should snapshot correctly.
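
If you want to check the block sizes across your datastores before moving anything, a PowerCLI sketch along these lines should do it (BlockSizeMb comes from the VMFS info exposed by the vSphere API; NFS datastores will simply show a blank, as block size is a VMFS concept):

#Lists each datastore with its VMFS block size and capacity
Get-Datastore | Select-Object Name,
    @{N="BlockSizeMB";E={$_.ExtensionData.Info.Vmfs.BlockSizeMb}},
    @{N="CapacityGB";E={[math]::Round($_.CapacityMB/1KB,1)}}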


vSphere 5 goes GA!!

posted by Dan Hayward
Aug 25

At last! After being announced last month, vSphere 5 has finally gone GA! Available to download from late last night (UK time), the latest release of VMware’s datacentre hypervisor comes with over 140 new features, and improves on the existing features from vSphere 4.

If you already own vSphere 4 and have a current support agreement, then your licenses will be available to upgrade by the end of the week. Given that a lot of people will probably wait a couple of months before upgrading any production systems, I don’t see this being a problem for most, and there’s the usual evaluation period in the meantime anyway.

I’ve downloaded my copy, have you got yours? If not, head over here:

http://downloads.vmware.com/d/info/datacenter_cloud_infrastructure/vmware_vsphere/5_0



Today VMware announced changes to the new licensing model that was unveiled for the latest edition of their hypervisor platform, vSphere 5.

All editions of vSphere 5 are to have larger vRAM entitlements than originally stated, with Enterprise Plus now getting 96GB vRAM per CPU license, and other editions having their vRAM entitlements increased too. Large VMs will be capped at an entitlement maximum of 96GB (even if your VM has 1TB of RAM). This won’t be available at GA, but will be afterwards, with another tool being created so that users can easily keep track of vRAM entitlement usage in the meantime.

More details can be found here:

http://blogs.vmware.com/rethinkit/

I have to say, I think what VMware has done here is amazing. They realised they needed to change the licensing model, made a choice, and then listened to customer feedback. And after all that, they changed the model so that existing users, and new customers, can take more advantage of their hypervisor based on current trends in VM memory sizing. It’s not often that you see this kind of dedication to customers and keeping them happy, especially from large companies. Hats off to you, VMware.

UPDATE:

I’ve also just found out that the free ESXi vRAM limit is being increased from 8GB to 32GB: much better!


My 1st VMUG experience!

posted by Dan Hayward
Jul 15

Today I actually managed to get to my first local VMUG meeting (London VMUG). I’d heard some great things about these events and today lived up to my expectations.

There were several vendor-led presentations in the morning, from the likes of Arista, Embotics and Vision Solutions, each presenting their product and giving us demos of how they work and fit into cloud/virtualisation environments.

First up was Arista, a networking solutions company, with a showcase of their switches and networking infrastructure equipment, including some very impressive technology for looking deeper into the virtualised side of the network layer using their application, VM Tracer. This is some impressive kit. It’ll even automatically create VLANs on switch ports when VMware DRS starts moving VMs, to ensure networking isn’t compromised at the remote end. The switch even runs an open-source Linux kernel, behaving more like a “server” than a traditional switch. Definitely one to look into when next deploying a large-scale VM infrastructure…

Second to the stand was Embotics, a provider of a private cloud management application called V-Commander. This too was very impressive: a self-service portal, change tracking and lifecycle management are all included. On top of that, the interface was web-based, extremely slick, and really did stand out as a very polished and refined product. It even has an option for “expiry” of VMs, forcing the user to request continued access to the VM, and has cost/chargeback included. Highly impressive, and I made sure to have a chat with them and get a USB stick with a demo install pre-loaded so I can take a deeper look for myself.

After a quick break it was over to Vision Solutions for their Double-Take Availability product. I had some preconceptions about this, as I’ve used Double-Take applications in the past and wasn’t that impressed, but this is a replication product that copies machines, with the aid of a “helper” VM, to a secondary destination, and it does seem to be a lot better than the version I used (which, to be honest, was about 4-5 years ago). It can also perform all sorts of migrations (P2V, V2V & V2P) to aid in virtualisation migration projects. Although the interface wasn’t all that great, it was a vast improvement on the consoles I remember, and this product may well be of use for migrations and for geographically diverse replication requirements. It can perform continuous replication, and can also have its bandwidth restricted in order to deal with slow WAN links, at the sacrifice of continual replication. Still, it looks like a good product, though the interface needs some work, and I really don’t understand why it isn’t web-based yet.

After a nice lunch break, with food provided by the VMUG team, it was on to presentations from fellow vGeeks. There were two tracks to choose from, though I admit I was skipping between the two.

The first presentation I attended was an update on the new features of vSphere 5. Some VERY impressive changes are on their way, including VMFS-5 allowing larger-than-2TB datastores (though VMs are still limited to 2TB disks for now), and vSphere 5 introduces a pre-built Linux-based vCenter appliance, making deployment more straightforward. The “traditional” vCenter service is still available, and the appliance will only support Oracle as an external database source, but it ships with an internal PostgreSQL database capable of managing several hosts and a few hundred VMs. Also introduced is the new Web Client, primarily created for managing VMs. It’s got a cut-down feature set compared to the full vSphere Client application, but should do for performing basic tasks.

Another good release is the vSphere Storage Appliance… I’m really interested in seeing this in action. It takes local storage in each ESXi host and lets you use it as shared storage, so you don’t need an expensive SAN solution in place. It’ll also replicate this data across two ESXi hosts so that you have redundancy and can easily perform maintenance on hosts without affecting VMs. It sounds great, and it’ll certainly help SMBs enter the virtualisation space, opening more opportunities for resellers.

There are a lot more changes in vSphere 5 that I won’t delve into here, but I will mention that you can now have a VM with 1TB of RAM… just bear in mind how many CPU licenses you’ll need to run it under the new vRAM-based licensing model…!!!

The second presentation I skipped in favour of taking a look at vCenter Operations Manager. This is in essence a monitoring tool for VMware environments, licensed on a per-VM basis. It’ll monitor hosts as well as VMs and provide root-cause diagnosis to show you exactly where the problem in your environment lies. Unfortunately, due to issues with my laptop, I spent much of the lab trying to get the View client installed and didn’t manage to get a decent look, though from what I was shown it does look like an awesome product, with a comprehensive yet intuitive interface. I’ll have to look at it in further detail when I get 5 minutes, as I think it could be really useful for my client base.

The final presentation discussed PowerCLI and how automation helps you complete tasks sooner. It was held by one of the authors of the PowerCLI Reference book, Jonathan Medd. Having only ever spoken to Jonathan over Twitter (which began when I won the first PowerCLI book competition), it was great to finally meet him. He’s helped me several times with PowerCLI script issues, so it was also good to be able to thank him in person. His presentation showed how to create PowerShell functions, and then how to create modules filled with them. This all made sense to me… having written plenty of PowerCLI and PowerShell scripts that repeat the same code, using functions suddenly made sense, as I could just call them in directly. Adding them all into a module file means it’s even easier to gain access to multiple functions, just by importing one module, saving more time and code per script. He also showed some basics for those not too familiar, including the Get-Help cmdlet, which gives you the comment-based help for any cmdlet in PowerShell, and its “-Examples” switch, which simply outputs example uses of a cmdlet. Overall, a great presentation, filled with laughs, and one now very famous quote (on Twitter at least): “If you can pee, then you can PowerShell!”
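
To give a flavour of the functions-into-modules idea, here’s a minimal sketch of my own (the function name and module file are purely illustrative):

#Save as VMwareTools\VMwareTools.psm1 under a folder on $env:PSModulePath, then:
#  Import-Module VMwareTools
#  Get-Help Get-PoweredOffVM -Examples
function Get-PoweredOffVM {
<#
.SYNOPSIS
Lists all powered-off VMs (assumes an existing PowerCLI connection).
.EXAMPLE
Get-PoweredOffVM | Select-Object Name
#>
    Get-VM | Where-Object { $_.PowerState -eq "PoweredOff" }
}
Export-ModuleMember -Function Get-PoweredOffVM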

The day was rounded off with a final 10-minute discussion about VMware’s new licensing model, and it does seem like the community is split in two… some people are OK and won’t have a problem when it comes to their upgrade, but others are going to need a massive increase in CPU licenses just to cover systems they already own, let alone any future expansion. Whether VMware will change the licensing model before final release to smooth out these issues, or force customers wanting to upgrade to purchase additional licenses, remains to be seen.

And of course… after all that came a trip to the local pub for some vBeers, which was very much enjoyed by all!

Overall, a great experience, and I really hope to make more of these sessions in the future. They’re well worth attending if you use VMware products, both for finding complementary products and for extending your knowledge, and they’re a fab networking opportunity to boot.

Finally, I want to thank the LonVMUG committee again for organising today’s events. If it wasn’t for these volunteers, and of course the vendors, these events simply wouldn’t happen, and it’s fantastic to see a community making such an effort to help each other and promote a product that we all love. I’ll be trying to get some “odd” pictures of the #LonVMUG beer mats soon :)



Hopefully you’ve all read Part One of this series, where I gave examples of gathering information from vCenter, mainly for VMs, so you can recreate your environment from scratch in case you ever suffer a major vCenter database corruption or the like. If you have, sorry part two has taken so long!

Part Two will show how to export information about your ESX(i) hosts, including networking configuration, so that this part of your setup is also easy to recreate. I should note that I’ll be exporting VSS information, as well as Service Console and VMkernel port configuration, all into CSV files.

So… here goes…!

Exporting physical NIC info for the vDS switch

This is a pretty simple script that uses the Get-VMHostPnic function from the DistributedSwitch module I mentioned in part one (thanks again, Luc Dekens :¬)).
Import-Module DistributedSwitch

Write-Host "Getting vDS pNIC Info"

$vdshostfilename = "C:\vdshostinfo.csv"
$pnics = Get-Cluster "ClusterName" | Get-VMHost | Get-VMHostPnic
foreach ($pnic in $pnics) {
    if ($pnic.Switch -eq "dVS-Name") {
        $strpnic = $strpnic + $pnic.Pnic + "," + $pnic.VMhost + "," + $pnic.Switch + "`n"
    }
}
#Writes to CSV file
Out-File -FilePath $vdshostfilename -InputObject $strpnic -Encoding ASCII

Simply change “ClusterName” to match the name of your cluster, and change “dVS-Name” to match that of your dVS (or vDS, whichever you prefer). The exported file will then contain the physical NIC info for your distributed switch.

Next, it’s time to simply get a list of hosts in the cluster. I know, it’s nothing major, but at least it’s in a CSV I can import later, and it makes life much easier!

$cluster = "ClusterName"
$hostfilename = "c:\filename.csv"
Write-Host "Getting Host List"
$hosts = Get-Cluster $cluster | Get-VMHost
foreach ($vmhost in $hosts) {
    $outhost = $outhost + $vmhost.Name + "`n"
}

Out-File -FilePath $hostfilename -InputObject $outhost -Encoding ASCII

Simply put: gather a list of hosts in the cluster called “ClusterName” and output their names to “c:\filename.csv”.

OK, so now that we have that info, all I need to gather is a list of standard switches and their port groups, including IP information, to make life easy… so, here goes:

$vssoutfile = "vssoutfile.csv"
$cluster = "Cluster Name"
$vmhosts = Get-Cluster $cluster | Get-VMHost

$vssout = "Host Name,VSS Name,VSS Pnic,VSS PG" + "`n"
foreach ($vmhost in $vmhosts) {
    $vmhostname = $vmhost.Name
    $switches = Get-VirtualSwitch -VMHost $vmhost
    foreach ($switch in $switches) {
        $vssname = $switch.Name
        $nic = $switch.Nic
        $pgs = Get-VirtualPortGroup -VirtualSwitch $switch
        foreach ($pg in $pgs) {
            $pgname = $pg.Name
            $vssout = $vssout + "$vmhostname" + "," + "$vssname" + "," + "$nic" + "," + "$pgname" + "`n"
        }
    }
}

Out-File -FilePath $vssoutfile -InputObject $vssout -Encoding ASCII
Now we just need the host IPs. At the moment I can’t pull the VMkernel port IPs for ESX hosts, but I can get the Service Console information there, and the VMkernel IPs on ESXi hosts; both come from the same PowerCLI script, which is this one here:

$hostipoutfile = "hostip.csv"
$cluster = "Cluster Name"
$output = "Host Name" + "," + "IP Addresses" + "`n"

$vmhosts = Get-Cluster $cluster | Get-VMHost
foreach ($vmhost in $vmhosts) {
    $vmhostname = $vmhost.Name
    $ips = Get-VMHost $vmhostname | `
        Select-Object @{N="ConsoleIP";E={(Get-VMHostNetwork $_).VirtualNic | ForEach-Object {$_.IP}}}
    $ipaddrs = $ips.ConsoleIP
    $output = $output + "$vmhostname" + "," + "$ipaddrs" + "`n"
}

Out-File -FilePath $hostipoutfile -InputObject $output -Encoding ASCII
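
As a little preview of part three, reading these files back in is trivial with Import-Csv; a quick sketch using the VSS file written above:

$vssinfo = Import-Csv "vssoutfile.csv"
$vssinfo | ForEach-Object { Write-Host $_."Host Name" $_."VSS Name" $_."VSS PG" }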

Now, I’m slowly working on this project in my spare time at work (it’s actually for work, just not as important as everything else I’m doing!), so part 3 is probably some time away. That one will show you how to import all this info back into vCenter to reconfigure your hosts… bear with me, I’ll get it written :)


Jul 13

After yesterday’s announcement of vSphere 5, there’s been a lot of controversy regarding the changes to the licensing programme; some of it is confusing, and some actually makes a lot more sense given the hardware available in modern server architecture. People have been claiming that VMware are simply “taxing” you for memory usage, but I thought I’d better have my two pence worth and try to clear things up a little.

VMware have released this to help people get around the new versions and licensing model; I’d give it a read if I were you!

VMware vSphere 5.0: Licensing, Pricing and Packaging

Firstly, I have to commend VMware on what they’ve done with vSphere 5 licensing. From a design perspective, it was becoming difficult to purchase servers with CPUs that would fit the vSphere 4 licensing model without exceeding its per-socket core limits, as Intel and AMD have released some fantastic new microprocessors in recent times. They saw that the model needed to change, and they changed it. Well done.

Now, the confusing part of the new license model is the talk of “vRAM”. vRAM is essentially the amount of virtual RAM assigned to your (powered-on) VMs. With vSphere 5 licensing you now have what’s called your “vRAM entitlement” or “vRAM capacity”. This is a very simple concept: for each CPU license of vSphere you buy, you are entitled to a certain maximum amount of vRAM. This varies depending on which edition of vSphere you purchase; for example, Enterprise Plus gets 48GB per CPU license.

Now, “used vRAM” is calculated by adding together the RAM of all of your powered-on VMs. It doesn’t include vRAM assigned to templates, or to powered-off or suspended VMs.

Simple? I thought so too.
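
If you fancy a quick and dirty check of your own used vRAM figure, a PowerCLI sketch along these lines should get you close (Luc Dekens has a far more complete script, which I link to further down):

#Adds up the memory of all powered-on VMs and reports it in GB
$usedvram = (Get-VM | Where-Object { $_.PowerState -eq "PoweredOn" } |
    Measure-Object -Property MemoryMB -Sum).Sum / 1KB
Write-Host ("Used vRAM: {0:N1} GB" -f $usedvram)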

OK, so used vRAM and vRAM capacity seem easy enough. There are a few notes here, though, for when you start to cluster your servers…

vCenter will “pool” all vRAM entitlements per license type, so you can’t mix and match licenses within a pool. It’s all Enterprise Plus, or all Enterprise, etc.

Linked vCenter instances will create a single giant pool.

vRAM doesn’t have to be used on a particular host. For example, if I have 2 physical hosts with 2 CPUs each, both with 128GB RAM and Enterprise Plus licenses (4 CPU licenses), I have a 192GB vRAM entitlement but 256GB of physical RAM. OK, so you’re thinking you can’t use 64GB of that RAM, right? Well, officially, no. BUT, as I’m sure everyone does when building a VMware/virtualised environment, we plan for failure…

Let’s say you have those two hosts above. All is running OK, and then one day a server goes offline for some catastrophic reason… great, that’s all you need. Now, HA will start up all your VMs on the remaining host. Great! Hardly any downtime! On the plus side, you’re also still completely legitimately licensed: your vRAM pool is 192GB. Whether it’s used across 4 hosts with a single CPU each, or 2 hosts with dual CPUs, it doesn’t matter; it’s part of the pool. It’s not 48GB per CPU per server, set in stone and unable to move.

Also, let’s look at this another way. We all know that the main limiting factors in virtualisation are storage, networking, CPU and memory, and that too many VMs per CPU core can cause CPU contention and slow down performance; not good. So, even with Enterprise licenses, I’ve got a 32GB vRAM entitlement per CPU. Same setup as above: 2 CPUs per server, 2 servers (4 CPUs). That’s 128GB of vRAM in the pool available for you to use.

If I create a bunch of VMs with 2GB RAM each, that’s 64 VMs maximum. In vSphere 4, Enterprise edition allows 6 cores per CPU. So, let’s say you have quad-core CPUs. That’s 16 cores across the 2 physical servers, or 4 vCPUs per pCPU core. Sounds good, and it’s going to perform pretty well.

Let’s say each of those 64 VMs has 2 vCPUs instead. That means I’m now at 8 vCPUs per pCPU core. Not as good, but probably still going to perform to an average standard. As this, and memory requirements, change over time, things are going to look very different.

As Virtual SMP limits increase, this ratio is going to get higher and higher, and as OS and application requirements grow over time, we’ll need to add more vCPUs to these VMs. As such, CPU contention will become an issue, and you’ll have 2 choices: scale out, or scale up. With the new licensing model, scaling up is now more limited, at least if you can’t add more physical CPUs to your host. This isn’t necessarily a bad thing, though. It provides more failover capacity, wastes less memory across the cluster, and gives DRS more options when it comes to migrating VMs, all increasing the value and performance of your machines.

I’m sure there are some instances where the maths works out badly under the new licensing model, but at the moment, the only way I can get there is if you have lots of “large” VMs using multiple vCPUs and large amounts of RAM (over 4GB).

I’ve checked over a few of my clients’ infrastructures using Luc Dekens’ PowerCLI script, which he’s posted here. They all show the same thing: at their current usage levels they’d all be within their vRAM pool limit. These sites were all designed and configured by me, using my usual design techniques and following best practice as much as possible, so if other people are following similar principles, they’ll hopefully see similar results.

In summary, I like the change. It allows for a new generation of CPU and RAM technology, and sets a licensing model that can be maintained in the future. I can see it being confusing to some in the short term, but once it’s mainstream, and being designed, installed and configured regularly, people should come to see that it is indeed the right way forward.