Posts Tagged ‘vSphere 5’


Wow, my first blog article for quite some time! Time has been stretched over the last year or so with family and new work commitments, so something had to slip. Hopefully this is the start of me finding more time to blog – I’ve been working on plenty of scripts and other bits and pieces that’ll make some good articles, so fingers crossed they’ll be blogged soon!

I’ve been delving more and more into the world of performance monitoring in relation to VMware vSphere, and CPU Ready time has always been a topic of heated conversation at work… people over-commit CPU resources as if they’re free, but don’t realise the consequences.

To prove a point I’ve made an example of an Exchange server. It runs Exchange 2010 for a business of about 20 users, who also use Lync and SharePoint, so there’s some integration going on too. It’s a fairly busy machine, and was configured with 4 virtual CPUs and a load of RAM (12GB). I’d argued against the configuration of machines like this for some time, trying to explain that more vCPUs can actually mean less CPU time for the VM, but it was falling on deaf ears, so I decided it was time to make a change and prove a point :)

Now, for a very simple overview…

In case you don’t know how CPU scheduling works: regardless of the number of vCPUs granted, or their workloads, ALL of a VM’s vCPUs must be scheduled to run on pCPUs at the same time, even if a vCPU would be idle. So, if you have 4 pCPUs and 3 VMs with a single vCPU each, all is OK; each virtual machine can always get CPU resource, as there will always be 3 pCPUs available. Add in a virtual machine with 2 vCPUs, and immediately you’d need 5 pCPUs for every machine to always get pCPU time. Luckily, the VMware scheduler will deal with this and queue pCPU requests. As our new machine will always need time on 2 pCPUs at once, it’s “easier” for VMware to schedule pCPU time to the VMs with 1 vCPU, so they’ll end up getting more CPU time than the 2 vCPU VM. This waiting time is what’s known as CPU Ready time, and when it gets too high, you’ll find your VMs with more vCPUs will get slower…
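If you want to pull these figures yourself, here’s a rough PowerCLI sketch of the sort of thing I use (the vCenter name is a placeholder, EX02 is just the example VM from the charts below, and the 20,000ms divisor assumes the default 20-second realtime stats interval):

# Connect to vCenter if you aren't already
Connect-VIServer "your-vcenter"

# Grab one hour of realtime CPU Ready samples (180 x 20 second intervals)
$vm = Get-VM "EX02"
$ready = Get-Stat -Entity $vm -Stat "cpu.ready.summation" -Realtime -MaxSamples 180 |
    Where-Object { $_.Instance -eq "" }   # "" is the VM aggregate, not a single vCPU

# Each sample is the milliseconds spent waiting for pCPU in that 20 second window,
# so ready % = ready ms / 20,000ms x 100 (divide by the vCPU count for a per-vCPU figure)
$ready | Select-Object Timestamp,
    @{N="ReadyMs";E={$_.Value}},
    @{N="ReadyPct";E={[math]::Round(($_.Value / 20000) * 100, 2)}}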

Here’s an example:

This is the previously mentioned Exchange server, with 4 vCPUs. It’s a one-hour capture of both CPU Usage and CPU Ready time:

EX02 4 vCPU

As you can see, CPU Ready time was anywhere between 180ms and 1455ms, averaging 565ms. This led to slow CPU response for the machine.

So, looking at the average CPU usage over the last couple of months, it sat at ~30%. That’s 30% of 4 vCPUs… just over a single CPU’s worth. So, 2 vCPUs needed to be removed… and this is the result:

EX02 with 2 vCPU

So, the result? CPU Ready time was between 28ms and 578ms, a vast improvement, and averaged just 86ms, far better than 565ms! CPU usage was higher, but then it’s now using more of the CPUs it’s granted, so this was to be expected.

Now, CPU Ready time on this machine still isn’t great, but I’ve a lot more VMs to sort through, reducing vCPU allocations, and hopefully it’ll just keep getting better!
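For what it’s worth, here’s a rough sketch of how I’ll be hunting for the next candidates – powered-on VMs with more than one vCPU and a low average CPU usage over the last month (the 40% threshold is just an illustrative cut-off, not a rule):

# List powered-on multi-vCPU VMs with their 30-day average CPU usage
Get-VM | Where-Object { $_.PowerState -eq "PoweredOn" -and $_.NumCpu -gt 1 } |
    ForEach-Object {
        $avg = (Get-Stat -Entity $_ -Stat "cpu.usage.average" -Start (Get-Date).AddDays(-30) |
            Measure-Object -Property Value -Average).Average
        New-Object PSObject -Property @{
            VM        = $_.Name
            vCPUs     = $_.NumCpu
            AvgCpuPct = [math]::Round($avg, 1)
        }
    } |
    Where-Object { $_.AvgCpuPct -lt 40 } |
    Sort-Object AvgCpuPct |
    Format-Table VM, vCPUs, AvgCpuPct -AutoSize

# Once a VM has been shut down, the vCPU count can be dropped with something like:
# Set-VM -VM "EX02" -NumCpu 2 -Confirm:$false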


Jun 25

A short while ago I needed to automate moving about 50 ISO files from one datastore to another during a storage array migration, so I wanted to share the script with you all in case you ever find a need for this or something similar.

It’s rather simple, and you just need to edit this with the names of your datastores and folder structure (top folder only):

# Sets the old (source) datastore
$oldds = Get-Datastore "Old Datastore Name"

# Sets the new (destination) datastore
$newds = Get-Datastore "New Datastore Name"

# Sets the ISO folder location (top-level folder on both datastores)
$ISOloc = "Subfolder_Name\"

# Maps PowerShell drives to both datastores
New-PSDrive -Location $oldds -Name olddrive -PSProvider VimDatastore -Root "\"
New-PSDrive -Location $newds -Name newdrive -PSProvider VimDatastore -Root "\"
# Copies the files from old to new
Copy-DatastoreItem -Recurse -Item olddrive:\$ISOloc* newdrive:\$ISOloc

Line 2: Change the script to have the name of the datastore you are moving the files FROM.
Line 5: Change the script to have the name of the datastore you are moving the files TO.
Line 8: Change the script to have the name of your ISO subdirectory. Do not remove the “\” unless you have no subfolder.
Lines 11 & 12: Map PowerShell drives to those datastores.
Line 14: Copies the files.
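One assumption worth calling out: the script expects you to already be connected to vCenter with PowerCLI, and it won’t tidy up after itself, so something along these lines before and after wouldn’t hurt:

# Connect to vCenter before running the script (server name is a placeholder)
Connect-VIServer "your-vcenter"

# ...then, after the copy, sanity-check what landed on the new datastore
Get-ChildItem newdrive:\$ISOloc

# And remove the temporary PowerShell drives once you're happy
Remove-PSDrive olddrive, newdrive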


Jul 13

After yesterday’s announcement of vSphere 5 there’s been a lot of controversy regarding the changes to the licensing program; some of it is confusing, and some of it actually makes a lot more sense given the hardware available in modern server architecture. People have been claiming that VMware are simply “taxing” you for memory usage, but I thought I’d better have my two pence worth and try to clear things up a little.

VMware have released this to help people get to grips with the new versions and licensing model – I’d take a read of it if I were you!

VMware vSphere 5.0: Licensing, Pricing and Packaging

Firstly, I have to commend VMware on what they’ve done with vSphere 5 licensing. From a design perspective, it was becoming difficult to purchase servers with CPUs that would fit the vSphere 4 licensing model without having too many cores per socket, as Intel and AMD have released some fantastic new microprocessors in recent times. So, they’ve seen that the model needed to change, and they have changed it. Well done.

Now, the confusing part of the new license model is the talk of “vRAM”. vRAM is essentially the amount of virtual RAM assigned to your (powered-on) VMs. With vSphere 5 licensing you now have what’s called your “vRAM entitlement” or “vRAM capacity”. This is a very simple concept. For each CPU license of vSphere you buy, you are entitled to a certain maximum amount of vRAM. This varies depending on which edition of vSphere you purchase. For example, Enterprise Plus has 48GB per CPU license.

Now, “Used vRAM” is calculated by adding together all of the RAM from all of your powered-on VMs. So, it doesn’t include vRAM assigned to templates, or to powered-off or suspended VMs.
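As a very rough illustration of that calculation (a minimal sketch, not a replacement for the proper reporting script I mention further down), the powered-on total is easy enough to get with PowerCLI:

# Sum the RAM assigned to powered-on VMs only - templates and powered-off
# or suspended VMs don't count towards used vRAM
$usedvRAM = (Get-VM | Where-Object { $_.PowerState -eq "PoweredOn" } |
    Measure-Object -Property MemoryMB -Sum).Sum / 1024
"Used vRAM: {0:N0} GB" -f $usedvRAM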

Simple? I thought so too.

OK, so used vRAM and vRAM capacity seem easy enough; there are a few notes here though for when you start to cluster your servers…

vCenter will “pool” all vRAM entitlements per license type. So, you can’t mix and match licenses within a pool. It’s all Enterprise Plus, or all Enterprise etc.

Linked vCenter instances will create a single giant pool.

vRAM doesn’t have to be used on a particular host. For example, if I have 2 physical hosts with 2 CPUs each, both with 128GB RAM and Enterprise Plus licenses (4 CPU licenses in total), I have a 192GB vRAM entitlement but 256GB of physical RAM. OK, so you’re thinking you can’t use that last 64GB of RAM, right? Well, officially, no. BUT, as I’m sure everyone does when building a VMware/virtualized environment, we plan for failure…

Let’s say you have those two hosts above. All is running OK, and one day a server goes offline for some catastrophic reason… great… that’s all you need. Now, HA will start up all your VMs on the remaining host. Great! Hardly any downtime! On the plus side, you’re also still completely legitimately licensed. Your vRAM pool is 192GB. Whether this is used across 4 hosts with a single CPU in each, or 2 hosts with dual CPUs, it doesn’t matter; it’s part of the pool. It’s not 48GB per CPU per server, set in stone and unable to be moved.

Also, let’s look at this another way. We all know that the main killers of virtualization are storage, networking, CPU and memory. We all know that too many VMs per CPU core can cause CPU contention and slow down performance – not good. So, even with Enterprise licenses, I’ve got a 32GB vRAM entitlement per CPU. OK, so, same as above: 2 CPUs per server, 2 servers (4 CPUs in total). That’s 128GB of vRAM in the pool available for you to use.

If I create a bunch of VMs with 2GB RAM each, that’s 64 VMs maximum. In vSphere 4 with Enterprise edition, you can have 6 cores per CPU. So, let’s say you have quad-core CPUs. That’s 16 cores in the 2 physical servers, and with one vCPU per VM that’s 4 vCPUs per pCPU core. Sounds good, and it’s going to perform pretty well.

Let’s say instead that each of those 64 VMs has 2 vCPUs. That means I’m now at 8 vCPUs per pCPU core. Not as good, but probably still going to perform to an average standard. As this, and memory requirements, change over time though, things are going to be very different.
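If you’re curious where your own cluster sits on that scale, a quick back-of-the-envelope check with PowerCLI looks something like this (the cluster name is a placeholder, and I’m treating a host’s NumCpu as its physical core count):

$cluster = Get-Cluster "Your Cluster Name"

# Total physical cores across the hosts in the cluster
$pCores = (Get-VMHost -Location $cluster | Measure-Object -Property NumCpu -Sum).Sum

# Total vCPUs assigned to powered-on VMs in the same cluster
$vCPUs = (Get-VM -Location $cluster | Where-Object { $_.PowerState -eq "PoweredOn" } |
    Measure-Object -Property NumCpu -Sum).Sum

"{0} vCPUs across {1} pCPU cores = {2:N1} vCPUs per core" -f $vCPUs, $pCores, ($vCPUs / $pCores)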

As Virtual SMP limits have been increased, this ratio is going to get higher and higher, and as OS and application requirements increase over time, we’ll need to add more vCPUs to these VMs. As such, CPU contention will become an issue, and you’ll have 2 choices… scale out, or scale up. With the new licensing model, scale-up is now more limited, at least if you can’t add more physical CPUs to your host. This isn’t necessarily a bad thing though: scaling out provides more failover capacity, wastes less memory across the cluster, and gives DRS more options when it comes to migrating VMs, all increasing the value and performance of your machines.

I’m sure there are some instances where the maths shows the new licensing model working out badly, but at the moment, the only way I can get to that is if you have lots of “large” VMs using multiple vCPUs and large amounts of RAM (over 4GB each).

I’ve checked over a few of my clients’ infrastructures using Luc Dekens’ PowerCLI script, which he’s posted here. They all show the same thing: at their current usage levels they’d all be within their vRAM pool limit. These sites were all designed and configured by me, using my usual design techniques and following best practice as much as possible, so if other people are following similar principles, they’ll hopefully see similar results.

In summary, I like the change. It allows for a new generation of CPU and RAM technology, and sets a licensing model that can be maintained in the future. I can see it being confusing to some in the short term, but once it’s mainstream, and being designed, installed and configured regularly, people should come to see that it is indeed the right way forward.

