My 1st VMUG experience!

Today I actually managed to get to my first local VMUG meeting (London VMUG). I’d heard some great things about these events and today lived up to my expectations.

There were several vendor-led presentations in the morning, from the likes of Arista, Embotics and Vision Solutions, each presenting their product and giving demos of how it works and fits into cloud/virtualisation environments.

First up was Arista, a networking solutions company, with a showcase of their switches and networking infrastructure equipment. They have some very impressive technology for looking deeper into the virtualised side of the network layer using their application, VM Tracer. This is some impressive kit. It'll even automatically create VLANs on switch ports when VMware DRS starts moving VMs, to ensure networking isn't compromised at the remote end. They've even got an open source Linux kernel running the switch as a "server" rather than a traditional switch. Definitely one to look into when next deploying a large-scale VM infrastructure…

Second to the stand was Embotics, a provider of a private cloud management application called V-Commander. This too was very impressive: with a self-service portal, change tracking and lifecycle management all included, it really stood out. On top of all this, the interface was web based, extremely slick, and came across as a very polished and refined product. It even has an option for "expiry" of VMs, forcing the user to request continued access to the VM, and has cost/chargeback included. Highly impressive, and I made sure to have a chat with them and get a USB stick with a demo install pre-loaded so I can take a deeper look for myself.

After a quick break it was over to Vision Solutions for their Double-Take Availability product. I had some preconceptions about this, as I've used Double-Take applications in the past and wasn't that impressed with them, but this is a replication product that copies machines to a secondary destination with the aid of a "helper" VM, and it does seem to be a lot better than the version I used (which, to be honest, was about 4-5 years ago). It can also perform all sorts of migrations (P2V, V2V & V2P) to aid in virtualisation migration projects. Although the interface wasn't all that great, it was a vast improvement on the consoles I remember seeing, and this product may well be of use for migrations and for geographically diverse replication requirements. It can perform continuous replication, and can also have its bandwidth restricted in order to deal with slow WAN links, at the expense of continuous replication. Still, it looks like a good product, though the interface needs some work, and I really don't understand why it isn't web based yet.

After a nice lunch break, with food provided by the VMUG team, it was on to presentations from fellow vGeeks. There were two tracks to choose from, though I admit I was skipping between the two.

The first presentation I attended was an update on the new features of vSphere 5. Some VERY impressive changes are on their way, including VMFS-5 allowing datastores larger than 2TB (though VMs are still limited to 2TB disks for now), and vSphere 5 introduces a pre-built Linux-based vCenter appliance, standardising deployment. The "traditional" vCenter service is still available. The appliance will only support Oracle as an external database source, but ships with an internal PostgreSQL database capable of managing several hosts and a few hundred VMs. Also introduced is the new Web Client, primarily created for managing VMs. It's got a cut-down feature set compared to the full vSphere Client application, but should do for performing basic tasks.
Another good addition is the vSphere Storage Appliance… I'm really interested in seeing this in action. It takes local storage in each ESXi host and allows you to use it as shared storage, so you don't need an expensive SAN solution in place. It'll also replicate this data across two ESXi hosts so that you have redundancy and can easily perform maintenance on hosts without affecting VMs. It sounds great, and it'll certainly help SMBs enter the virtualisation space, opening more opportunities for resellers.
There are a lot more changes in vSphere 5 that I won't delve into here, but I will mention that you can now have a VM with 1TB of RAM… just bear in mind how many CPU licenses you'll need to run it under the new vRAM-based licensing model…!!!

The second presentation I skipped in favour of taking a look at vCenter Operations Manager. This is in essence a monitoring tool for VMware environments, licensed on a per-VM basis. It'll monitor hosts as well as VMs and provide root-cause diagnosis to show you exactly where the problem in your environment lies. Unfortunately, due to issues with my laptop I spent much of the lab trying to get the View client installed and didn't manage to get a decent look at it, though from what I was shown it does look like an awesome product, with a comprehensive yet intuitive interface. I'll have to look at this in further detail when I get five minutes, as I think it could be really useful for my client base.

The final presentation discussed PowerCLI and how automation helps you complete tasks sooner. It was held by one of the authors of the PowerCLI Reference book, Jonathan Medd. Having only ever spoken to Jonathan over Twitter (which began when I won the first PowerCLI book competition), it was great to finally meet him. He's helped me several times with PowerCLI script issues, so it was also good to be able to thank him in person. His presentation showed how to create PowerShell functions, and then how to create modules filled with them. This all made sense to me: having written plenty of PowerCLI and PowerShell scripts that reuse the same code, functions suddenly clicked, as I could just call them directly. Putting them all into a module file makes things even easier, since importing one module gives you access to multiple functions, saving more time and code per script. He also covered some basics for those less familiar, including the Get-Help cmdlet, which gives you the comment-based help for any cmdlet in PowerShell, and its -Examples switch, which simply outputs example uses of a cmdlet. Overall, a great presentation, filled with laughs, and one now very famous quote (on Twitter at least): "If you can pee, then you can PowerShell!"
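To give a flavour of what Jonathan covered (the function name, module name and contents below are purely my own illustration, not his examples):

# Save this as MyVMTools.psm1 in a folder called MyVMTools under one of the paths in $env:PSModulePath
function Get-PoweredOffVM {
<#
.SYNOPSIS
Lists all powered-off VMs in the connected vCenter.
.EXAMPLE
Get-PoweredOffVM
#>
    Get-VM | Where-Object { $_.PowerState -eq "PoweredOff" }
}

# Then, in any script or session, one import gives you every function in the module:
Import-Module MyVMTools
Get-Help Get-PoweredOffVM -Examples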

The day closed with a final 10-minute discussion about VMware's new licensing model, and it does seem like the community is split in two: some people are OK and don't have a problem when it comes to their upgrade, but others are going to need a massive increase in CPU licenses just to cover systems they already own, let alone any future expansion. Whether VMware will change the licensing model before final release to smooth out these issues, or force customers wanting to upgrade to purchase additional licenses, remains to be seen.

And of course… after all that was a trip to the local pub for some vBeers, which was very much enjoyed by all!

Overall, a great experience, and I really hope to make more of these sessions in the future. They're well worth attending if you use VMware products, both for finding complementary products and extending your knowledge, and they're a fab networking opportunity to boot.

Finally, I want to thank the LonVMUG committee again for organising today's events. If it wasn't for these volunteers, and of course the vendors, these events simply wouldn't happen, and it's fantastic to see a community making such an effort to help each other and promote a product that we all love. I'll be trying to get some "odd" pictures of the #LonVMUG beer mats soon 🙂

Preparing for VMware vCenter Database Disaster – Part Two: Export Host information via PowerCLI

Hopefully you've all read Part One of this series, where I provide examples of gathering information from vCenter, mainly for VMs, in order to recreate your environment from scratch, just in case you have a major vCenter database corruption or the like. If you have, sorry that Part Two has taken so long!
Part Two will show how to export information regarding your ESX(i) hosts, including networking information, so that this part of your setup is also easy to recreate. I should note here that I'll be exporting vSS (standard switch) information, as well as Service Console and VMkernel port configuration, and getting it all into CSV files.
So… Here goes…!
Exporting physical NIC info for the vDS switch
This is a pretty simple script that uses the get-vmhostpnic function from the Distributed Switch module I mentioned in Part One (thanks again Luc Dekens :¬)).
import-module distributedswitch

write-host "Getting vDS pNIC Info"

$vdshostfilename = "C:\vdshostinfo.csv"
$pnics = get-cluster "ClusterName" | get-vmhost | get-vmhostpnic
foreach ($pnic in $pnics) {
    # Only keep pNICs attached to the distributed switch we're interested in
    if ($pnic.Switch -eq "dVS-Name") {
        $strpnic = $strpnic + $pnic.pnic + "," + $pnic.VMhost + "," + $pnic.Switch + "`n"
    }
}
# Writes to CSV file
out-file -filepath $vdshostfilename -inputobject $strpnic -encoding ASCII

Simply change "ClusterName" to match the name of your cluster, and change "dVS-Name" to match that of your dVS (or vDS, whichever you prefer). The exported info will then contain the physical NIC info for your distributed switch.
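Based on the string the script builds, each row of the resulting file is just pNIC, host, switch, so you should end up with something like this (the hostnames and NIC names here are made up for illustration):

vmnic2,esxhost01.mydomain.local,dVS-Name
vmnic3,esxhost01.mydomain.local,dVS-Name
vmnic2,esxhost02.mydomain.local,dVS-Name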

Next, it's time to simply get a list of hosts in the cluster. I know, it's nothing major, but at least it's in a CSV I can import later, and it makes life much easier!!!

$cluster = "ClusterName"
$hostfilename = "c:\filename.csv"
write-host "Getting Host List"
$hosts = get-cluster $cluster | get-vmhost
foreach ($vmhost in $hosts) {
    # Build up one hostname per line
    $outhost = $outhost + $vmhost.Name + "`n"
}

out-file -filepath $hostfilename -inputobject $outhost -encoding ASCII

Simply put: gather a list of hosts in the cluster called "ClusterName" and output their names to "c:\filename.csv".
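As a quick sketch of the import side (which Part Three will cover properly), the file is just one hostname per line, so reading it back in is trivial:

# Read the host list back, skipping the blank trailing line Out-File leaves behind
$hostnames = Get-Content "c:\filename.csv" | Where-Object { $_ }
foreach ($hostname in $hostnames) {
    write-host "Host to reconfigure: $hostname"
}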

OK, so now that we have that info, all I need to gather is a list of standard switches and their port groups (IP information follows in the next script, to make life easy)… So, here goes:

$vssoutfile = "vssoutfile.csv"
$cluster = "Cluster Name"
$vmhosts = get-cluster $cluster | get-vmhost

# CSV header row
$vssout = "Host Name,VSS Name,VSS Pnic,VSS PG" + "`n"
foreach ($vmhost in $vmhosts) {
    $vmhostname = $vmhost.name
    $switches = get-virtualswitch -vmhost $vmhost
    foreach ($switch in $switches) {
        $vssname = $switch.name
        $Nic = $switch.nic
        $pgs = get-virtualportgroup -virtualswitch $switch
        # One row per port group, repeating the host/switch/pNIC details
        foreach ($pg in $pgs) {
            $pgname = $pg.name
            $vssout = $vssout + "$vmhostname" + "," + `
                "$vssname" + "," + "$Nic" + "," + `
                "$pgname" + "`n"
        }
    }
}

out-file -filepath $vssoutfile -inputobject $vssout -encoding ASCII

Now we just need the host IPs. At the moment I can't find this info for VMkernel ports on ESX hosts, but I can get the Service Console information there, as well as the VMkernel IP on ESXi hosts. It's all pulled from the same PowerCLI script, which is this one here:

$hostipoutfile = "hostip.csv"
$cluster = "Cluster Name"
$output = "Host Name" + "," + "IP Addresses" + "`n"

$vmhosts = get-cluster $cluster | get-vmhost
foreach ($vmhost in $vmhosts) {
    $vmhostname = $vmhost.name
    # Grab the IP of each virtual NIC on the host
    $ips = Get-VMHost $vmhostname | `
        Select @{N="ConsoleIP";E={(Get-VMHostNetwork $_).VirtualNic | `
        ForEach{$_.IP}}}
    $ipaddrs = $ips.ConsoleIP
    $output = $output + "$vmhostname" + "," + "$ipaddrs" + "`n"
}

out-file -filepath $hostipoutfile -inputobject $output -encoding ASCII

Now, I'm slowly working on this project in my spare time at work (it's actually for work, but not as important as everything else I'm doing!), so Part Three is probably going to be some time away. That'll show you how to import all this info back into vCenter to reconfigure your hosts… bear with me, I'll get it written 🙂

VMware vSphere 5 Licensing: My Thoughts

After yesterday's announcement of vSphere 5 there's been a lot of controversy regarding the changes to the licensing program. Some of it is confusing, and some actually makes a lot more sense given the hardware available in modern server architecture. People have been claiming that VMware are simply "taxing" you for memory usage, but I thought I'd better have my two pence worth and try to clear things up a little.

VMware have released this document to help people get to grips with the new versions and licensing model; I'd take a read of it if I were you!

VMware vSphere 5.0: Licensing, Pricing and Packaging

Firstly, I have to commend VMware on what they've done with vSphere 5 licensing. From a design perspective it was becoming difficult to purchase servers with CPUs that would fit the vSphere 4 licensing model without exceeding its cores-per-socket limits, as Intel and AMD have released some fantastic new microprocessors in recent times. They saw that the model needed to change, and they've changed it. Well done.

Now, the confusing part of the new license model is the talk of "vRAM". vRAM is essentially the amount of virtual RAM assigned to your (powered-on) VMs. With vSphere 5 licensing you now have what's called your "vRAM entitlement" or "vRAM capacity". This is a very simple concept: for each CPU license of vSphere you buy, you are entitled to a certain maximum amount of vRAM. This varies depending on which edition of vSphere you purchase; for example, Enterprise Plus gives 48GB per CPU license.

Now, "used vRAM" is calculated by adding together the RAM of all of your powered-on VMs. So it doesn't include vRAM assigned to templates, or to powered-off or paused VMs.
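If you want to check your own "used vRAM" figure, a quick PowerCLI snippet along these lines should do it (this is my own rough sketch; Luc Dekens' script mentioned later in this post does the job far more thoroughly):

# Sum the memory of all powered-on VMs, in GB
$usedvram = (Get-VM | Where-Object { $_.PowerState -eq "PoweredOn" } | `
    Measure-Object -Property MemoryMB -Sum).Sum / 1024
write-host "Used vRAM: $usedvram GB"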

Simple? I thought so too.

OK, so used vRAM and vRAM capacity seem easy enough. There are a few notes here, though, for when you start to cluster your servers…

vCenter will "pool" all vRAM entitlements per license type, so you can't mix and match licenses within a pool. It's all Enterprise Plus, or all Enterprise, etc.

Linked vCenter instances will create a single giant pool.

vRAM doesn't have to be used on a particular host. For example, if I have 2 physical hosts with 2 CPUs each, both with 128GB RAM and Enterprise Plus licenses (4 CPU licenses), I have a 192GB vRAM entitlement but 256GB of physical RAM. OK, so you're thinking you can't use 64GB of that RAM, right? Well, officially, no. BUT, as I'm sure everyone does when building VMware/virtualised environments, we plan for failure…

Let's say you have those two hosts above. All is running OK, and one day a server goes offline for some catastrophic reason… great… that's all you need. Now HA will start up all your VMs on the remaining host. Great! Hardly any downtime! On the plus side, you're also still completely legitimately licensed. Your vRAM pool is 192GB. Whether this is used across 4 hosts with single CPUs in each, or 2 hosts with dual CPUs, it doesn't matter: it's part of the pool. It's not 48GB per CPU per server, set in stone, never to be moved.

Also, let's look at this another way. We all know that the main killers of virtualisation are storage, networking, CPU and memory, and that too many VMs per CPU core can cause CPU contention and slow down performance, which is not good. So, even with Enterprise licenses I've got a 32GB vRAM entitlement per CPU. OK, so, same as above: 2 CPUs per server, 2 servers (4 CPUs). That's 128GB of vRAM in the pool available for you to use.

If I create a bunch of VMs with 2GB RAM each, that's 64 VMs maximum. In vSphere 4 with Enterprise edition you can have 6 cores per CPU, so let's say you have quad-core CPUs. That's 16 cores across the 2 physical servers, or 4 vCPUs per pCPU core (assuming one vCPU per VM). Sounds good, and it's going to perform pretty well.

Now let's say that each of those 64 VMs has 2 vCPUs. That means I'm at 8 vCPUs per pCPU core. Not as good, but probably still going to perform to an average standard. As this, and memory requirements, change over time, though, things are going to be very different.
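Putting that worked example into a quick PowerShell calculation (purely illustrative, using the Enterprise numbers above):

$cpulicenses    = 4     # 2 hosts x 2 CPUs
$vramperlicense = 32    # GB per Enterprise CPU license
$vrampool = $cpulicenses * $vramperlicense      # 128GB in the pool
$maxvms   = $vrampool / 2                       # 64 VMs at 2GB vRAM each
$pcpucores = 16                                 # 2 hosts x 2 quad-core CPUs
$vcpuspercore = ($maxvms * 2) / $pcpucores      # 8 vCPUs per core with 2 vCPUs per VM
write-host "$vrampool GB pool, $maxvms VMs, $vcpuspercore vCPUs per pCPU core"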

As Virtual SMP limits have been increased, this ratio is going to get higher and higher, and as OS and application requirements grow over time we'll need to add more vCPUs to these VMs. As such, CPU contention will become an issue, and you'll have two choices: scale out, or scale up. With the new licensing model, scale-up is now more limited, at least if you can't add more physical CPUs to your host. This isn't necessarily a bad thing, though. It provides more failover capacity, wastes less memory across the cluster, and gives DRS more options when it comes to migrating VMs, all of which increases the value and performance of your machines.

I'm sure there are some cases where the maths won't work out well under the new licensing model, but at the moment the only way I can get there is if you have lots of "large" VMs using multiple vCPUs and large amounts of RAM (over 4GB).

I've checked over a few of my clients' infrastructures using Luc Dekens' PowerCLI script, which he's posted here. They all show the same thing: at their current usage levels they'd all be within their vRAM pool limit. These sites were all designed and configured by me, using my usual design techniques and following best practice as much as possible, and if other people are following similar principles, they'll hopefully see similar results.

In summary, I like the change. It allows for a new generation of CPU and RAM technology and sets a licensing model that can be maintained into the future. I can see it being confusing to some in the short term, but once it's mainstream, and being designed, installed and configured regularly, people should come to see that it is indeed the right way forward.