Posts Tagged ‘vSphere’


Wow, first blog article for quite some time! Time has been stretched over the last year or so with family and new work commitments, so something had to slip! So, hopefully this is the start of me finding more time to blog! I’ve been working on plenty of scripts and other bits and pieces that’ll make some good articles, so fingers crossed they’ll be blogged soon!

I’ve been delving more and more into the world of performance monitoring in relation to VMware vSphere, and CPU Ready time has always been a topic of heated conversation at work… people overcommit CPU resources as if they’re free, but don’t realise the consequences.

To prove a point I’ve made an example of an Exchange server. It runs for a business of about 20 users, running Exchange 2010. They also use Lync and SharePoint, so there’s some integration going on too. It’s a fairly busy machine, and was configured with 4 virtual CPUs and a load of RAM (12GB). I’d argued about the configuration of machines like this for some time, trying to explain that more vCPUs may mean less CPU time for the VM, but it was falling on deaf ears, so I decided it was time to make a change and prove a point :)

Now, for a very simple overview…

In case you don’t know how CPU scheduling works: regardless of the number of vCPUs granted, or their workloads, ALL of a VM’s vCPUs must be scheduled to run on pCPUs at the same time, even if a vCPU would be idle. So, if you have 4 pCPUs and 3 VMs each with a single vCPU, all is OK; each virtual machine can always get CPU resource, as there will always be 3 pCPUs available. Add in a virtual machine with 2 vCPUs, and immediately you’d need 5 pCPUs for all machines to always get pCPU time. Luckily, the VMware scheduler will deal with this and queue pCPU requests. As our new machine will always need time on 2 pCPUs, it’s “easier” for VMware to schedule pCPU time to the VMs with 1 vCPU, so they’ll end up getting more CPU time than the 2 vCPU VM. This waiting time is what’s known as CPU Ready time, and when it gets too high, you’ll find your VMs with more vCPUs will get slower…

Here’s an example:

This is the previously mentioned Exchange server, with 4 vCPUs. It’s a one-hour capture of both CPU Usage and CPU Ready time:

EX02 with 4 vCPUs

As you can see, CPU Ready time was anywhere between 180ms and 1455ms, averaging 565ms. This led to slow CPU response for the machine.

Looking at the average CPU usage over a couple of months, it was at ~30%. That’s 30% of 4 vCPUs… just over a single CPU’s worth. So, 2 vCPUs needed to be removed, and this is the result:

EX02 with 2 vCPUs

So, the result? CPU Ready time was between 28ms and 578ms, a vast improvement, and averaged just 86ms, far better than 565ms! CPU usage was higher, but then it’s now using more of the CPUs it’s granted, so this was to be expected.

Now, CPU Ready time on this machine still isn’t great, but I’ve a lot more VMs to sort through, reducing vCPU allocation, and hopefully it’ll just get better!
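
If you want to pull these numbers yourself rather than eyeballing the performance charts, PowerCLI will do it for you. Here’s a minimal sketch, assuming a VM named "EX02" and an existing vCenter connection; the VM name is just a placeholder for your own:

#Gets the last hour of real-time CPU Ready samples for one VM
$vm = Get-VM "EX02"
$stats = Get-Stat -Entity $vm -Stat cpu.ready.summation -Realtime -MaxSamples 180 |
    Where-Object { $_.Instance -eq "" }   #Aggregate value only, not per-vCPU

#Real-time samples are 20 seconds long and the summation is in milliseconds
$avgReadyMs = ($stats | Measure-Object -Property Value -Average).Average
$avgReadyPct = [math]::Round(($avgReadyMs / 20000) * 100, 2)
Write-Host "Average CPU Ready: $([math]::Round($avgReadyMs)) ms per sample ($avgReadyPct %)"

As a rough rule of thumb, sustained ready time above about 5% per vCPU is usually worth investigating, but treat that as a guide rather than gospel.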


Jun 25

I had the need to automate moving about 50 ISO files from one datastore to another during a storage array migration a short while ago, so I wanted to share this script with you all in case you ever find the need for this or similar.

It’s rather simple, and you just need to edit this with the names of your datastores and folder structure (top folder only):

#Sets Old Datastore
$oldds = get-datastore "Old Datastore Name"

#Sets New Datastore
$newds = get-datastore "New Datastore Name"

#Sets ISO Folder Location
$ISOloc = "Subfolder_Name\"

#Map Drives
new-psdrive -Location $oldds -Name olddrive -PSProvider VimDatastore -Root "\"
new-psdrive -Location $newds -Name newdrive -PSProvider VimDatastore -Root "\"
#Copies Files from Old to New
copy-datastoreitem -recurse -item olddrive:\$ISOloc* newdrive:\$ISOloc

Line 2: Change the script to have the name of the datastore you are moving the files FROM.
Line 5: Change the script to have the name of the datastore you are moving the files TO.
Line 8: Change the script to have the name of your ISO subdirectory. Do not remove the “\” unless you have no subfolder.
Lines 11 & 12: Maps PowerShell drives to those datastores.
Line 14: Copies the files.
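
One thing the script assumes is that you’re already connected to vCenter in that PowerCLI session. If you’re not, connect first, and once the copy has finished you can list the new location to check everything landed. A minimal sketch (the vCenter name is just a placeholder, and it reuses the $ISOloc variable from the script above):

#Connect to vCenter before running the copy
connect-viserver -server "vcenter.yourdomain.local"

#Afterwards, list the ISOs on the new datastore to confirm the copy worked
get-childitem newdrive:\$ISOloc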



Full Error:

File <unspecified filename> is larger than the maximum size supported by datastore ‘<unspecified datastore>’

I’ve been coming up against this issue for the last few days whilst installing some backup software for one of our customers. It’s highly frustrating and I couldn’t figure out why this was even happening. The data stores that this particular VM was running on had plenty of free disk space, and none of the disks exceeded the file size maximum for the block size on those disks.

What I didn’t know was, quite simply, that a VM cannot snapshot if its configuration file is stored on a datastore with a smaller block size than one of its virtual hard disks. Now, I presume that this is only the case if the virtual disk size is larger than the supported file size on the configuration file’s datastore.

So, if you come across this problem, just Storage vMotion the configuration file to a datastore with a larger block size, or at least to a datastore with the same block size as your largest virtual disk’s datastore. Run another snapshot, and “Hey Presto!” it should snapshot correctly.
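
If you want to spot the mismatch before it bites, the VMFS block size is exposed through each datastore’s ExtensionData in PowerCLI, so you can list it alongside the capacity. A minimal sketch (property path per the vSphere API; NFS datastores will simply show a blank block size):

#Lists the VMFS block size per datastore so mismatches are easy to spot
get-datastore | select-object Name,
    @{N="BlockSizeMB"; E={ $_.ExtensionData.Info.Vmfs.BlockSizeMb }},
    @{N="CapacityGB"; E={ [math]::Round($_.CapacityMB / 1024, 1) }} |
    sort-object BlockSizeMB | format-table -AutoSize

For reference, on VMFS-3 the 1MB, 2MB, 4MB and 8MB block sizes give maximum file sizes of roughly 256GB, 512GB, 1TB and 2TB respectively, which is why the datastore holding the configuration file matters.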


vSphere5 goes GA!!

posted by Dan Hayward
Aug 25

At last! After being announced last month, vSphere 5 has finally gone GA! Available to download from late last night (UK time), the latest release of VMware’s datacentre hypervisor comes with over 140 new features, and improves on the existing features from vSphere 4.

If you already own vSphere 4 and have a current support agreement, then your licenses will be available to upgrade by the end of the week. Given that a lot of people will probably wait a couple of months before upgrading any production systems, I don’t see this being a problem for most people, and there’s the usual evaluation period in the meantime anyway.

I’ve downloaded my copy, have you got yours? If not, head over here:

http://downloads.vmware.com/d/info/datacenter_cloud_infrastructure/vmware_vsphere/5_0



Today VMware announced that there were going to be changes made to the new licensing model that was announced for the latest edition of their Hypervisor platform, vSphere 5.

All editions of vSphere 5 are to have larger vRAM entitlements than originally stated, with Enterprise Plus now getting 96GB vRAM per CPU license, as well as other editions having their vRAM entitlements increased. Large VMs will be capped at an entitlement maximum of 96GB (even if your VM has 1TB of RAM). This won’t be available at GA, but will be afterwards, with another tool being created so that users can keep track of vRAM entitlement usage easily in the meantime.

More details can be found here:

http://blogs.vmware.com/rethinkit/

I have to say, that I think what VMware has done here is amazing. They’ve realised they needed to change the licensing model, made a choice, and listened to customer feedback. And after all that, they changed the model so that existing users, and new customers, can take more advantage of their hypervisor based on current trends for VM memory sizing. It’s not often that you see this kind of dedication to customers and keeping them happy, especially from large companies. Hats off to you VMware.

UPDATE:

Also just found out that the ESXi vRAM limit is being increased from 8GB to 32GB – much better!



Hopefully you’ve all read Part One of this series, where I provide examples of gathering information from vCenter (mainly for VMs) in order to recreate your environment from scratch, just in case you have a major vCenter database corruption or the like. If you have, sorry part two has taken so long!

Part Two will show how to export information regarding your ESX(i) hosts, including networking information, so that this part of your setup is also easy to recreate. I should note here that I’ll be trying to export VSS information, as well as Service Console and VM Kernel port configuration, and get this all exported into CSV files.

So… Here goes…!

Exporting physical NIC info for the vDS switch

This is a pretty simple script that uses the get-vmhostpnic function from the Distributed Switch module I mentioned in part one (Thanks again Luc Dekens :¬)).
import-module distributedswitch

write-host "Getting vDS pNIC Info"

$vdshostfilename = "C:\vdshostinfo.csv"
$pnics = get-cluster "ClusterName" | get-vmhost | get-vmhostpnic
foreach ($pnic in $pnics) {
    if ($pnic.Switch -eq "dVS-Name") {
        $strpnic = $strpnic + $pnic.pnic + "," + $pnic.VMhost + "," + $pnic.Switch + "`n"
    }
}
#Writes to CSV file
out-file -filepath $vdshostfilename -inputobject $strpnic -encoding ASCII

Simply change “ClusterName” to match that of your cluster, and change “dVS-Name” to match that of your dVS (vDS – whichever). Then the info exported will contain the physical nic info for your distributed switch.

Next it’s time for simply getting a list of hosts in the cluster. I know, it’s nothing major, but at least it’s in a CSV I can import later, and it makes life much easier!!!

$cluster="ClusterName"
$hostfilename = "c:\filename.csv"
write-host "Getting Host List"
$hosts = get-cluster $cluster | get-vmhost
foreach ($vmhost in $hosts) {
    $outhost = $outhost + $vmhost.Name + "`n"
}

out-file -filepath $hostfilename -inputobject $outhost -encoding ASCII

Simply put, gather a list of hosts in the cluster called “ClusterName” and output their names to “c:\filename.csv”

OK, so now that we have that info, all I need to gather is a list of Standard Switches and their port groups, including IP information to make life easy… So, here goes:

$vssoutfile = "vssoutfile.csv"
$cluster = "Cluster Name"
$vmhosts = get-cluster $cluster | get-vmhost

$vssout = "Host Name, VSS Name, VSS Pnic, VSS PG" + "`n"
foreach ($vmhost in $vmhosts) {
    $vmhostname = $vmhost.name
    $switches = get-virtualswitch -vmhost $vmhost
    foreach ($switch in $switches) {
        $vssname = $switch.name
        $Nic = $switch.nic
        $pgs = get-virtualportgroup -virtualswitch $switch
        foreach ($pg in $pgs) {
            $pgname = $pg.name
            $vssout = $vssout + "$vmhostname" + "," + `
                "$vssname" + "," + "$Nic" + "," + `
                "$pgName" + "`n"
        }
    }
}

out-file -filepath $vssoutfile -inputobject $vssout -encoding ASCII

Now we just need the host IPs. At the moment I can’t pull this info for the VM Kernel ports on the ESX hosts, but I can get the Service Console information, and the VM Kernel IP on the ESXi hosts (it’s all pulled by the same PowerCLI script), so here it is:

$hostipoutfile = "hostip.csv"
$cluster = "Cluster Name"
$output = "Host Name" + "," + "IP Addresses" + "`n"

$vmhosts = get-cluster $cluster | get-vmhost
foreach ($vmhost in $vmhosts) {
    $vmhostname = $vmhost.name
    $ips = Get-VMHost $vmhostname | `
        Select @{N="ConsoleIP";E={(Get-VMHostNetwork $_).VirtualNic | `
        ForEach{$_.IP}}}
    $ipaddrs = $ips.ConsoleIP
    $output = $output + "$vmhostname" + "," + "$ipaddrs" + "`n"
}

out-file -filepath $hostipoutfile -inputobject $output -encoding ASCII

Now, I’m slowly working on this project in my spare time at work (it’s actually for work but not as important as everything else I’m doing!), so part 3 is probably going to be some time away, and that’ll show you how to import all this info back into vCenter to reconfigure your hosts… bear with me, I’ll get this written :)


Mar 15

Over the last week or so I’ve been trying to get a small VMware lab environment set up so that we could do some testing in-house, as we’ve been needing some kit to test Exchange upgrades and the like before completing these actions on both our own network and our customers’ environments. This also gave me the opportunity to play around with iSCSI and the software iSCSI initiator in VMware ESXi.

Although it’s not 100% complete yet, I thought I’d share what I’ve done so far.

Kit List:

2 x Gbps Network Switches
2 x Servers with 64-bit processors and about 8GB RAM, each with 2 NICs. (I used IBM x3650s)
2 x 8GB USB sticks
1 x “Server” with some local storage (I used a 1TB SATA hard disk and an 80GB SATA Hard Disk). This server should have 2 NICs. This could be a PC and doesn’t need to be highly spec’d.

So… once I had the kit together I went and did the following:

  1. I got it all racked up and connected one NIC in each of the three servers to each switch. One for iSCSI storage, and the other for production.
  2. I then installed the USB Sticks into each of the ESXi hosts.
  3. After downloading ESXi 4.1U1 as an ISO and burning it to a CD, I then installed it onto each of the USB sticks in the usual manner, making sure that USB was my primary boot option in the BIOS too. I also set the IP address for the data side of the ESXi networking here (VM Kernel port) so that I can start to configure these using the vSphere client.

iSCSI Setup

  1. I then had the task of setting up the third server as an iSCSI storage appliance. I’ll explain why I did this on a physical host later (rather than as a VM). So, I installed Debian 5.0 and made sure I didn’t install the “Desktop Environment” (what’s the point in having a GUI on an appliance? It’s just a waste of CPU and RAM resources).
  2. The IP addresses were then set (one on the iSCSI network and the other on the data network). You can do this in /etc/network/interfaces
  3. Then came the “difficult” bit… setting up iSCSI, which I’d never done before, let alone on Debian. Firstly, I had to go and download the iSCSI apps from the Debian repositories:

    apt-get install iscsi-target iscsi-modules-`uname -r`

    Note: `uname -r` (with the "`" at each end) replaces itself with the current kernel version number within the command line, i.e. if you were running 2.6.3-444 (that’s a made-up kernel edition as far as I know), the command would look like this once the uname command has been taken into account:

    apt-get install iscsi-target iscsi-modules-2.6.3-444

  4. Once that’s downloaded and installed, there are some changes that need to be made to the config files, so edit /etc/default/iscsitarget.

    You can use “nano” to edit the file:

    nano /etc/default/iscsitarget

    and change the line:

    ISCSITARGET_ENABLE=false

    to:

    ISCSITARGET_ENABLE=true

    If you used “nano” to do this, then type “CTRL+X” followed by “Y” then press enter to save and exit the file.

  5. You then need to get a list of all of your disks. You need to make sure that your disks don’t currently have any partitions on them. To get a list of disks/partitions, use the following command:

    fdisk -l
  6. This will output something like the following (screenshot: example “fdisk -l” output).
  7. Here, I have disk “/dev/sdc” and “/dev/sdd” both at 1TB with no partitions. The important part to remember here is the path to the disk (the /dev/sdx part).
  8. Once you have this info, you can go ahead and configure the iSCSI target. This is done by modifying the following file:

    /etc/ietd.conf

    Using nano again that’s:

    nano /etc/ietd.conf

  9. Now, you need to add sections for a new target, for the new LUNs to present to that target and, if needs be, the CHAP username and password that the initiator will use to connect with. Scroll to the bottom of the file and add the following lines:

    Target iqn.2011-03.uk.co.spug:ESXi.iSCSI
    IncomingUser Username Password
    Lun 0 Path=/dev/sdd,Type=fileio
    Alias Backup_iSCSI
    MaxConnections 1

    To explain these options a little further:

    The “Target” line is the target name that will appear on the initiator. This name should be unique. The standard is to use the year and month that you created this, and your domain name backwards. After the colon can be pretty much anything, here I’ve chosen to depict that it’s iSCSI storage for the ESXi hosts.
    The second line is the CHAP authentication. Here you specify the username and password that the initiator will provide in order to connect to the LUNS.
    The third line is the LUN itself. This should ALWAYS start at LUN0 as per VMware’s storage guidelines. The “Path” section should contain the path to the physical disk from step 7.
    The Alias is a simple name for the target.
    Max Connections isn’t actually used on this version, but the default setting is 1 (though more than 1 connection can be initiated at a time).

  10. Save that file to accept the changes (they take effect once the iscsitarget service is restarted; there’s a quick sketch of that just after this list).
  11. That’s pretty much it from the iSCSI front at the moment… My next task is to see if I can enable Jumbo Framing, which would enhance performance of the iSCSI storage; I’m just not sure if the switch and NICs I had lying around are capable at the moment… :-)
  12. Then I tried to figure out how to bind iSCSI to a single NIC. I read lots of articles stating to add a line reading OPTIONS="-a=ip.addr.for.binding" in the /etc/init.d/iscsitarget file underneath the line reading DAEMON=/usr/sbin/ietd, but I couldn’t get this to work correctly, so it’s still on my “To Do” list. It’s either this, or set the allowed initiators for each target in the “/etc/initiators.allow” file to segregate it off that way, but it’s just not as “clean”!
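
For reference, once ietd.conf has been edited this is roughly how I apply the changes and sanity-check them on the Debian box. It’s a minimal sketch, assuming the stock init script and the standard IET /proc interface:

    #Restart the iSCSI Enterprise Target so the /etc/ietd.conf changes take effect
    /etc/init.d/iscsitarget restart

    #List the targets and LUNs the daemon is actually exporting
    cat /proc/net/iet/volume

    #Once an ESXi host has connected, its session shows up here
    cat /proc/net/iet/session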

Setting up iSCSI in ESXi:

  1. Log in to your host using the vSphere client.
  2. Go to the “Host Inventory” view.
  3. Click on the host.
  4. Click “Configuration”.
  5. Click “Networking”.
  6. Add a new vSwitch for Management and add the unused NIC. Set the IP address for the VMKernel port on your iSCSI IP range.
  7. Click “Storage Adapters”.
  8. Click “Software iSCSI initiator”.
  9. Click “Properties”.
  10. Click “Configure”.
  11. Enable the iSCSI adapter. This sets the initiator’s IQN name. Click OK.
  12. Click “Dynamic Discovery”.
  13. Click “Add”.
  14. Type the IP of the iSCSI appliance you built.
  15. If you enabled the IncomingUser option in /etc/ietd.conf then click CHAP. If you didn’t, skip to step 20.
  16. Uncheck “Inherit from Parent” under the “CHAP” section.
  17. Select “Use CHAP” in the drop down.
  18. Type the username and password you entered in /etc/ietd.conf.
  19. Click OK.
  20. Click OK.
  21. Click Close.
  22. You should be presented with an option to rescan the device. Accept this and ESXi will rescan the iSCSI initiator for LUNs.
  23. Add the newly found storage under the “Storage” settings in the usual manner, choosing Disk/LUN on the first screen.
  24. Repeat these actions on your second ESXi host (or script it with PowerCLI, as sketched just below).
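
If you’d rather not click through all of that on every host, the same steps can be scripted with PowerCLI. This is only a rough sketch, assuming a host name, target IP and the CHAP details from earlier (it deliberately skips the vSwitch/VMkernel networking, which I still did by hand):

#Enable the software iSCSI initiator on the host (host name and target IP are placeholders)
$vmhost = get-vmhost "esxi01.yourdomain.local"
get-vmhoststorage -vmhost $vmhost | set-vmhoststorage -SoftwareIScsiEnabled $true

#Add the Debian appliance as a dynamic discovery (Send Targets) address, with CHAP
$hba = get-vmhosthba -vmhost $vmhost -type IScsi
new-iscsihbatarget -IScsiHba $hba -Address "192.168.100.10" -Type Send `
    -ChapType Preferred -ChapName "Username" -ChapPassword "Password"

#Rescan so the new LUNs and VMFS volumes show up
get-vmhoststorage -vmhost $vmhost -RescanAllHba -RescanVmfs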

Now it’s just a case of setting up a VM to run vCenter and attaching your ESX hosts as usual, all on iSCSI storage.

Now, I went for a physical iSCSI storage appliance, but you could use something like OpenFiler to add shared storage, and this could also be done with a VM in the same way as above. The reason I chose a separate physical appliance was that I wanted to be able to fully test HA and DRS, and I wouldn’t have that option if my storage was on one of my physical hosts, as I wouldn’t be able to turn that host off and simulate power and connectivity failures. If you don’t need this, then OpenFiler, or a purpose-built virtual Linux appliance, would work perfectly OK and still work for testing purposes.

The above hasn’t given me the greatest performance ever, but then I didn’t really expect that from a single 7.2K rpm SATA disk over iSCSI on a 1Gbps Ethernet connection. It still gives me the option to test settings and environment changes, and lets me play around with different technologies without potentially damaging any production services.

Next Steps:
Investigate the binding issues
Investigate Jumbo Framing to see if this gives better performance



The week before last I attended the vSphere 4 Design Workshop at QA in Reading and came across something I’ve rarely actually seen in use… vApps. It’s not something that many people pay attention to, I don’t think, but in all honesty they’re pretty awesome when you think about it, even for internal use. In fact, the only place I’ve seen them is when downloading pre-built appliances from the marketplace… They’ve certainly made me re-think a few things…

Imagine this:

You have several ESX hosts running a bunch of virtual machines, and for some reason the power fails in the middle of the night and the UPS systems don’t have enough power to last until you get to the office in the morning (I’m talking worst case here basically, and you should have far more protection than that ideally)…

When you come in the next morning (if you haven’t had a call in the middle of the night), and your systems are finally powered on, you’re going to have to boot each virtual machine to restore the network’s functionality, taking the usual route of Domain Controllers first, then mail servers, file servers, print servers and so on until the network is operational again, each one being booted manually, or via some sort of PowerCLI script perhaps? Well, what if you could make that process 30 times easier? Then go take a look at vApps…

A vApp, for all intents and purposes, is a container of one or more virtual machines. BUT, what you can do with a vApp is specify the boot order of the machines within that vApp… So, for instance, we all know that to boot an Exchange server we need Active Directory and DNS servers to be operational, right?

Well… create a vApp, add the Domain Controllers, DNS servers and Exchange Mailbox Server, as well as the Exchange CAS server (just drag and drop them in the vCenter console). Edit the vApp’s settings and you’ll find a tab called “Start Order”. Here you’ll find some “Groups”, and all of the VMs you added are probably listed in their own group. Make sure that your VMs are listed in the correct order (use the up and down arrows), so that the Domain Controllers are at the top and the mailbox server is at the bottom in this case. Now, if you put two machines in the same group, they’ll boot at the same time; otherwise it’s a top-to-bottom list (and the reverse for shut down). My preference here is to change the settings for each VM so that the next machine will boot once VMware Tools has loaded in the VM, so tick the “VMware Tools are ready” checkbox. Whilst you’re doing this, set the “Shutdown Action” to “Guest Shutdown”.

That’s it… now that the machines are in a vApp and the start order is set, all you have to do is power on the vApp and it’ll then automatically boot each VM in turn, waiting for either 2 minutes to pass (that’s the default which can be changed) or for VMware Tools to be started by the OS. Simple huh?
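
You can also drive this from PowerCLI once the vApp exists, which is handy for scripted or scheduled shutdowns. A minimal sketch, assuming a vApp called "Exchange" (the start order itself is still easiest to set in the GUI):

#Power on the vApp; the VMs inside boot in the start order you configured
get-vapp "Exchange" | start-vapp

#And shut it all down again, in reverse order, following the vApp's stop actions
get-vapp "Exchange" | stop-vapp -confirm:$false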

Now… I hear you say “But I have power on options for when my hosts boot”… yeah, but… what happens when DRS or a manual vMotion moves the VM to another host? Oh yeah, it loses that setting for eternity (or at least until you manually add the rule on the host again)…

Oh… and you can nest vApps too…

Taking the previous example, you may want to segregate Exchange from the Domain Controllers to allow you to easily power on or shut down each type of system separately (for maintenance for example), so just create 3 vApps: one as a “Master”, one for the Domain Controllers, and the third for the Exchange Servers. Populate the latter two with the correct virtual machines, and set the start order and shut down options as before, giving you two vApps that are independent of each other. Now, drag those vApps into the “Master” vApp and set the start order here too, with your DC’s vApp in a group higher than the Exchange Servers vApp. You don’t get the same options here, as the settings from the nested vApps will still apply. You now have an easy method to boot just the domain controllers, just the Exchange Servers, or the whole lot in one click, or shut them down in reverse order too. Nice!

That’s not the only benefit, there’s a couple more…

vApps also give you another security boundary. You can create roles that have access to specific tasks with vApps, so you can give “Power On” rights to a member of the IT Department who may not have any other access, but in an emergency, can still boot specific vApps and therefore boot the VM’s in the correct sequence.

They also have built-in resource pools, so all the usual benefits still apply here too, and yes, you can nest resource pools inside vApps too if you really want or need to!

Now, this does alter the way VM’s appear in the vCenter console, much to my own disappointment in fact. The “Hosts and Clusters” view doesn’t change much, other than the fact that each vApp becomes another level to expand in the console, but, the VM’s and Templates view is changed. Now, in the left hand pane where the VM’s used to reside, you can only see the vApps, and to see which VM’s are in which vApps you have to click on the vApp and then on the “Virtual Machines” tab. Why a vApp in this view doesn’t act as a folder I don’t know, especially when it does in the “Hosts and Clusters” view, which doesn’t usually show folders!!

From a disaster recovery scenario, and from a systems maintenance point of view, I think vApps are fantastic… Being able to boot all of my machines in one click, and also having the option to shut them all down the same way is fantastic, moving servers, or having to shut them down for electrical systems maintenance makes life easier, and that’s the whole idea of virtualization isn’t it?


Sep 14

With the recent(ish) release of vSphere 4.1 comes the task of upgrading ESX/ESXi hosts to this new build. Fortunately, with some of the tools available from VMware (VMware vCenter Update Manager), the host upgrade process is fairly simple, and takes approximately half an hour per host (plus some time afterwards for testing).

The latest version of vCenter however is slightly more difficult for some people… The latest edition requires a 64-bit OS, and more RAM than the previous version. Now RAM is easy, but you can’t “upgrade” a 32-bit OS to a 64-bit OS which can make migrating it slightly difficult!

I’ve got this arduous task to come but here’s my proposed plan…

  1. Check that your physical host (if using one) complies with the HCL for vCenter: http://www.vmware.com/resources/compatibility/
  2. Use your favourite VM conversion tool to get a copy of your current vCenter server as a virtual machine (if it’s physical that is).
  3. Take a backup of your vCenter (and Update Manager if applicable) databases. Also backup your vCenter server’s SSL certificate information as you’ll need this later if you have your own certificates.
  4. Shut down the vCenter physical host and then boot the VM version.
  5. Check that the VM version is working correctly, and then power it down.
  6. Rebuild your vCenter server with a 64-bit OS such as MS Windows Server 2008 R2. If you need to give it the same name in Active Directory then reset the computer account first in AD so that this machine can join in its place.
  7. Create an ODBC connection to the vCenter DB on your DB server (or install it locally and restore the DB). If you were previously using Update Manager, then create the ODBC connector or restore the DB for this too.
  8. Install vCenter 4.1, as well as vCenter Converter, and vCenter Update Manager.
  9. Copy in the SSL Certificates you backed up earlier.
  10. Open the vCenter client and install the plug-ins for Update Manager and the Converter.
  11. Check that all the hosts can be accessed correctly, if not, right-click the host, choose “Connect” and then follow the wizard.
  12. Once that’s done, check through all your client settings, and the settings for Update Manager to check all is normal.

That should be about it, as I said, I’ll be performing this task shortly (about mid September) so I’ll update this entry if anything does need changing! Next it’s time to upgrade the hosts…..
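
Once the new vCenter is up, a quick PowerCLI check saves clicking through every host to confirm step 11 went to plan. A minimal sketch, assuming the vCenter name below is replaced with your own:

#Connect to the rebuilt vCenter and confirm every host has reconnected
connect-viserver -server "vcenter.yourdomain.local"
get-vmhost | select-object Name, ConnectionState, PowerState, Version, Build |
    sort-object Name | format-table -AutoSize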

UPDATE:

Ok, so this wasn’t as easy as I first thought it would be… VUM was a pain, in fact, I ended up creating a new database and scrapping the old one (not too much hassle for me though when my VUM server is sat behind a 1Gbps internet connection…). So, here’s what I did and how I did it, starting with my previous config:

I had a vCenter 4.0U1 server running on Server 2003 Standard x86 with 4GB of RAM. There was also a server running server 2008 R1 x64 running SQL 2005 for the vCenter and VUM databases. All sat on a 1Gbps network.

The intended final result would be a vCenter server running version 4.1 with Windows Server 2008 R2 Standard x64, connected to the same, unaltered, SQL server as before. This sounded really simple!

So, here are the steps I took to rebuild the vCenter server:

  1. I started by using VMware Converter to convert the vCenter host to a VM so that I knew, should all else fail, I’d have a backup copy.
  2. I then backed up the SQL databases so that I knew they were also protected.
  3. I shut down the vCenter server and booted the VM copy to check that this still worked ok.
  4. Once I was happy that this was alright, I shut down the VM and booted the physical server with the 2008 DVD.
  5. Once Windows was installed I did all the usual bits: Windows Updates, teaming the NICs etc. I then reset the computer account in AD and joined the server to the domain.
  6. I then set about recreating the ODBC connections. This became a little bit of a task… vCenter still requires a 32-bit DSN and, having not needed this before, setting it up was a challenge… Anyway, you need to run odbcad32.exe rather than the ODBC tool under Administrative Tools. Simply running this from the Run prompt wasn’t working though, so I ended up running it from C:\Windows\SysWOW64\odbcad32.exe.
  7. I then installed vCenter and the vSphere client, which worked a charm after fixing the ODBC connector. At this point, I wasn’t installing Converter, or Update Manager, I wanted to deal with vCenter first.
  8. If you have your own certificates for vCenter this is when you copy them back in… If you don’t, just reconnect the hosts by right clicking, choosing “Connect” and running through the wizard. Easy. Well, unless (like me) you have a host that won’t join the HA cluster… more on that in another article!
  9. OK, so by this point you can start testing that vCenter is working OK, so I started performing some vMotion operations, and checking through the console that configurations could be changed etc. Once happy that all was OK, I moved on…
  10. So, now I installed Update Manager and VMware Converter.
  11. After this, I logged back into the vSphere client and installed the required plug-ins, then restarted the vSphere client.
  12. This is when the problems struck…. VUM wouldn’t work correctly. I couldn’t scan machines, or hosts for updates. I then noticed that the service wasn’t running in an account that had access to the SQL database… so I changed this, still no luck! In the end, I gave up, created a blank database on SQL, altered the ODBC connection to point to this DB and job done, Update Manager worked again. So, then I kicked off a patch definition download (again, not a problem when you’re sat behind a massive internet connection).

In essence, the migration had gone as planned, the server had been rebuilt, and was using the old database like I was hoping it would. Yes, I had some issues with a host not joining HA correctly, and yes I had to rebuild the VUM database (which let’s face it, isn’t the end of the world unless you have a lot of custom baselines). So, where from here? Oh yes… upgrading the hosts… and of course VMware Tools on each and every VM… that’s a time consuming task when you have over 200 VM’s in a single cluster, especially when there’s only 4 ESX hosts!

So… you can probably guess what will be coming up soon on this blog… yep… Upgrading ESX or ESX(i) 4.0 to 4.1…



I came across this one earlier today, and I must say, I was surprised that this option is available to users without administrative rights to vCenter/ESX or the Virtual Machine… but it would appear that the VMTools application that appears by default in the notification area for any user logged onto the virtual machine allows ANY user to perform any actions within that app… including disconnecting devices such as IDE controllers, but more importantly for TS/XenApp servers… the network card.

There are simple ways to block this, though it takes some effort, especially if you have lots of TS/XenApp servers!

So, there are 3 things you can do to help:

1. Hide the VMware Tools icon in the system tray.
2. Restrict access to the Control Panel applet.
3. Restrict access to the VMWareTray.exe application

I’ll talk you through each one:

Hiding the VMware Tools icon:

This unfortunately isn’t as simple as opening the tools application, and unchecking the “Show VMware Tools in the taskbar” box… this action only applies to the user performing it… not for the whole system, so, we have to manually edit the registry to get this to take effect for all users… Now, don’t forget, editing the registry without knowing what you’re doing can be very dangerous, always backup your system first…

1. Open regedit.exe
2. Browse to the following key:

HKEY_LOCAL_MACHINE\Software\VMware, Inc.\VMware Tools

3. Edit the “ShowTray” value and change it to zero, then click OK.

When you log back into the server, the VMware Tools icon shouldn’t display in the notification area.
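
If you’ve got a lot of servers to do, the same registry change can be pushed out with a couple of lines of PowerShell instead of regedit. A minimal sketch to run on each server, using the key mentioned above (the value already exists, so we’re just flipping it):

#Hide the VMware Tools tray icon for all users on this server
$key = "HKLM:\SOFTWARE\VMware, Inc.\VMware Tools"
set-itemproperty -path $key -name "ShowTray" -value 0

Reboot (or log off and back on) to see the effect, just as with the manual edit.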

Restrict Access to the Control Panel Applet:

You have several options here: this can be done as a local policy (meaning no one, including the administrator, can access the applet) or via a Group Policy which can be filtered to specific users. These instructions are for Windows 2008 R2, but will be very similar for Server 2003 and Server 2008 R1.

1. Open an MMC and either add the Local Policy or Group Policy Management consoles.
2. If using a Group Policy create a new policy and link it to the OU as required.
3. Browse to the following area in the policy:

User Configuration\Administrative Templates\Control Panel

4. Open the “Hide Specified Control Panel items” setting.
5. Click “Enabled”, then click “Show”.
6. In the “Value” field type “VMware Tools” (no quotes). Click OK.
7. Click OK again and close the policy.
8. Reboot the server to test that the Applet is no longer accessible.

Restrict access to the executable:

Even with all of this, the user could (if you don’t restrict access to local disks) find the executable and run it, which will open the GUI for VMware Tools… shame really! So, the other options are to set the file permissions to block the user’s group from accessing these files, or at least allow administrators, domain admins, etc. and the user account that runs the VMware Tools service, and block all other users. Personally, I always hide the local disk from the users, so this part isn’t an issue for me, but there are admins out there that perhaps aren’t as “strict” as me!
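
If you do want to lock the executable down, something like this (run from an elevated prompt) is one way to go about it. Treat it as a sketch: the path is the default VMware Tools install location, and "Users" assumes your TS/XenApp users are only members of the local Users group:

#Deny ordinary users read/execute on the tray application
icacls "C:\Program Files\VMware\VMware Tools\VMwareTray.exe" /deny "Users:(RX)"

Just be careful that whatever group you deny doesn’t also contain your admin or service accounts, as deny entries win.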

And that’s it, one blocked application and no users disconnecting NICs and CD-ROMs etc. whilst the server is in use!