Jul 14

Just a quick note today. I know it’s been a while since I posted anything on here, but I’ve been extremely busy!!

I came across the need to change the “ComputerNameString” of a VM in SCVMM 2012. A VM had been created from a template, and the admin had used the NetBIOS name of the AD domain when joining it to the domain. When the admin then tried to RDP to the server from the SCVMM console, an error occurred stating that “machinename.NetBIOSDomain” couldn’t be contacted.

Quite quickly, I got a call.

Here’s the simple answer:

Right click on the machine, and choose “Refresh”.

This setting is set when the VM is created, but will update from a refresh as long as guest services are installed. Once the refresh is completed, the “Connect via RDP” option should function again.
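If you have a batch of VMs showing the wrong name, the same refresh can also be scripted from the VMM command shell. This is only a minimal sketch, assuming you’re already connected to the VMM server and the VM is called “MyVM” (a placeholder name):

# "MyVM" is a placeholder; Read-SCVirtualMachine is the scripted equivalent of right-click -> Refresh
Get-SCVirtualMachine -Name "MyVM" | Read-SCVirtualMachine

Piping Get-SCVirtualMachine on its own through Read-SCVirtualMachine will refresh every VM, which can take a while on a busy host.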



Hi Again! Time for part 3 of the series walking you through setting up a 2012 R2 Preview Hyper-V Cluster.

Part 1 was the prep, available here.

Part 2 set up the cluster, and that’s available here.

This time, we’re setting up some VMs and enabling live migration, so that we can test the cluster features.

To start, make sure the three VMs we’ve configured so far are running: the domain controller and the two Hyper-V hosts. Copy the 2012 R2 Preview ISO file onto the cluster storage so that both Hyper-V hosts can access it, then open up Failover Cluster Manager. When adding clustered VMs, you need to add them via Failover Cluster Manager, otherwise they’ll be non-clustered VMs. To add a VM, do the following (there’s a PowerShell sketch after the list if you’d rather script it):

1. In Failover Cluster Manager, right click on “Roles” and then choose “Virtual Machines” followed by “New Virtual Machine”.
2. Choose a node to create the VM on and click OK.
3. Click Next on the first screen, then give your VM a name on the second.
4. Make sure you change the location of the VM to be on your cluster storage. This will be under “C:\ClusterStorage\Volume1\”
5. If you want a backwards-compatible machine, choose “Generation 1”. To see and test all the new features, choose “Generation 2”, then click Next.
6. Give the machine some RAM, and then choose whether or not to enable Dynamic Memory. Click Next.
7. Choose your external switch (or any other switch you’ve created) for its networking, and then click Next.
8. Create a hard disk, making sure its size is less than the size of your iSCSI volume!
9. Now you can choose to install an OS later (at which point you can choose how at boot time), to mount a CD now, or to let the machine know you’ll be booting from a network-based install method. Choose the bootable image option, click Browse, locate your 2012 R2 Preview ISO file and click Next.
10. Click Next.
11. Click Finish.
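If you’d rather script the VM creation, something like this on one of the hosts should give the same end result. It’s only a sketch; the VM name, switch name, ISO path and sizes are all placeholders:

# Create a Generation 2 VM on the cluster storage (names, sizes and paths are placeholders)
New-VM -Name "TestVM01" -Generation 2 -MemoryStartupBytes 2GB -SwitchName "External" -Path "C:\ClusterStorage\Volume1" -NewVHDPath "C:\ClusterStorage\Volume1\TestVM01\TestVM01.vhdx" -NewVHDSizeBytes 40GB

# Attach the 2012 R2 Preview ISO and make it the first boot device
Add-VMDvdDrive -VMName "TestVM01" -Path "C:\ClusterStorage\Volume1\ISO\WS2012R2-Preview.iso"
Set-VMFirmware -VMName "TestVM01" -FirstBootDevice (Get-VMDvdDrive -VMName "TestVM01")

# Make the VM a clustered role
Add-ClusterVirtualMachineRole -VMName "TestVM01"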

Now you can boot your machine; it’ll boot from the CD and you can install the OS.

Once that’s done, you can start playing with Failover. Try switching off one of the hosts (power it off the harsh way even) and see what happens… Use the live migrate option to move the machines around.
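The live migration test can also be scripted from either host, which makes it easy to bounce a VM back and forth. A sketch, with the VM role and node names as placeholders:

# Live migrate the clustered VM role "TestVM01" to node "HV02" (both names are placeholders)
Move-ClusterVirtualMachineRole -Name "TestVM01" -Node "HV02" -MigrationType Live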



Hi Folks,
So over in Part 1, we created our lab systems and installed the roles we needed onto each server. We also configured the domain and joined our Hyper-V and SCOM hosts to it.

Now we’re going to create the iSCSI target on the domain controller, and create the cluster on the Hyper-V hosts.

First, we need to configure the iSCSI target. I’d use a dedicated hard disk for this, so add a second disk (of at least 60GB) to your VM. Once that’s done, open Server Manager on the DC.

Navigate through “File and Storage Services” then “Disks” under “Volumes”. You should have a disk with ID 1, which has an unknown partition, or may not even be initialised. Create a new volume on the disk, which will mark the disk with the GPT info and create a partition. Once complete, click Close.

On the left of Server Manager, now click “iSCSI”. You should see something similar to this:

[Screenshot: iSCSI-blank]

Click on “To Create an iSCSI virtual disk, start the New iSCSI Virtual Disk Wizard”. Choose your newly created volume:

[Screenshot: iscsi-1]

Give your iSCSI disk a name; this one will be used by the Hyper-V cluster for its quorum requirements. Make it a 5GB disk, and if you want to save some disk space, choose “Dynamically Expanding”. You’ll need to create a new iSCSI target, so choose this option and click Next. Give your target a name (mine’s “iSCSI-Target”) and click Next. You’ll need to add some initiators, so click Add, and add them by IP. These IPs will be those you assigned to your Hyper-V hosts. You do this by choosing “Enter a value for the selected type” and choosing “IP Address”:

[Screenshot: iscsi-init]

Enable CHAP auth if you want to; I won’t be, as it’s only a lab. Once complete, click Tasks, then “New iSCSI Virtual Disk”. Run through the wizard again, this time making a 50GB disk. This will be used to store your VMs. This time, you can choose the existing iSCSI target. You’ll end up with something like this:

[Screenshot: iSCSI-end]
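For reference, the whole target setup can also be done with the iSCSI Target cmdlets on the DC. This is just a sketch; the drive letter, file paths and initiator IPs are placeholders for whatever you used above:

# Create the target and allow the two Hyper-V hosts to connect, identified by IP (placeholder addresses)
New-IscsiServerTarget -TargetName "iSCSI-Target" -InitiatorIds "IPAddress:192.168.1.11","IPAddress:192.168.1.12"

# Create the 5GB quorum disk and the 50GB VM storage disk (placeholder paths on the new volume)
New-IscsiVirtualDisk -Path "E:\iSCSIVirtualDisks\Quorum.vhdx" -SizeBytes 5GB
New-IscsiVirtualDisk -Path "E:\iSCSIVirtualDisks\VMStore.vhdx" -SizeBytes 50GB

# Map both disks to the target
Add-IscsiVirtualDiskTargetMapping -TargetName "iSCSI-Target" -Path "E:\iSCSIVirtualDisks\Quorum.vhdx"
Add-IscsiVirtualDiskTargetMapping -TargetName "iSCSI-Target" -Path "E:\iSCSIVirtualDisks\VMStore.vhdx"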

That completes the iSCSI configuration. It’s time to set up the cluster…

We’ll need to add the Failover Clustering feature to the Hyper-V hosts, and the management tools to the DC. If you add the two Hyper-V hosts into Server Manager on the DC, then you can do all of this from one place… much easier!

Under “All Servers” right click on one of your Hyper-V hosts, and choose “Add Roles or Features”. Click next until you reach the features page. Check the box for “Failover Clustering” and click next, keep going and start the install.  Do the same for the second Hyper-V host. Because I want my Hyper-V hosts to be “Server Core” at the end, I’m also going to install the Failover Cluster feature tools onto the DC. This can be done in the same way, but choosing the tools under “Remote Server Admin Tools” then “Feature Admin Tools” in the “Add Roles and Features” wizard.
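If you’d rather not click through the wizard twice, the same installs can be done from a PowerShell prompt on the DC. A sketch, assuming the hosts are called HV01 and HV02 (placeholder names):

# Failover Clustering onto both Hyper-V hosts, pushed remotely from the DC (placeholder host names)
Install-WindowsFeature -Name Failover-Clustering -ComputerName HV01
Install-WindowsFeature -Name Failover-Clustering -ComputerName HV02

# The Failover Clustering management tools locally on the DC
Install-WindowsFeature -Name RSAT-Clustering -IncludeAllSubFeature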

On one of the servers, open the Failover Cluster Manager tool (in the tools menu in Server Manager). Choose “Create Cluster”. Add the two Hyper-V hosts into the cluster wizard:

[Screenshot: Cluster-Hosts]

Allow the tool to run the cluster tests; if any failures appear, resolve them first. After that you’ll need to enter a cluster name and IP address. The name needs to be 15 characters or fewer for NetBIOS purposes. Do that and click Next to create the cluster.
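The validation and creation steps have straightforward PowerShell equivalents too, if you prefer. The cluster name, node names and IP below are placeholders:

# Run the validation tests, then create the cluster if they come back clean (placeholder names and IP)
Test-Cluster -Node HV01, HV02
New-Cluster -Name "HVCLUSTER" -Node HV01, HV02 -StaticAddress 192.168.1.20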

Next, configure the iSCSI initiator on each Hyper-V host, using the quick connect option and specifying the IP of the DC. It should connect and present the two disks. Once both hosts are connected, bring the disks online in Server Manager and create a volume on the 5GB disk as “Q:” (for Quorum). Next, you need to add this as a cluster disk.
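On each host, the quick connect can also be done from PowerShell. A sketch, with the DC’s IP as a placeholder:

# Make sure the initiator service is running and starts automatically
Start-Service MSiSCSI
Set-Service MSiSCSI -StartupType Automatic

# Point the initiator at the DC (placeholder IP) and connect to the targets it presents
New-IscsiTargetPortal -TargetPortalAddress 192.168.1.10
Get-IscsiTarget | Connect-IscsiTarget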

Open up Failover Cluster Manager, and navigate to the Storage->Disks section. Click “Add Disk” and choose the 5GB drive. Once that’s done we need to setup the Quorum witness. Right click on your cluster, choose “More Actions” then “Configure Cluster Quorum Settings”. Click Next, then choose “Select the Quorum Witness”, and click next again. Choose “Configure a Disk Witness” and click next. Choose the 5GB disk, and click next. At the confirmation screen, check over the settings, and click next, then click finish. You should see the disk as “Disk Witness in Quorum” like this:

[Screenshot: Quorum]
Now we can add the 50GB disk as a cluster shared volume. Back in Server Manager, create a volume on the larger disk. I’ll be using V: for mine, as it’s for my VMs. Go back to Failover Cluster Manager and choose “Add Disk” again. Choose the 50GB disk. Once that’s added, right click the disk and choose “Add to Cluster Shared Volumes”. This enables the disk for cluster use.
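The disk witness and CSV steps map to these cmdlets if you want to script them. The cluster disk names below are assumptions; check what your disks are actually called under Storage -> Disks first:

# Add any available disks to the cluster
Get-ClusterAvailableDisk | Add-ClusterDisk

# Use the 5GB disk as the witness and turn the 50GB disk into a Cluster Shared Volume (placeholder disk names)
Set-ClusterQuorum -NodeAndDiskMajority "Cluster Disk 1"
Add-ClusterSharedVolume -Name "Cluster Disk 2"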

Enabling Live Migration… Still within Failover Cluster Manager, choose the “Networks” node. Click “Live Migration Settings” and choose the LAN connection for Live Migration to use. I segregated this from my iSCSI traffic (by using a second network card purely for iSCSI) to make sure that Live Migration and iSCSI didn’t share the same LAN segment.

Next we need to add an external switch to each of the Hyper-V hosts. These need to have the same name. Open up Hyper-V Manager, and choose “Virtual Switch Manager”. Click to add a new virtual switch, and choose an external switch. Make sure you select the LAN NIC, and ensure you choose “Allow management operating system to share this network adapter”. Click OK, and then click Yes to the warning. Complete the exact same actions on the second host, remembering to give the switch exactly the same name.
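One line of PowerShell on each host will do the same job, and makes it easy to keep the names identical. The switch and adapter names here are placeholders; check Get-NetAdapter for the right NIC name:

# Create an external switch bound to the LAN NIC, shared with the management OS (placeholder names)
New-VMSwitch -Name "External" -NetAdapterName "Ethernet" -AllowManagementOS $true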

That’s it! Cluster is built, Hyper-V is ready to run a VM…

Last piece of Part 2 then… removing the GUI from the Hyper-V hosts. This is something new to Server 2012, and can be done via PowerShell, or via the “Remove Roles and Features” wizard in Server Manager. Run through the wizard, selecting the host and clicking Next until you reach the Features page. Uncheck “Graphical Management Tools and Infrastructure” and “Server Graphical Shell”. Removing just the latter will still give you a minimal graphical interface, with no Internet Explorer or File Explorer, amongst other items. Removing the former as well will put the server back into “Server Core” mode. NOTE: Removing the former will remove any additional RSAT tools you’ve installed (such as Hyper-V Manager and Failover Cluster Manager). It will NOT stop those roles from working, it just removes the consoles for local admin of those roles. It will also remove the Windows PowerShell ISE tool.
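For reference, the PowerShell route looks roughly like this (run it on each Hyper-V host; it will reboot the server):

# Remove just the shell for the Minimal Server Interface
Uninstall-WindowsFeature Server-Gui-Shell -Restart

# Or remove both features to go all the way back to Server Core
Uninstall-WindowsFeature Server-Gui-Shell, Server-Gui-Mgmt-Infra -Restart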

Over to part 3 for adding a VM and testing failover… and then part 4 will contain the installation for System Center 2012 R2… both coming soon!



NOTE: This post has been updated, please see the update near the bottom.

Hello out there!

This may seem like an odd post for my blog, considering I’m a VMware VCP, and this blog has had most posts written about VMware, but there’s going to be a lot more Hyper-V related posts coming… so watch this space!

As always, starting a lab is a long-winded project. Machines need to be built, renamed and addressed; domains need to be created and members added; and all of those server roles, apps and updates have to be installed and configured.

Well, this lab is going to have a lot going on… The overview here is that I’m taking a look at the Windows Server 2012 R2 technologies, based on the preview version available from Microsoft. Mostly this will be Hyper-V 2012 R2, mixed with System Center 2012 R2, Virtual Machine Manager and Operations Manager. I may, or may not for now, add the Windows Azure Pack, which is looking like a self-service portal, and part of Microsoft’s “Cloud OS” strategy.

So, here’s the overview of what’s going to be created:

1 x Windows Server 2012 R2 Preview Domain Controller also running the iSCSI target service.
2 x Windows Server 2012 R2 Preview Hyper-V hosts (eventually running Server Core mode).
1 x Windows Server 2012 R2 Preview host, running System Center 2012 R2 for Operations Manager and Virtual Machine Manager.

This will all be running from my Lenovo ThinkPad W530 laptop (16GB RAM, Intel i7 Quad core CPU and 250GB SSD)… with Windows 8, and the built in Hyper-V feature. If you don’t have that feature installed, it can be added using the features options under “Programs and Features” in the Control Panel. Just check the top level box, run the install and reboot a couple of times.
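If you prefer, the same feature can be enabled from an elevated PowerShell prompt; it will still want a reboot or two afterwards:

# Enable the client Hyper-V feature and everything under it
Enable-WindowsOptionalFeature -Online -FeatureName Microsoft-Hyper-V-All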

That’s the first, easy step done :)

Next, we need to download Windows Server 2012 R2 (just search for the preview download in your favourite engine). Sign up, and download the VHD copy with a GUI (to make life easier to start with). Once you’ve got that, extract it to a memorable location. Do the same for System Center 2012 R2 Preview.

Now we need to create a virtual switch… Open up Hyper-V Manager, and click “Virtual Switch Manager”. Choose “Internal” and click “Create Virtual Switch”. Give your switch a name, and click OK. If you don’t know already, an internal switch won’t allow your VMs to reach the physical network; you’d need an “External” switch for that. Because I like to keep my lab environments separate, I chose an internal switch.

In Hyper-V Manager, right click on your computer name. Create a new machine with the spec you want (using the switch you just created) and using one of the VHD files for its first disk. Perform any global changes you want to make to all of your machines; for example, run Windows Update, change the admin password (it’s “R2Preview!” by the way), etc.

Now, shut down the VM, and copy its VHD three times. Make three more VMs via Hyper-V Manager, each using a copy of the first VM’s disk. The four in total will make up the lab environment.

Once you’ve got your four machines, boot the first. This will be the 2012 domain controller. RUN SYSPREP! Make sure you choose the “generalize” option and reboot. Give it a decent name and a static IP address. In the same way you would on Server 2012, install the AD Domain Services, DNS Server and iSCSI Target roles & features, and install the Hyper-V management tools at the same time. Configure AD at the end of the installer, and reboot as necessary.
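If you want to do the DC build from PowerShell instead of the wizards, it boils down to something like this. The domain name is a placeholder, and Install-ADDSForest will prompt you for a DSRM password:

# AD DS, DNS and the iSCSI Target Server role, plus the Hyper-V management tools
Install-WindowsFeature AD-Domain-Services, DNS, FS-iSCSITarget-Server, RSAT-Hyper-V-Tools -IncludeManagementTools

# Promote the server to the first DC in a new forest ("lab.local" is a placeholder domain name)
Install-ADDSForest -DomainName "lab.local"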

Now, boot up your first Hyper-V host and run sysprep again. Give it a static IP address, and join it to the domain. Install the Hyper-V role and reboot, but DO NOT create a virtual switch; that will be set up for the child VMs while we configure Hyper-V clustering. Do the same with your second Hyper-V host.

Lastly, perform the same actions with the last machine (ensure sysprep is run!), but power it off when it’s rebooted… it’ll be a while before we use this machine.

That’s the prep work done, you’ve a working AD environment, and two Hyper-V hosts. You’ve also prepared a system ready for the System Center 2012 R2 installation.

UPDATE: Seems I was pre-emptive in my plan here… you can’t run Hyper-V inside a Hyper-V VM and get a nested VM to boot, so I’ve had to revert to using VMware Workstation to create the first level of VMs, which meant removing Hyper-V from my laptop. It’s a shame really, and something I hope Microsoft resolves in the future (maybe with Windows 8.1?), as creating this sort of lab environment is very common, and it would be nice not to have to run a third-party hypervisor on my laptop!

I’ll see you over at Part 2… (coming soon).



Hello again. It’s been a while, but this little nugget of information couldn’t wait; I want to keep it somewhere I can get to it quickly again!

Having performed many migrations between server operating systems over the years, I’ve always found DHCP to be a pain. When moving between differing versions of Windows, the backup/restore process doesn’t work, especially when you add in the change between x86 and x64. Well, there is an easier way, which I stumbled across today:

1. Install the DHCP Server role on the new 2012 server, but don’t authorize it.
2. On the Windows 2003 DHCP server, open a command prompt and type: netsh dhcp server export C:\dhcp.txt all
3. Copy the file to the new 2012 server.
4. On the new 2012 server, open a command prompt and type: netsh dhcp server import C:\dhcp.txt all
5. Open the DHCP console on the new server and authorize the server with Active Directory.
6. Stop the DHCP service on the old server, and disable it.
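Worth noting that the netsh route above is what covers the 2003-to-2012 jump. If you’re ever moving between two 2012 (or newer) servers, the DhcpServer PowerShell module has native export/import cmdlets which do roughly the same thing:

# On the source 2012 server
Export-DhcpServer -File C:\dhcp.xml -Leases

# On the destination 2012 server, after copying the file across (BackupPath is where it snapshots the existing config)
Import-DhcpServer -File C:\dhcp.xml -Leases -BackupPath C:\DhcpBackup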

Now that’s much easier than manually re-creating a DHCP scope right?!


Apr 19

UPDATE: Checked with Exchange 2010, and this also resolves the issue.

Today I was dealing with a customer who was receiving autodiscover errors each time they opened Outlook. This was all after they’d changed their domain name.

I’d run a script I use to alter all of the host names, to check that they were all correct, then spent considerable time hunting around Exchange for the incorrect setting, and eventually found it.

It’s in the Outlook Anywhere settings, rather than in all of the other URLs my script alters. Usually this URL doesn’t change, so I’d missed it, but as they’d changed their domain name, it also needed altering.

So, if you get SSL mismatch errors, and you think you’ve changed all of the necessary URLs, check this one:

Get-OutlookAnywhere | Select Identity, ExternalHostName

if the External Host Name is incorrect, then adjust it with this:

Get-OutlookAnywhere | Set-OutlookAnywhere -ExternalHostname "insert.hostname.here"

This certainly works with Exchange 2007; I’ve not yet checked it on Exchange 2010, but I believe it uses the same cmdlets.

I’ll write up a full blog post this weekend with a list of all of the places that need to be changed, so that it’s all in one place.


Apr 18

A short post today, just to find all MS Exchange certificates which have expired, filtered by domain name, and remove them from your Exchange server. This should work with Microsoft Exchange 2007 and 2010.

# Prompt for the domain name to filter on
$Domain = Read-Host "Enter Domain Name to Search for (e.g. webmail.domain.tld)"

# Find certificates that have expired and match the domain, and remove them without prompting
Get-ExchangeCertificate | where {$_.NotAfter -lt (Get-Date) -and $_.Subject -like "*$Domain*"} | Remove-ExchangeCertificate -Confirm:$false
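If you’d like to see what it’s going to remove before letting it loose, run the same filter without the removal on the end first:

# Preview the expired, matching certificates before deleting anything
Get-ExchangeCertificate | where {$_.NotAfter -lt (Get-Date) -and $_.Subject -like "*$Domain*"} | Format-List Subject, Thumbprint, NotAfter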


Wow, first blog article for quite some time! Time has been stretched over the last year or so with family and new work commitments, so something had to slip. Hopefully this is the start of me finding more time to blog! I’ve been working on plenty of scripts and other bits and pieces that’ll make some good articles, so fingers crossed they’ll be blogged soon!

I’ve been delving more and more into the world of performance monitoring in relation to VMware vSphere, and CPU Ready time has always been a topic of heated conversation at work… people over-commit CPU resource as if it’s free, but don’t realise the consequences.

To prove a point, I’ve made an example of an Exchange server. It runs for a business of about 20 users, running Exchange 2010. They also use Lync and SharePoint, so there’s some integration going on too. It’s a fairly busy machine, and was configured with 4 virtual CPUs and a load of RAM (12GB). I’d argued about the configuration of machines like this for some time, trying to explain that more vCPUs may mean less CPU time for the VM, but it was falling on deaf ears, so I decided it was time to make a change, and prove a point :)

Now, for a very simple overview…

In case you don’t know how CPU scheduling works: regardless of the number of vCPUs granted, or their workloads, ALL of a VM’s vCPUs must be scheduled to run on pCPUs at the same time, even if a vCPU would be idle. So, if you have 4 pCPUs and 3 VMs with a single vCPU each, all is OK; each virtual machine can always get CPU resource, as there will always be enough pCPUs available. Add in a virtual machine with 2 vCPUs, and immediately you’d need 5 pCPUs for all machines to always get pCPU time. Luckily, the VMware scheduler will deal with this and queue pCPU requests. As our new machine always needs time on 2 pCPUs at once, it’s “easier” for VMware to schedule pCPU time for the VMs with 1 vCPU, so they’ll end up getting more CPU time than the 2 vCPU VM. This waiting time is what’s known as CPU Ready time, and when it gets too high, you’ll find your VMs with more vCPUs get slower…

Here’s an example:

This is the previously mentioned Exchange server, with 4 vCPUs. It’s a one-hour capture of both CPU usage and CPU Ready time:

[Screenshot: EX02 4 vCPU]

As you can see, CPU ready time was anywhere between 180ms and 1455ms, averaging 565ms. This led to slow CPU response for the machine.

So, looking at the average CPU usage for a couple of months, it was at ~30%. That’s 30% of 4 vCPUs… just over a single CPU’s worth. So, 2 vCPUs needed to be removed… and this is the result:

[Screenshot: EX02 with 2 vCPU]

So, the result? CPU ready time was between 28ms and 578ms, a vast improvement, and averaged just 86ms, far better than 565ms! CPU usage was higher, but then it’s now using more of the CPUs it’s granted, so this was to be expected.

Now, CPU Ready time on this machine still isn’t great, but I’ve a lot more VM’s to sort through, reducing vCPU allocation, and hopefully it’ll just get better!
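If you want to pull the same figures for your own VMs, PowerCLI’s Get-Stat will return the raw ready values. A sketch, with “EX02” as a placeholder VM name; the real-time ready figure is milliseconds per 20-second sample, so dividing by 200 gives a rough percentage:

# Last hour of real-time CPU ready samples for one VM ("EX02" is a placeholder name)
$stats = Get-Stat -Entity (Get-VM "EX02") -Stat cpu.ready.summation -Realtime -Start (Get-Date).AddHours(-1)

# Keep the per-VM aggregate (blank instance) and convert ms per 20s sample into a percentage
$stats | where {$_.Instance -eq ""} | select Timestamp, Value, @{N="ReadyPercent";E={[math]::Round($_.Value / 200, 2)}}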


Aug 15

I’ve been struggling with the performance of a ZoneMinder-based CCTV virtual machine for the last couple of weeks. I’ve been receiving CPU usage alerts for it from the vCenter server, and today I’ve finally made some progress, thanks to some hunting on the ZoneMinder forums.

I found this article:
http://www.zoneminder.com/forums/viewtopic.php?f=5&t=6419

It suggests that replacing the libjpeg libraries with those in the forum post allows the library to use the MMX features of the CPU, and thus process the MJPEG video streams more efficiently.

To sum it up, the load averages on this VM (running Debian Squeeze) were roughly 7.53, 6.78 and 6.65, with the CPU idle time at 0.00%. After running the commands below, the CPU idle time is averaging roughly 20%, and the load after an hour or so is now at 2.46, 2.23 and 2.35, so a very effective improvement.

Here are the bash commands I ran to get the replacement jpeg library installed:

# Fetch and unpack the MMX/SIMD-optimised libjpeg source
mkdir /usr/src/libjpeg-simd
cd /usr/src/libjpeg-simd
wget http://cetus.sakura.ne.jp/softlab/jpeg-x86simd/sources/jpegsrc-6b-x86simd-1.02.tar.gz
tar xzvf jpegsrc-6b-x86simd-1.02.tar.gz

# Install the build tools and the nasm assembler needed to compile it
apt-get update
apt-get install build-essential
apt-get install nasm

# Build the shared library
cd j*
./configure --enable-shared
make

# Stop ZoneMinder, install the new library, refresh the linker cache and start it again
/etc/init.d/zoneminder stop
make install
ldconfig
/etc/init.d/zoneminder start

Jun 25

I had the need to automate moving about 50 ISO files from one datastore to another during a storage array migration a short while ago, so I wanted to share this script with you all in case you ever find the need for this or similar.

It’s rather simple, and you just need to edit this with the names of your datastores and folder structure (top folder only):

#Sets Old Datastore
$oldds = get-datastore "Old Datastore Name"

#Sets New Datastore
$newds = get-datastore "New Datastore Name"

#Sets ISO Folder Location
$ISOloc = "Subfolder_Name\"

#Map Drives
new-psdrive -Location $oldds -Name olddrive -PSProvider VimDatastore -Root "\"
new-psdrive -Location $newds -Name newdrive -PSProvider VimDatastore -Root "\"
#Copies Files from Old to New
copy-datastoreitem -recurse -item olddrive:\$ISOloc* newdrive:\$ISOloc

Line 1: Change the script to have the name of the datastore you are moving the files FROM.
Line 5: Change the script to have the name of the datastore you are moving the files TO.
Line 8: Change the script to have the name of your ISO subdirectory. Do not remove the “\” unless you have no subfolder.
Lines 11 & 12: Maps PowerShell drives to those datastores.
Line 14: Copies the files.

