

Wow, my first blog article for quite some time! Time has been stretched over the last year or so with family and new work commitments, so something had to slip. Hopefully this is the start of me finding more time to blog! I’ve been working on plenty of scripts and other bits and pieces that’ll make some good articles, so fingers crossed they’ll be blogged soon!

I’ve been delving more and more into the world of performance monitoring in relation to VMware vSphere, and CPU Ready time has always been a topic of heated conversation at work… people overcommit CPU resource as if it’s free, but don’t realise the consequences.

To prove a point I’ve made an example of an Exchange server. It runs Exchange 2010 for a business of about 20 users, who also use Lync and SharePoint, so there’s some integration going on too. It’s a fairly busy machine, and was configured with 4 virtual CPUs and a load of RAM (12GB). I’d argued about the configuration of machines like this for some time, trying to explain that more vCPUs can actually mean less CPU time for the VM, but it was falling on deaf ears, so I decided it was time to make a change and prove a point :)

Now, for a very simple overview…

In case you don’t know how CPU scheduling works: regardless of the number of vCPUs granted, or their workloads, ALL of a VM’s vCPUs must be scheduled to run on pCPUs at the same time, even if a vCPU would be idle. So, if you have 4 pCPUs and 3 VMs with a single vCPU each, all is OK; each virtual machine can always get CPU resource, as there will always be enough pCPUs available. Add in a virtual machine with 2 vCPUs, and immediately you’d need 5 pCPUs for all machines to always get pCPU time. Luckily, the VMware scheduler will deal with this and queue pCPU requests. As our new machine will always need time on 2 pCPUs at once, it’s “easier” for VMware to schedule pCPU time to the VMs with 1 vCPU, so they’ll end up getting more CPU time than the 2 vCPU VM. This waiting time is what’s known as CPU Ready time, and when it gets too high, you’ll find your VMs with more vCPUs get slower…

Here’s an example:

This is the previously mentioned Exchange server, with 4 vCPUs. It’s a one-hour capture of both CPU Usage and CPU Ready time:

EX02 4 vCPU

As you can see, CPU ready time was anywhere between 180ms and 1455ms, averaging 565ms. This led to slow CPU response for the machine.

So, looking at the average CPU usage for a couple of months, it was at ~30%. That’s 30% of 4 CPUs… just over a single CPU’s worth. So, 2 vCPUs needed to be removed… and this is the result:

EX02 with 2 vCPU

So, the result? CPU ready time was between 28ms and 578ms, a vast improvement, and averaged just 86ms, far better than 565ms! CPU usage was higher, but then it’s now using more of the CPUs it’s granted, so this was to be expected.
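
Assuming these were taken from the real-time performance chart (which uses 20-second samples), you can turn the millisecond summation values into a rough percentage; and if the chart is showing the whole VM rather than an individual vCPU, that figure is the total across all of them, so divide by the vCPU count for a per-vCPU number:

    CPU Ready % ≈ (CPU Ready in ms ÷ 20,000ms) × 100
    565ms ÷ 20,000ms × 100 ≈ 2.8% (with 4 vCPUs)
    86ms  ÷ 20,000ms × 100 ≈ 0.4% (with 2 vCPUs)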

Now, CPU Ready time on this machine still isn’t great, but I’ve got a lot more VMs to sort through, reducing vCPU allocations, and hopefully it’ll just keep getting better!


Mar 15

Over the last week or so I’ve been trying to get a small VMware lab environment set up so that we can do some testing in-house, as we’ve been needing some kit to test Exchange upgrades and the like before carrying them out on our own network and our customers’ environments. This also gave me the opportunity to play around with iSCSI and the software iSCSI initiator in VMware ESXi.

Although it’s not 100% complete yet, I thought I’d share what I’ve done so far.

Kit List:

2 x Gbps Network Switches
2 x Servers with 64-bit processors and about 8GB RAM, each with 2 NICs. (I used IBM x3650s)
2 x 8GB USB sticks
1 x “Server” with some local storage (I used a 1TB SATA hard disk and an 80GB SATA Hard Disk). This server should have 2 NICs. This could be a PC and doesn’t need to be highly spec’d.

So… once I had the kit together I went and did the following:

  1. I got it all racked up and connected one NIC from each of the three servers to each switch: one switch for iSCSI storage, and the other for production.
  2. I then installed the USB Sticks into each of the ESXi hosts.
  3. After downloading ESXi 4.1U1 as an ISO and burning it to a CD, I then installed it onto each of the USB sticks in the usual manner, making sure that USB was the primary boot option in the BIOS too. I also set the IP address for the data side of the ESXi networking here (VMkernel port) so that I could start to configure the hosts using the vSphere client.

iSCSI Setup

  1. I then had the task of setting up the third server as an iSCSI storage appliance. I’ll explain why I did this on a physical host later (rather than as a VM). So, I installed Debian 5.0 and made sure I didn’t install the “Desktop Environment” (what’s the point in having a GUI on an appliance? It’s just a waste of CPU and RAM resources).
  2. The IP addresses were then set (one on the iSCSI network and the other on the data network). You can do this in /etc/network/interfaces:
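
    As a rough sketch (the interface names and addresses here are examples, not the ones from this lab), a static setup in /etc/network/interfaces looks something like this:

    # example only - substitute your own NICs and IP ranges
    auto eth0
    iface eth0 inet static
        address 192.168.1.50
        netmask 255.255.255.0
        gateway 192.168.1.1

    auto eth1
    iface eth1 inet static
        address 192.168.100.50
        netmask 255.255.255.0

    Then restart networking (/etc/init.d/networking restart) or reboot to apply it.
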
  3. Then came the “difficult” bit… setting up iSCSI, which I’d never done before, let alone on Debian. Firstly, I had to go and download the iSCSI packages from the Debian repositories:

    apt-get install iscsi-target iscsi-modules-`uname -r`

    Note: `uname -r` (with the “`” at each end) replaces itself with the current kernel version number within the command line, i.e. if you were running 2.6.3-444 (that’s a made-up kernel version as far as I know), the command would look like this once the uname command has been taken into account:

    apt-get install iscsi-target iscsi-modules-2.6.3-444

  4. Once that’s downloaded and installed, there are some changes that need to be made to the config files, so edit /etc/default/iscsitarget.

    You can use “nano” to edit the file:

    nano /etc/default/iscsitarget

    and change the line:

    ISCSITARGET_ENABLE=false

    to:

    ISCSITARGET_ENABLE=true

    If you used “nano” to do this, then type “CTRL+X” followed by “Y” then press enter to save and exit the file.

  5. You then need to get a list of all of your disks, and make sure that the disks you want to use don’t currently have any partitions on them. To get a list of disks/partitions, use the following command:

    fdisk -l
  6. This will output something like the following:
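
    The exact figures depend on your hardware, but for an empty, unpartitioned disk it looks roughly like this:

    Disk /dev/sdd: 1000.2 GB, 1000204886016 bytes
    255 heads, 63 sectors/track, 121601 cylinders
    Units = cylinders of 16065 * 512 = 8225280 bytes

    Disk /dev/sdd doesn't contain a valid partition table
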
  7. Here, I have disks “/dev/sdc” and “/dev/sdd”, both at 1TB with no partitions. The important part to remember here is the path to the disk (the /dev/sdx part).
  8. Once you have this info, you can go ahead and configure the iSCSI target. This is done by modifying the following file: /etc/ietd.conf

    Using nano again that’s:

    nano /etc/ietd.conf

  9. Now, you need to add sections for a new target, for the new LUNs to present to that target, and, if need be, the CHAP username and password that the initiator will use to connect with. Scroll to the bottom of the file and add the following lines:

    Target iqn.2011-03.uk.co.spug:ESXi.iSCSI
    IncomingUser Username Password
    Lun 0 Path=/dev/sdd,Type=fileio
    Alias Backup_iSCSI
    MaxConnections 1

    To explain these options a little further:

    The “Target” line is the target name that will appear on the initiator. This name should be unique. The convention is to use the year and month you created it plus your domain name reversed; what comes after the colon can be pretty much anything, and here I’ve chosen to indicate that it’s iSCSI storage for the ESXi hosts.
    The second line is the CHAP authentication. Here you specify the username and password that the initiator must provide in order to connect to the LUNs.
    The third line is the LUN itself. This should ALWAYS start at LUN 0, as per VMware’s storage guidelines. The “Path” option should contain the path to the physical disk from step 7.
    The Alias is a simple friendly name for the target.
    MaxConnections isn’t actually used in this version, but the default setting is 1 (though more than one connection can be initiated at a time).

  10. Save that file to accept the changes.
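
    The config file is only read when the target daemon starts, so restart the service for the new target to appear:

    /etc/init.d/iscsitarget restart
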
  11. That’s pretty much it on the iSCSI front at the moment… My next task is to see if I can enable Jumbo Frames, which should improve performance of the iSCSI storage; I’m just not sure if the switch and NICs I had lying around are capable of it… :-)
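
    If the kit does turn out to support it, the Debian side should just be a case of bumping the MTU on the iSCSI-facing NIC and matching it on the switch and the ESXi side. As a sketch (eth1 is just an example name for the iSCSI NIC):

    ifconfig eth1 mtu 9000

    To make it permanent, add “mtu 9000” to that interface’s stanza in /etc/network/interfaces.
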
  12. Then I tried to figure out how to bind iSCSI to a single NIC. I read lots of articles saying to add a line like “OPTIONS="-a=ip.addr.for.binding"” to the /etc/init.d/iscsitarget file, underneath the line reading “DAEMON=/usr/sbin/ietd”, but I couldn’t get this to work correctly, so it’s still on my “To Do” list. It’s either that, or set the allowed initiators for each target in the “/etc/initiators.allow” file to segregate things off that way, but it’s just not as “clean”!
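
    For what it’s worth, the initiators.allow route looks roughly like this (the ESXi host addresses below are made-up examples):

    # /etc/initiators.allow - only these addresses may connect to this target
    iqn.2011-03.uk.co.spug:ESXi.iSCSI 192.168.100.11, 192.168.100.12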

Setting up iSCSI in ESXi:

  1. Log in to your host using the vSphere client.
  2. Go to the “Host Inventory” view.
  3. Click on the host.
  4. Click “Configuration”.
  5. Click “Networking”.
  6. Add a new vSwitch with a VMkernel port and attach the unused NIC. Set the IP address for the VMkernel port to one in your iSCSI IP range.
  7. Click “Storage Adapters”.
  8. Click “Software iSCSI initiator”.
  9. Click “Properties”.
  10. Click “Configure”.
  11. Enable the iSCSI adapter. This sets the initiator’s IQN. Click OK.
  12. Click “Dynamic Discovery”.
  13. Click “Add”.
  14. Type the IP of the iSCSI appliance you built.
  15. If you enabled the IncomingUser option in /etc/ietd.conf then click CHAP. If you didn’t, skip to step 20.
  16. Uncheck “Inherit from Parent” under the “CHAP” section.
  17. Select “Use CHAP” in the drop down.
  18. Type the username and password you entered in /etc/ietd.conf.
  19. Click OK.
  20. Click OK.
  21. Click Close.
  22. You should be presented with an option to rescan the device. Accept this and ESXi will rescan the iSCSI initiator for LUNs.
  23. Add the newly found storage under the “Storage” settings in the usual manner, choosing Disk/LUN on the first screen.
  24. Repeat these actions on your second ESXi host.
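
If the rescan doesn’t turn up any LUNs, a couple of quick sanity checks are worth doing before digging deeper (the address below is an example; use your appliance’s iSCSI IP). From the ESXi console (Tech Support Mode), check that the VMkernel port can actually reach the appliance, and on the Debian box check that ietd is listening on the iSCSI port:

    vmkping 192.168.100.1
    netstat -an | grep 3260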

Now it’s just a case of setting up a VM to run vCenter and attaching your ESXi hosts as usual, all on iSCSI storage.

Now, I went for a physical iSCSI storage appliance, but you could use something like OpenFiler to add shared storage, and this could also be done with a VM in the same way as above. The reason I chose a separate physical appliance was that I wanted to be able to fully test HA and DRS, and I wouldn’t have that option if my storage was on one of my ESXi hosts, as I wouldn’t be able to turn that host off to simulate power and connectivity failures. If you don’t need this, then OpenFiler, or a purpose-built virtual Linux appliance, would work perfectly well for testing purposes.

The above hasn’t given me the greatest performance ever, but then I didn’t really expect that from a single 7.2K rpm SATA disk over iSCSI on a 1Gbps Ethernet connection. It still gives me the option to test settings and environment changes, and lets me play around with different technologies without potentially damaging any production services.

Next Steps:
Investigate the binding issues
Investigate Jumbo Frames to see if they give better performance

