Setting up a small VMware test lab

Over the last week or so I’ve been trying to get a small VMware lab environment set up so that we can do some testing in-house, as we’ve been needing some kit to test Exchange upgrades and the like before carrying them out on both our own network and our customers’ environments. This also gave me the opportunity to play around with iSCSI and the software iSCSI initiator in VMware ESXi.

Although it’s not 100% complete yet, I thought I’d share what I’ve done so far.

Kit List:

2 x Gbps Network Switches
2 x Servers with 64-bit processors and about 8GB RAM, each with 2 NICs. (I used IBM x3650s)
2 x 8GB USB sticks
1 x “Server” with some local storage (I used a 1TB SATA hard disk and an 80GB SATA hard disk). This server should have 2 NICs. This could be a PC and doesn’t need to be highly spec’d.

So… once I had the kit together I went and did the following:

  1. I got it all racked up and connected one NIC from each of the three servers to each switch: one switch for iSCSI storage traffic, the other for production traffic.
  2. I then installed a USB stick into each of the ESXi hosts.
  3. After downloading ESXi 4.1 U1 as an ISO and burning it to a CD, I installed it onto the USB stick in each host in the usual manner, making sure that USB was the primary boot option in the BIOS. I also set the IP address for the data side of the ESXi networking at this point (the management VMkernel port) so that I could start configuring the hosts using the vSphere Client.

iSCSI Setup

  1. I then had the task of setting up the third server as an iSCSI storage appliance; I’ll explain later why I did this on a physical host rather than as a VM. So, I installed Debian 5.0 and made sure I didn’t install the “Desktop Environment” (what’s the point of a GUI on an appliance? It’s just a waste of CPU and RAM).
  2. The IP addresses were then set (one on the iSCSI network and the other on the data network). You can do this in /etc/network/interfaces; there’s a sample file sketched after this list.
  3. Then came the “difficult” bit: setting up iSCSI, which I’d never done before, let alone on Debian. Firstly, I had to go and download the iSCSI target packages from the Debian repositories:

    apt-get install iscsi-target iscsi-modules-`uname -r`

    Note: `uname -r` (with the backtick “`” at each end) is replaced with the current kernel version within the command line, i.e. if you were running kernel 2.6.3-444 (that’s a made-up version as far as I know), the command would look like this once `uname -r` has been expanded:

    apt-get install iscsi-target iscsi-modules-2.6.3-444

  4. Once that’s downloaded and installed, there are some changes that need to be made to the config files, so edit /etc/default/iscsitarget.

    You can use “nano” to edit the file:

    nano /etc/default/iscsitarget

    and change the line:

    ISCSITARGET_ENABLE=false

    to:

    ISCSITARGET_ENABLE=true

    If you used “nano” to do this, then type “CTRL+X” followed by “Y” then press enter to save and exit the file.

  5. You then need to get a list of all of your disks, and make sure that the disks you want to use don’t currently have any partitions on them. To get a list of disks/partitions, use the following command:

    fdisk -l
  6. This will output a list of the disks and partitions that the system can see (there’s an illustrative example of the “fdisk -l” output after this list).
  7. Here, I have disks “/dev/sdc” and “/dev/sdd”, both at 1TB with no partitions. The important part to remember here is the path to the disk (the /dev/sdx part).
  8. Once you have this info, you can go ahead and configure the iSCSI target. This is done by modifying the following file: /etc/ietd.conf

    Using nano again, that’s:

    nano /etc/ietd.conf

  9. Now, you need to add sections for a new target, for the new LUNs to present on that target and, if need be, the CHAP username and password that the initiator will use to connect. Scroll to the bottom of the file and add the following lines:

    Target iqn.2011-03.uk.co.spug:ESXi.iSCSI
    IncomingUser Username Password
    Lun 0 Path=/dev/sdd,Type=fileio
    Alias Backup_iSCSI
    MaxConnections 1

    To explain these options a little further:

    The “Target” line is the target name that will appear on the initiator. This name should be unique. The convention is “iqn.”, then the year and month you created it, then your domain name backwards. What comes after the colon can be pretty much anything; here I’ve chosen to indicate that it’s iSCSI storage for the ESXi hosts.
    The second line is the CHAP authentication. Here you specify the username and password that the initiator will have to provide in order to connect to the LUNs.
    The third line is the LUN itself. This should ALWAYS start at LUN 0 as per VMware’s storage guidelines. The “Path” value should contain the path to the physical disk that you noted earlier (the /dev/sdx part).
    The Alias is a simple friendly name for the target.
    MaxConnections isn’t actually used in this version, but the default setting is 1 (though more than one connection can be initiated at a time).

  10. Save that file to accept the changes, then restart the iscsitarget service (/etc/init.d/iscsitarget restart) so that the new target is picked up.
  11. That’s pretty much it on the iSCSI front for the moment… My next task is to see if I can enable jumbo frames, which should improve the performance of the iSCSI storage; I’m just not sure whether the switch and NICs I had lying around are capable of it at the moment… 🙂
  12. Then I tried to figure out how to bind iSCSI to a single NIC. I read lots of articles saying to add a line reading OPTIONS=”-a=ip.addr.for.binding” to the /etc/init.d/iscsitarget file underneath the line reading DAEMON=/usr/sbin/ietd, but I couldn’t get this to work correctly, so it’s still on my “To Do” list. It’s either this, or set the allowed initiators for each target in the /etc/initiators.allow file to segregate it off that way (there’s a sketch of that after this list), but it’s just not as “clean”!
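
For reference, here’s roughly what the static addressing in /etc/network/interfaces looked like on the Debian box. Treat it as a sketch: the interface names and addresses are illustrative rather than the exact values from my lab, and the mtu line is only relevant once I’ve confirmed the NICs and switch can actually handle jumbo frames.

    # /etc/network/interfaces - illustrative values, not my exact config
    # eth0 = data/production network
    auto eth0
    iface eth0 inet static
        address 192.168.1.10
        netmask 255.255.255.0
        gateway 192.168.1.1

    # eth1 = iSCSI storage network (mtu 9000 only if the NICs and switch support jumbo frames)
    auto eth1
    iface eth1 inet static
        address 192.168.10.10
        netmask 255.255.255.0
        mtu 9000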
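
For anyone who hasn’t seen it before, the output of “fdisk -l” looks something like this for a disk with no partition table (the device name and sizes here are illustrative, not taken from my lab):

    Disk /dev/sdd: 1000.2 GB, 1000204886016 bytes
    255 heads, 63 sectors/track, 121601 cylinders
    Units = cylinders of 16065 * 512 = 8225280 bytes
    Disk identifier: 0x00000000

    Disk /dev/sdd doesn't contain a valid partition table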
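
As for locking the target down to specific initiators, my current thinking is something along these lines in /etc/initiators.allow. It’s an untested sketch with made-up addresses (the IPs would be the iSCSI VMkernel addresses of the two ESXi hosts), and I’d restart the iscsitarget service after editing it just to be safe:

    # /etc/initiators.allow - untested sketch, made-up addresses
    iqn.2011-03.uk.co.spug:ESXi.iSCSI 192.168.10.21, 192.168.10.22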

Setting up iSCSI in ESXi:

  1. Log in to your host using the vSphere client.
  2. Go to the “Host Inventory” view.
  3. Click on the host.
  4. Click “Configuration”.
  5. Click “Networking”.
  6. Add a new vSwitch with a VMkernel port for iSCSI and add the unused NIC to it. Set the IP address for the VMkernel port on your iSCSI IP range.
  7. Click “Storage Adapters”.
  8. Click “Software iSCSI initiator”.
  9. Click “Properties”.
  10. Click “Configure”.
  11. Enable the iSCSI adapter. This sets the initiator’s IQN name. Click OK.
  12. Click “Dynamic Discovery”.
  13. Click “Add”.
  14. Type the IP of the iSCSI appliance you built.
  15. If you enabled the IncomingUser option in /etc/ietd.conf, then click CHAP. If you didn’t, skip to step 20.
  16. Uncheck “Inherit from Parent” under the “CHAP” section.
  17. Select “Use CHAP” in the drop down.
  18. Type the username and password you entered in /etc/ietd.conf.
  19. Click OK.
  20. Click OK.
  21. Click Close.
  22. You should be presented with an option to rescan the device. Accept this and ESXi will rescan the iSCSI initiator for LUNs.
  23. Add the newly found storage under the “Storage” settings in the usual manner, choosing Disk/LUN on the first screen.
  24. Repeat these actions on your second ESXi host.

Now it’s just a case of setting up a VM to run vCenter and adding your ESXi hosts to it as usual, all on iSCSI storage.

Now, I went for a physical iSCSI storage appliance, but you could use something like OpenFiler to add shared storage, and this could also be done with a VM in the same way as above. The reason I chose a separate physical appliance was that I wanted to be able to fully test HA and DRS, and I wouldn’t have that option if my storage was on one of my physical hosts, as I wouldn’t be able to turn that host off to simulate power and connectivity failures. If you don’t need this, then OpenFiler, or a purpose-built virtual Linux appliance, would work perfectly well for testing purposes.

The above hasn’t given me the greatest performance ever, but then I didn’t really expect that from a single 7.2K rpm SATA disk over iSCSI on a 1Gbps Ethernet connection. It still gives me the option to test settings and environment changes, and lets me play around with different technologies without potentially damaging any production services.

Next Steps:
Investigate the NIC binding issue
Investigate jumbo frames to see if they give better performance

vApps in vSphere 4, and why they’re very, very useful

The week before last I attended the vSphere 4 Design Workshop at QA in Reading and came across something I’ve rarely actually seen in use… vApps. They’re not something many people pay attention to, I don’t think, but in all honesty they’re pretty awesome when you think about it, even for internal use. In fact, the only place I’ve seen them is when downloading pre-built appliances from the marketplace… They’ve certainly made me re-think a few things…

Imagine this:

You have several ESX hosts running a bunch of virtual machines, and for some reason the power fails in the middle of the night and the UPS systems don’t have enough power to last until you get to the office in the morning (I’m talking worst case here basically, and you should have far more protection than that ideally)…

When you come in the next morning (if you haven’t had a call in the middle of the night) and your systems are finally powered on, you’re going to have to boot each virtual machine to restore the network’s functionality, taking the usual route of Domain Controllers first, then mail servers, file servers, print servers and so on until the network is operational again, each one booted manually or perhaps via some sort of PowerCLI script. Well, what if you could make that process 30 times easier? Then go and take a look at vApps…

A vApp, for all intents and purposes, is a container of one or more virtual machines. BUT, what you can do with a vApp is specify the boot order of the machines within it… So, for instance, we all know that to boot an Exchange server we need Active Directory and DNS servers to be operational, right?

Well… create a vApp and add the Domain Controllers, DNS servers and Exchange Mailbox server, as well as the Exchange CAS server (just drag and drop them in the vCenter console). Edit the vApp’s settings and you’ll find a tab called “Start Order”. Here you’ll find some “Groups”, and all of the VMs you added are probably listed in their own group. Make sure that your VMs are listed in the correct order (use the up and down arrows), with the Domain Controllers at the top and the mailbox server at the bottom in this case. If you put two machines in the same group they’ll boot at the same time; otherwise it’s a top-to-bottom list (and the reverse for shut down). My preference here is to change the settings for each VM so that the next machine boots once VMware Tools has loaded in the VM, so tick the “VMware Tools are ready” check box. Whilst you’re doing this, set the “Shutdown Action” to “Guest Shutdown”.

That’s it… now that the machines are in a vApp and the start order is set, all you have to do is power on the vApp and it’ll automatically boot each VM in turn, waiting either for 2 minutes to pass (that’s the default, which can be changed) or for VMware Tools to be started by the OS. Simple, huh?

Now… I hear you say “But I have power-on options for when my hosts boot”… yeah, but what happens when DRS or a manual vMotion moves the VM to another host? Oh yeah, it loses that setting for eternity (or at least until you manually add the rule on the new host)…

Oh… and you can nest vApps too…

Taking the previous example, you may want to segregate Exchange from the Domain Controllers so you can easily power on or shut down each type of system separately (for maintenance, for example), so just create three vApps: one as a “Master”, one for the Domain Controllers, and the third for the Exchange servers. Populate the latter two with the correct virtual machines and set the start order and shutdown options as before, giving you two vApps that are independent of each other. Now, drag those vApps into the “Master” vApp and set the start order here too, with your DCs vApp in a group higher than the Exchange servers vApp. You don’t get the same per-VM options here, as the settings from the nested vApps still apply. You now have an easy method to boot just the Domain Controllers, just the Exchange servers, or the whole lot in one click, or shut them down in reverse order too. Nice!

That’s not the only benefit, there are a couple more…

vApps also give you another security boundary. You can create roles that have access to specific tasks on vApps, so you can give “Power On” rights to a member of the IT department who may not have any other access but, in an emergency, can still boot specific vApps and therefore boot the VMs in the correct sequence.

They also have built-in resource pools, so all the usual benefits still apply here too, and yes, you can nest resource pools inside vApps too if you really want or need to!

Now, this does alter the way VMs appear in the vCenter console, much to my disappointment in fact. The “Hosts and Clusters” view doesn’t change much, other than the fact that each vApp becomes another level to expand in the console, but the “VMs and Templates” view is changed. Now, in the left-hand pane where the VMs used to reside, you can only see the vApps, and to see which VMs are in which vApp you have to click on the vApp and then on the “Virtual Machines” tab. Why a vApp in this view doesn’t act as a folder I don’t know, especially when it does in the “Hosts and Clusters” view, which doesn’t usually show folders!

From a disaster recovery standpoint, and from a systems maintenance point of view, I think vApps are fantastic… Being able to boot all of my machines in one click, and having the option to shut them all down the same way, is great; it makes moving servers, or shutting them down for electrical systems maintenance, that much easier, and that’s the whole idea of virtualization, isn’t it?