Category: VMware

  • Hardware Status not displaying on vSphere Client – Fix

    Today I reinstalled my vCenter server so that I could manage both of my vSphere hosts centrally. The installation itself went smoothly, but there was one small blip.

    When connecting to the vCenter server I discovered that I couldn’t browse to the Hardware Status page, yet if I connected directly to the host with the vSphere Client I had no issues.

    A Google search showed that this had been an issue back in 2009, but I hadn’t found much since then. The fixes I did find differed slightly from what I needed, probably because I had installed vCenter onto a Windows Server 2008 R2 SP1 installation.

    To fix the issue on a Windows Server 2008 R2 installation, do the following.

    1. On the vCenter server go to Start – Run – ADSI Edit

    2. Select Connect to... and ensure the following is set under the connection settings:

    – Connection name – vCenter

    – Connection Point – select Distinguished Name and enter “dc=virtualcenter,dc=vmware,dc=int”

    – Computer – Server name: localhost

    – Port: 389

    – Click OK to connect

    3. Once in ADSI Edit, browse to the CN=VIMWEBSVC object, then right-click it and choose Properties.

    Scroll down to the vmw-vc-URL attribute.

    Here we can see that it is populated with a DNS name; we are going to change it to an IP address.

    Click OK and exit ADSI Edit.
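
    If you prefer the command line, here is a minimal PowerShell sketch of the same check, assuming the default connection described above (localhost:389, dc=virtualcenter,dc=vmware,dc=int); the IP-based URL in the commented-out update is purely a placeholder, so keep your existing vmw-vc-URL value and only swap the DNS name for the server’s IP address.

    ```powershell
    # Locate CN=VIMWEBSVC in the vCenter ADAM instance and inspect vmw-vc-URL
    # without browsing in ADSI Edit. Run this on the vCenter server itself.
    $root     = [ADSI]"LDAP://localhost:389/dc=virtualcenter,dc=vmware,dc=int"
    $searcher = New-Object System.DirectoryServices.DirectorySearcher($root, "(cn=VIMWEBSVC)")
    $entry    = $searcher.FindOne().GetDirectoryEntry()

    # Current value - expect a URL containing the DNS name
    $entry.Properties["vmw-vc-URL"].Value

    # To update, uncomment the lines below, keeping your original URL with only
    # the host part replaced by the IP address (the value here is an example):
    # $entry.Properties["vmw-vc-URL"].Value = "https://192.168.0.5/sdk"
    # $entry.CommitChanges()
    ```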

    At this stage I launched Services.MSC and restarted the following two services: VMware VirtualCenter Server and VMware VirtualCenter Management Webservices.
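
    For reference, the same restart can be done from an elevated PowerShell prompt instead of Services.MSC; this is just a sketch, and the display names below are the ones mentioned in this post, so confirm them with Get-Service first.

    ```powershell
    # List the vCenter services, then restart the two discussed above.
    Get-Service -DisplayName "VMware VirtualCenter*" | Format-Table Name, DisplayName, Status

    Restart-Service -DisplayName "VMware VirtualCenter Management Webservices" -Force
    Restart-Service -DisplayName "VMware VirtualCenter Server" -Force
    ```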

    However, when trying to restart the VMware VirtualCenter Server service (which requires a restart of the Webservices service anyway), I found that the VMware VirtualCenter Management Webservices service wouldn’t restart; a simple reboot of the server resolved that.

    Once the vCenter server had restarted (and I had reconnected via the vSphere Client) I went back to the Hardware Status page, which now displayed correctly.

    All fixed 🙂

  • Iomega IX4 v Openfiler Performance Testing

    Running my own home-based lab, I had struggled to work out which storage solution was going to be best for me. I had multiple choices for the type of storage I could use, as I own the following storage-capable hardware: a Buffalo TeraStation Pro 2, an Iomega IX4-200d 2TB and an HP MicroServer running Openfiler 2.3.

    Over the last couple of weeks I have been carrying out various tests to see which device I would use as my NAS\SAN solution and which would end up being the location for my Veeam backups.

    All three devices run software RAID (although I am about to try and fit an IBM M1015 SAS\SATA controller into my HP MicroServer, with the Advanced Feature Key to allow RAID 5 and 50), so the Iomega and the HP were similar where RAID types were concerned. The TeraStation is an already operational device with existing data on it and could only be tested using NFS; it has never really been in contention as a SAN\NAS device for ESXi.

    What I wasn’t sure about was whether I would be better off using RAID 0, 5 or 10 (obviously I am aware of the resilience issues with RAID 0, but I do have to consider its performance as I want to run a small VMware View lab here as well). On top of the RAID type, I also had to decide whether to go down the iSCSI or the NFS route.

    Having read a number of informative blog and forum posts I knew that to satisfy my own thirst for knowledge I was going to have to perform my own lab testing.

    Lab Setup

    OS TYPE: Windows XP SP3 VM on ESXi 4.1 using a 40GB thick-provisioned disk
    CPU Count \ RAM: 1 vCPU, 512MB RAM
    ESXi HOST: Lenovo TS200, 16GB RAM, 1 x Xeon X3440 @ 2.5GHz (a single ESXi 4.1 host with a single running Iometer VM was used during testing).

    STORAGE TYPE

    Iomega IX4-200d 2TB NAS, 4 x 500GB: JBOD – iSCSI, JBOD – NFS, RAID 10 – iSCSI, RAID 10 – NFS, RAID 5 – iSCSI and finally RAID 5 – NFS ** software RAID only **

    Buffalo TeraStation Pro 2, 4 x 1.5TB: RAID 5 – NFS (this is an existing storage device with data already on it, so I could only test with NFS and the existing RAID set; the device isn’t iSCSI enabled).

    HP MicroServer, 2GB RAM, 4 x 1.5TB plus the server’s original 1.6TB disk for the Openfiler OS install: RAID 5 – iSCSI, RAID 5 – NFS, RAID 10 – iSCSI, RAID 10 – NFS, RAID 0 – iSCSI and finally RAID 0 – NFS.

    Storage Hardware: Software based iSCSI and NFS.

    Networking: Netgear GS724T 24-port 1Gb Ethernet switch

    Iometer Test Script

    To allow for consistent results throughout the testing, the following test criteria were followed:

    1. One Windows XP SP3 VM with Iometer was used to measure performance across the three platforms.

    2. I utilised the Iometer script that can be found via the VMTN Storage Performance thread here; the test script was downloaded from here.

    The Iometer script tests the following:-

    TEST NAME: Max Throughput-100%Read

    size,% of size,% reads,% random,delay,burst,align,reply

    32768,100,100,0,0,1,0,0

    TEST NAME: RealLife-60%Rand-65%Read

    size,% of size,% reads,% random,delay,burst,align,reply

    8192,100,65,60,0,1,0,0

    TEST NAME: Max Throughput-50%Read

    size,% of size,% reads,% random,delay,burst,align,reply

    32768,100,50,0,0,1,0,0

    TEST NAME: Random-8k-70%Read

    size,% of size,% reads,% random,delay,burst,align,reply

    8192,100,70,100,0,1,0,0

    Two runs for each configuration were performed to consolidate results.
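
    A quick way to sanity-check the numbers that Iometer produces is to convert IOPS back into MB/s using the block size from each access spec above. Below is a minimal PowerShell sketch; the IOPS figures in it are made up purely for illustration and are not results from my lab.

    ```powershell
    # Sanity-check: MBps = IOPS x block size (KB) / 1024.
    # Block sizes come from the access specs above; the IOPS values are examples only.
    $tests = @(
        @{ Name = 'Max Throughput-100%Read';  BlockKB = 32; Iops = 3000 },
        @{ Name = 'RealLife-60%Rand-65%Read'; BlockKB = 8;  Iops = 900  },
        @{ Name = 'Max Throughput-50%Read';   BlockKB = 32; Iops = 2500 },
        @{ Name = 'Random-8k-70%Read';        BlockKB = 8;  Iops = 850  }
    )

    foreach ($t in $tests) {
        $mbps = [math]::Round($t.Iops * $t.BlockKB / 1024, 1)
        "{0,-26} {1,5} IOPS  ~{2,6} MBps" -f $t.Name, $t.Iops, $mbps
    }
    ```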

    Lab Results

    After a long week or so (not only did I have to test each device twice, I also had to move the VM between devices, which took up time) I came up with the following results.

    Iomega IX4-200D Results

    Openfiler 2.3 Results

    TeraStation Pro II Results

    Conclusions

    Having looked at the results, the overall position is clear: the Iomega IX4-200D is going to be my Veeam backup destination whilst my HP MicroServer is going to be my centralised storage host for ESXi. I now have to decide whether to go for the RAID 0 or RAID 10 iSCSI approach, as they offer the best performance; at this stage I am tempted to go for RAID 10 because the disks in the server aren’t new. Over the next few months I will see how reliable the solution is and take it from there.

    One thing I can add, however, is that over the next couple of days I will be attempting to fit my M1015 RAID controller and seeing how that performs; once fitted, I will redo the Openfiler tests and post an update.

  • Openfiler (running on HP Microserver) or IX4 as my iSCSI Share?

    So I have to make a decision. I am unsure about the speed of the IX4, and on top of that I am only using the 2TB version for iSCSI.

    My MicroServer has been running Openfiler for the last couple of weeks (nothing configured, just installed). Like the IX4 it’s running software RAID; I have 5 SATA drives in the MicroServer, 4 of which are in a software RAID 5 set (4 x 1.5TB).

    I am considering using the IX4 as my backup target for Veeam Backup instead.

    People’s thoughts??

  • My HomeLab – Setup Part 1

    It’s begun: over the weekend I started to put together my-homelab.

    In order to build up the environment detailed on my Home Labs page, I used 3 of my Lenovo TS200 servers, 2 of them with the Xeon X3440 processor (ESXi) and 1 with the Xeon X3460 processor (Hyper-V). These are all quad-core, eight-threaded, single-processor tower servers, in which I’ve installed 16GB of RAM for the ESXi hosts and 8GB for Hyper-V. Only the Hyper-V server has disks installed (4 x 750GB SATA drives in a hardware RAID 5 set, courtesy of the M1015 and Advanced Feature Key, plus an additional 250GB OS disk). The remaining 2 TS200s are utilising the internal USB slot for ESXi.

    Building the Environment

    Over the weekend I finally had everything I needed to put my environment together. I wired up, plugged in and powered up a total of 9 devices that will be used in my home lab.

    3 Lenovo TS200 Servers
    1 Iomega IX4-200d 2TB NAS
    1 HP 8 Port KVM
    1 Netgear GS724T Switch
    1 HP 19in Monitor
    1 Management PC (QX9650 based gaming rig that’s been retired for 6 months)
    1 HP MicroServer

    Using the instructions found in the article “Installing ESXi 4.1 to USB Flash Drive”, I pre-provisioned my 2GB Cruzer Blade USB keys with ESXi and installed them straight into the servers (you have to love VMware Player).

    An additional step in configuring the environment was to ensure that the IP addressing was logical. Because I will be running the entire server infrastructure on my home network, I needed to make sure that I didn’t run out of network addresses (or, more importantly, use DHCP addresses in the server pool).

    I have configured the network as follows.

    192.168.x.2 – 192.168.x.99 – Server and Networking Infrastructure
    192.168.x.100 – 192.168.x.150 – Workstations (DHCP)
    172.16.x.10 – 172.16.x.20 – iSCSI Traffic (and management PC)
    10.x.x.10 – 10.x.x.20 – vMotion Traffic

    The 172.16.x.x and 10.x.x.x networks are going to be VLAN’d to isolate their traffic from the rest of the network.
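
    When it comes to plumbing those networks into ESXi, something along the lines of the PowerCLI sketch below is what I have in mind. The host name, vSwitch, VLAN IDs and the concrete IP addresses are all placeholders, so treat it as an outline rather than my exact configuration.

    ```powershell
    # Requires VMware PowerCLI.
    # Create VMkernel ports for iSCSI and vMotion and tag each onto its own VLAN.
    Connect-VIServer vcenter.my-home.lab               # placeholder vCenter name

    $vmhost  = Get-VMHost "esx1.my-home.lab"           # placeholder host name
    $vswitch = Get-VirtualSwitch -VMHost $vmhost -Name "vSwitch0"

    # iSCSI VMkernel port on the 172.16.x.x storage network
    New-VMHostNetworkAdapter -VMHost $vmhost -VirtualSwitch $vswitch -PortGroup "iSCSI" `
        -IP "172.16.0.11" -SubnetMask "255.255.255.0"
    Get-VirtualPortGroup -VMHost $vmhost -Name "iSCSI" | Set-VirtualPortGroup -VLanId 172

    # vMotion VMkernel port on the 10.x.x.x network
    New-VMHostNetworkAdapter -VMHost $vmhost -VirtualSwitch $vswitch -PortGroup "vMotion" `
        -IP "10.0.0.11" -SubnetMask "255.255.255.0" -VMotionEnabled:$true
    Get-VirtualPortGroup -VMHost $vmhost -Name "vMotion" | Set-VirtualPortGroup -VLanId 10
    ```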

    Building the Storage

    Due to my failure to increase the disk capacity of my Iomega IX4-200d unit, I have had to throw in an additional storage device, so I have changed the role that my MicroServer was going to play (it was going to be a backup server utilising Microsoft DPM). With that in mind I have installed OpenFiler on the MicroServer, a nice and easy installation compared to NexentaStor (which failed to install due to my lack of a CD drive, as I am using the 5th SATA port for another drive).

    Both NAS devices will be configured for iSCSI and will be presented to both ESXi servers.

    I’ll point you to the excellent post on TechHead’s site on configuring OpenFiler for use with vSphere.

    The specifics for the lab are that both the OpenFiler and IX4-200d devices will be connected to the storage LAN (172.16.x.x) and not the main VM LAN. The IX4 device will be used for the VMs whilst the OpenFiler storage will be used for the VM backups that I’ll be doing later.
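
    Once the targets are presented on the storage LAN, pointing the ESXi software iSCSI initiator at them is roughly the PowerCLI below; again just a sketch, with placeholder host and target addresses, and it assumes an existing Connect-VIServer session as in the earlier sketch.

    ```powershell
    # Enable the software iSCSI initiator, add the OpenFiler/IX4 targets, then rescan.
    $vmhost = Get-VMHost "esx1.my-home.lab"                      # placeholder host name

    Get-VMHostStorage -VMHost $vmhost | Set-VMHostStorage -SoftwareIScsiEnabled $true

    $hba = Get-VMHostHba -VMHost $vmhost -Type IScsi
    New-IScsiHbaTarget -IScsiHba $hba -Address "172.16.0.10" -Port 3260   # OpenFiler (placeholder IP)
    New-IScsiHbaTarget -IScsiHba $hba -Address "172.16.0.20" -Port 3260   # IX4-200d (placeholder IP)

    Get-VMHostStorage -VMHost $vmhost -RescanAllHba | Out-Null
    ```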

    The Active Directory Domain Controller will also be installed directly onto the IX4, whilst the vCenter server will be installed onto the Hyper-V server (utilising DAS storage).

    Installing the Active Directory Domain Controller

    VMware’s vCenter Server requires Windows Active Directory as a means of authentication, which means we need a domain controller for the lab. Steering clear of best practice (which calls for at least two domain controllers for resilience), I am going to install just one for the moment. I sized the DC VM to be fairly small: 1 vCPU with 512MB RAM, a 40GB hard disk (thin provisioned) and 1 vNIC connecting to the lab LAN.

    The setup was a standard Windows Server 2008 R2 install, followed by Windows Updates before running dcpromo.

    Host: “WIN-DC01”
    Domain: “MY-HOME.LAB”
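
    For repeatability, dcpromo can also be driven from an answer file; the sketch below shows the sort of thing I mean, using the host and domain names above. The functional levels and the safe mode password are placeholders, so check the DCINSTALL options against the Windows Server 2008 R2 documentation before using it.

    ```powershell
    # Write a dcpromo answer file for the first DC in a new forest, then run it.
    $answers = @(
        '[DCINSTALL]'
        'InstallDNS=Yes'
        'NewDomain=Forest'
        'NewDomainDNSName=MY-HOME.LAB'
        'DomainNetbiosName=MY-HOME'
        'ReplicaOrNewDomain=Domain'
        'ForestLevel=4'                           # 2008 R2 forest functional level (assumption)
        'DomainLevel=4'                           # 2008 R2 domain functional level (assumption)
        'SafeModeAdminPassword=ChangeMe123!'      # placeholder - use your own
        'RebootOnCompletion=Yes'
    )
    $answers | Set-Content C:\dcinstall.txt

    dcpromo /unattend:C:\dcinstall.txt
    ```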

    Next up is the vCenter server. We’ll continue with that journey soon.

  • Goals for 2011

    Well folks, it’s nearly the end of 2010 and it’s been an interesting year.

    I have a number of goals that I want to attain next year; these are:

    * Sit and pass my VCP vSphere 4 Exam
    * Improve my VMware product knowledge (VMware View in particular)
    * Produce more technical content for this site (as well as my-homelab.com)
    * Move more into the VMware Virtualisation platform market and less of the Microsoft one.
    * Become a recognised VMware blogger (I want my vExpert).
    * Continue to improve my home lab environment.

    Obviously, getting some of those out of the way is going to require a lot of hard work and dedication; hopefully I will be able to show everyone that I can do just that.

    Have a great New Year and see you on the flipside.

  • Netgear GS724Tv3

    I had been looking for a new switch to use in my home lab. I needed something that could handle jumbo frames as well as VLANs, and it needed more than 8 ports as I have a large number of machines to connect to it.

    I had initially looked at the Linksys (Cisco) SLM2008 with the idea that I would link them together but decided against that when I saw that there were some deals to be had with the Netgear GS724Tv3.

    After some searching online I managed to find one for £150 inc shipping so I snapped it up.

    Within 30 hours I had it delivered to me 😀

    This will be set up over the next couple of days and will replace my existing switch infrastructure (a Netgear 8 port Prosafe Un-Managed switch).

  • Various VMware Posters

    I have been downloading and printing a lot of documentation recently in preparation for my VCP. Three of the downloads were posters: the vReference card for vSphere 4.1 from Forbes Guthrie’s site, the PowerCLI reference from the VMware Communities PowerCLI site, and Dudley Smith’s Connections and Ports poster.

    Having tried to print these out on A3-sized paper and not liking the results, I decided to get them printed professionally. I am now the proud owner of three A1-sized glossy posters which will be put up on the wall of my office this weekend 😀

  • Hard work – does it pay off?? It appears not.

    Oh well, it looks like hard work doesn’t actually pay off: I am no further forward where virtualising my company is concerned. At the top I have my CIO, who has two very strong-willed people giving her two very distinct views: me, heavily in favour of virtualisation, and the Infrastructure Manager, who is VERY heavily against it.

    I now need to provide very good justification as to why they should spend a lot of money to virtualise the environment when the Infrastructure Manager says it simply doesn’t fit our support\work model.

    With potentially a month left on my contract, do I push further and risk no renewal, or just sit back and accept that, for whatever reason, they simply won’t virtualise and leave it at that?

    Sometimes I feel like I should be banging my head against a brick wall 🙁

  • Hard work – Does it pay off??

    Hopefully I will be finding out more today.

    I am trying to persuade work that virtualisation is definitely the route to go, senior management are buying in but unfortunately junior management aren’t. That means a convincing business case is needed.

    I sure hope that my work with various resellers is going to pay off, otherwise chances are my contract won’t be renewed at the end of the year. If that’s the case, however, it means my VCP will be closer, because I will take that time out to study for it.

    More on this later 🙂