Category: Virtualisation

  • Iomega IX4 Remote Access… or not!!

    The Iomega IX4 is a handy little unit that comes bundled with a whole load of loveliness; well, it would do if that loveliness actually worked correctly.

    Over the weekend I purchased a subscription from TZO to allow remote access to my IX4 8TB unit. Included in that purchase was a one-year SSL subscription for the domain name My-HomeNas.com; unfortunately, said SSL certificate has never downloaded to the NAS. I am now in the position where not only does the NAS throw up errors all the time, but I also can’t re-enable Remote Access on it.

    Every time I try to enable Remote Access using the Paid Subscription (Already Own) settings, the NAS just sits there with a processing screen. A support desk ticket over at TZO resulted in a reply and the ticket being closed (even though the email asked me to reply to them once the NAS had been rebooted), leaving me with the task of wiping the NAS again to see if that would allow me to configure Remote Access once more.

    I am currently left with a NAS device that is trying to rebuild the disk structure (clearing the config on the unit), but unfortunately it looks as if that is going to take days to complete.

    Overall I am not that impressed with the Iomega NAS line. So far I have had my first unit turn up DOA, the second one wouldn’t let me upgrade the disks (I know, that isn’t factory supported anyway), and now the 8TB unit won’t let me enable Remote Access successfully. The only time it did work over the weekend (with the certificate error), the device kept stalling when trying to remotely access a 50MB file, whereas using the same net connection on a WHS server allowed the file to be retrieved very quickly (I love my 50/5Mb up/down speeds :D).

  • TS200 Server – Kernel Debug errors 🙁

    Well, relying on a single Win 2008 R2 server for Hyper-V is proving to be a bit of a problem for me.

    Over the last couple of weeks I have installed my virtualised environment onto three of my four TS200 servers. I am running two ESXi boxes, with the third machine running Win 2008 R2; my DC runs on the ESXi cluster and my vCenter server runs on the Hyper-V machine.

    All was good for the first couple of days; however, over the weekend I started experiencing issues with my vSphere Client dropping its connection to the VC. Upon closer examination I discovered that my Hyper-V box is hitting kernel debug errors and power-cycling. It is doing this every 24 to 36 hours and I haven’t managed to figure out why (that may be a lie, as I have seen some disk errors in the event logs that could be the cause, but a little investigation so far shows no problems with the disks).

    I may well have to trash the Hyper-V side of the server and move everything over to vSphere (which I know I should have done in the first place).

    First things first, however: I will be moving the vCenter server off my RAID 5 array (hardware RAID using the Lenovo/LSI RAID controller with the hardware feature key), removing the controller from the server in its entirety, and powering the server up for a couple of days to see if the array is the problem.

  • Openfiler (running on HP Microserver) or IX4 as my iSCSI Share?

    So I have a decision to make. I am unsure about the speed of the IX4, and on top of that I am only using the 2TB version for iSCSI.

    My MicroServer has been running Openfiler for the last couple of weeks (nothing configured, just installed). Like the IX4 it is running software RAID: I have five SATA drives in the MicroServer, four of which (4 x 1.5TB) are in a software RAID 5 array.

    I am considering using the IX4 as my backup target for Veeam Backup instead.

    People’s thoughts?

  • My HomeLab – Setup Part 1

    It’s begun: over the weekend I started to put together my-homelab.

    In order to build up the environment detailed on my Home Labs page, I used three of my Lenovo TS200 servers, two of them with the X3440 Xeon processor (ESXi) and one with the X3460 Xeon processor (Hyper-V). These are all quad-core, eight-threaded, single-processor tower servers, in which I have installed 16GB of RAM for ESXi and 8GB of RAM for Hyper-V. Only the Hyper-V server has disks installed (4 x 750GB SATA drives in a hardware RAID 5 setup, courtesy of the M1015 and Advanced Feature Key, plus an additional 250GB OS disk); the remaining two TS200s are utilising the internal USB slot for ESXi.

    Building the Environment

    Over the weekend I finally had everything I needed to put my environment together. I wired up, plugged in and powered up a total of nine devices that will be used in my home lab:

    3 Lenovo TS200 Servers
    1 Iomega IX4-200d 2TB NAS
    1 HP 8 Port KVM
    1 Netgear GS724T Switch
    1 HP 19in Monitor
    1 Management PC (QX9650 based gaming rig that’s been retired for 6 months)
    1 HP MicroServer

    Using the instructions found in the article “Installing ESXi 4.1 to USB Flash Drive”, I pre-provisioned my 2GB Cruzer Blade USB keys with ESXi and installed them straight into the servers (you have to love VMware Player).

    An additional step in configuring the environment was to ensure that the IP addressing was logical. Because the entire server infrastructure will sit on my home network, I needed to make sure that I didn’t run out of network addresses (or, more importantly, use DHCP addresses in the server pool).

    I have configured the network as follows:

    192.168.x.2 – 192.168.x.99 – Server and Networking Infrastructure
    192.168.x.100 – 192.168.x.150 – Workstations (DHCP)
    172.16.x.10 – 172.16.x.20 – iSCSI Traffic (and management PC)
    10.x.x.10 – 10.x.x.20 – vMotion Traffic

    The 172.16.x.x and 10.x.x.x networks are going to be put on their own VLANs to isolate that traffic from the rest of the network.
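
    Just to make the plan concrete, here is a minimal Python sketch of those ranges. This is my own illustration rather than part of the original build notes; the third octet (1), the 10.0.0.x spelling of the vMotion range and the helper name classify are all assumptions.

        # Sketch of the lab addressing plan (illustrative only).
        # The concrete octets below are assumptions; substitute your own subnets.
        from ipaddress import ip_address

        RANGES = {
            "servers and networking": (ip_address("192.168.1.2"), ip_address("192.168.1.99")),
            "workstations (DHCP)": (ip_address("192.168.1.100"), ip_address("192.168.1.150")),
            "iSCSI and management PC": (ip_address("172.16.1.10"), ip_address("172.16.1.20")),
            "vMotion": (ip_address("10.0.0.10"), ip_address("10.0.0.20")),
        }

        def classify(addr):
            """Return which part of the lab an address belongs to."""
            a = ip_address(addr)
            for role, (lo, hi) in RANGES.items():
                if lo <= a <= hi:
                    return role
            return "unallocated"

        print(classify("192.168.1.10"))  # servers and networking
        print(classify("172.16.1.15"))   # iSCSI and management PC

    A quick check like this makes it obvious when a new device has been given an address outside its intended pool.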

    Building the Storage

    Due to my failure to increase the disk capacity of my Iomega IX4-200d unit, I have had to throw in an additional storage device, so I have changed the role that my MicroServer was going to play (it was going to be a backup server utilising Microsoft DPM). With that in mind I have installed Openfiler onto the MicroServer; a nice and easy installation compared to NexentaStor (which failed to install due to my lack of a CD drive, as I am using the 5th SATA port for another drive).

    Both NAS devices will be configured for iSCSI and will be presented to both ESXi servers.

    I’ll point you to the excellent post on TechHead’s site on configuring Openfiler for use with vSphere.

    The specifics for the lab are that both the Openfiler and IX4-200d devices will be connected to the storage LAN (172.16.x.x) and not the main VM LAN. The IX4 will be used for the VMs, whilst the Openfiler storage will be used for the VM backups that I’ll be setting up later.

    The Active Directory Domain Controller will also be installed directly onto the IX4, whilst the vCenter server will be installed onto the Hyper-V server (utilising DAS storage).

    Installing the Active Directory Domain Controller

    VMware’s vCenter Server requires Windows Active Directory as a means of authentication, which means we need a domain controller for the lab. Steering clear of best practice (which calls for at least two domain controllers for resilience), I am going to install just one for the moment. I sized the DC VM to be fairly small: 1 vCPU with 512MB RAM, a 40GB hard disk (thin provisioned) and 1 vNIC connecting to the lab LAN.

    The setup was a standard Windows Server 2008 R2 install, followed by Windows Updates before running dcpromo.

    Host “WIN-DC01”
    Domain “MY-HOME.LAB”.
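
    For anyone who wants to script that step, dcpromo on 2008 R2 can also be driven with an unattend file. The snippet below is only a rough sketch of that approach; apart from the host and domain names above, the values (file path, forest and domain levels and so on) are my assumptions rather than the exact answers I used.

        ; C:\dcinstall.txt (illustrative values; the path and levels are assumptions)
        [DCInstall]
        ReplicaOrNewDomain=Domain
        NewDomain=Forest
        NewDomainDNSName=MY-HOME.LAB
        DomainNetbiosName=MY-HOME
        ForestLevel=4
        DomainLevel=4
        InstallDNS=Yes
        SafeModeAdminPassword=<DSRM password>
        RebootOnCompletion=Yes

    Running dcpromo /unattend:C:\dcinstall.txt then performs the same promotion without walking through the wizard.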

    Next up is the vCenter server. We’ll continue with that journey soon.

  • Iomega… are you firmware locking your IX4s? – No, they aren’t

    Having decided that the 1.2TB of usable space (after setting it up for iSCSI) wasn’t going to be enough, and as I had four 1.5TB drives lying around, I decided to see about upgrading the IX4 from 2TB to 6TB (I already have an 8TB unit used for file storage).

    Now, I am not the type of person who generally does something on the fly (especially where £400 of new hardware is concerned), so I spent a lot of time searching the web for a definitive answer on whether it was possible to upgrade the drives in the IX4. Officially it isn’t supported, but some people have done it. The instructions I found (the same approach I used when upgrading my Buffalo TeraStation Pro II from 1TB to 4TB and then 6TB) said to convert the NAS to JBOD (so that I didn’t have to wait for the RAID set to get back in sync after installing each drive), then replace one drive at a time, power the unit on, update the device via the web front end and move on to the next disk.

    All was good with the first three disks, but when it came to the last disk the IX4 decided to give up the ghost. I now had two flashing lights on the front of the NAS, one of them red and the other white, and a nasty-looking graphic on the display. By this point most people would have just put the original drives back into the unit, but… having been a bit of an idiot I hadn’t marked which drives were which (later you will see that this doesn’t actually matter, but at the time I didn’t know that). Not only that, but I had also started formatting the original drives and putting them into my HP MicroServer, which left me up a certain creek without a paddle.

    I then spent a week with various pieces of software (Acronis Home 2010 and 2011 (a cheap upgrade), Parted and PMagic) trying to clone one of the first disks. Each drive has two partitions: the first is only 2GB (1.9GB actually) in size and holds the EMC Lifeline OS, whilst the second takes up the rest of the space to give you your data partition. Unfortunately for me I was having difficulties with the cloning: either I didn’t have enough space (so I used the 4th disk to store the images of the other three drives) or it simply took too long (the second partition wanted a byte-by-byte copy and took over 8 hours to complete). Finally I completed the copy, cloned it back onto the previous three drives, put them back into the unit… and got the same graphic. Obviously something was very wrong.

    My next step was quite drastic, but as I had by now been trying for four days to get my bricked unit up and running and wasn’t getting any closer, it was time for drastic measures. My 8TB unit has been in use for a couple of weeks and is the primary storage for all of my data (some 3.5TB worth); on my old network it had taken over a week to copy (no jumbo-frame-enabled switch or NIC on the machine), and as I like to keep two copies of said data (the backup for the 8TB unit is my 6TB TeraStation Pro II) I was a tad concerned that a failure on the Buffalo would leave me in even more trouble. Yes, I know, a decent backup solution is called for, but 3.5TB is a lot to back up, which is why I run two NAS devices: I only ever write to one, and that data is then replicated over to the second one every two hours.

    With a heavy heart I removed all my precious shares and deleted the RAID array, leaving me with 7.2TB on the NAS. After powering the unit down I took the drives out and, again using the previously mentioned tools, tried to copy each of the 2TB drives so that I could then shrink them down onto the 1.5TB drives. Eight hours later I had my cloned image, which I started to apply to the smaller 1.5TB drives. Once both partitions were on the smaller disks (and the data partitions had been resized to fit), I copied the 2GB OS partition from each of the four 2TB drives onto the corresponding 1.5TB drive (1 to 1, 2 to 2, etc.), and once that was done I installed the drives back into the IX4 chassis.

    This is where the fun really started 🙁

    Surprisingly enough the IX4 booted up, although it obviously now thought it was my 8TB unit (thankfully that was powered off at the time). Amazingly it looked like I had de-bricked my IX4: I had a device that would power up and would also restart when I changed the name and IP address of the unit (I wanted to get the 8TB unit back up with its RAID array so I could replicate my data onto it). During the course of rebooting the 6TB unit I had kicked off the creation of the RAID array on the 8TB unit; luckily for me the web front end was providing feedback that the array was in the “Data Protection is being reconstructed” state, and its blue drive lights were going like the clappers. I then kicked off the same thing on the 6TB unit, but for some reason I didn’t get the same behaviour: the blue drive lights were still flashing like mad, but instead of the percentage complete increasing it stayed at 0 (I was at my desk for a couple of hours, and by the time the 8TB unit had 7% done the 6TB was still at 0). I decided to leave it overnight just in case.

    Morning came and the 8TB unit was now at 52% complete whilst the 6TB unit was still showing 0%. Not good 🙁. Knowing that I would be out of the house for the next 12 hours, I hoped that in that time everything would sort itself out…

    Tonight I got home to find my 8TB unit sitting happily with a new RAID 5 array on it, whilst the 6TB unit was sitting at a loading screen showing 95% loaded. It was unresponsive and I was starting to get really peed off.

    Destroying the RAID 5 array (again), I inserted the 1.5TB disks back into the 8TB unit to get the OS back onto them and tried again. I went through the process of renaming and changing the IP address, and each time the unit powered on fine, but EVERY time I tried converting the JBOD disks to RAID the unit would brick itself.

    I was sure that something strange was going on. I know for a fact that those 1.5TB drives are good: they have been used successfully in another NAS unit in the past (I have owned two TeraStations) and SMART showed nothing wrong with any of them. Being desperate, I thought I would see what would happen if I put the original drives back into the IX4, so pulling the already blank 500GB drives from my HP MicroServer, I put three of them into the 6TB unit; the last one I placed back into the 8TB unit to get the OS installed onto it. I then took that disk, put it into slot 1 of the IX4 and powered it up. Again I changed the name and IP address, rebooting between each change, then deleted the shared storage folders and converted the disks to RAID.

    At the time of writing this post the now 2TB IX4 unit is sitting at 27% complete on the reconstruction of the RAID array. Why? Well, either Iomega are gimping the IX4 so that it can only use their drives (I have seen something similar with Lenovo controller cards), or the particular version of the Samsung drive I was using (the HD154UI) just isn’t compatible when converting to RAID.

    Either way, I can tell you that after trying for over a week to de-brick my 2TB IX4-200d (after upgrading the drives from 500GB to 1.5TB and then downgrading back to the 500GB Seagate drives), the unit is restored to its full (but size-limited) glory.

    My final word on this at the moment is: if you need a larger Iomega IX4, don’t skimp; pay for the size that you will really need rather than attempting to upgrade the unit yourself.

    My final word now is that it is possible to upgrade the disks in the IX4; however, be warned that whilst I have had some successes, I have obviously also had some failures. In this instance the 1.5TB drives didn’t work, but as you will find out later my WD20EARS (2TB) drives did.

  • Goals for 2011

    Well folks, it’s nearly the end of 2010 and it’s been an interesting year.

    I have decided on a number of goals that I want to attain next year. These are:

    * Sit and pass my VCP vSphere 4 Exam
    * Improve my VMware product knowledge (VMware View in particular)
    * Produce more technical content for this site (as well as my-homelab.com)
    * Move more into the VMware Virtualisation platform market and less of the Microsoft one.
    * Become a recognised VMware blogger (I want my vExpert).
    * Continue to improve my home lab environment.

    Obviously, achieving some of those is going to require a lot of hard work and dedication; hopefully I will be able to show everyone that I can do just that.

    Have a great New Year and see you on the flipside.

  • Netgear GS724Tv3

    I had been looking for a new switch to use in my home lab. I needed something that could handle jumbo frames as well as VLANs, and it also needed to have more than 8 ports as I have a large number of machines to connect to it.

    I had initially looked at the Linksys (Cisco) SLM2008 with the idea that I would link them together but decided against that when I saw that there were some deals to be had with the Netgear GS724Tv3.

    After some searching online I managed to find one for £150 inc shipping so I snapped it up.

    Within 30 hours I had it delivered to me 😀

    This will be set up over the next couple of days and will replace my existing switch infrastructure (a Netgear ProSafe 8-port unmanaged switch).

  • Mike Laverick’s Swagbag Xmas Raffle has been drawn

    http://www.rtfm-ed.co.uk/2010/12/23/congratulations-celebrations-swagbag-competition-winners/

    Congratulations to all the winners.

  • Various VMware Posters

    I have been downloading and printing a lot of documentation recently in preparation for my VCP. Three of the downloads were posters: the vReference card for vSphere 4.1 from Forbes Guthrie’s site, the PowerCLI reference from the VMware Communities PowerCLI site, and Dudley Smith’s Connections and Ports poster.

    Having tried printing these out on A3-sized paper and not liking the results, I decided to get them printed professionally. I am now the proud owner of three A1-sized glossy posters, which will be going up on the wall of my office this weekend 😀

  • Lenovo TS200 Server Memory

    I was reading Simon ‘TechHead’ Seagrave’s blog a few months back when he was reviewing the Lenovo TS200. Because of that review I decided to purchase four of them for my own virtualisation lab, and at the time I also purchased 24GB of Lenovo RAM for the servers, the idea being that I would put 24GB in each box. Not wanting to spend the huge sums of money that Lenovo memory commands, I thought it would be a good idea to buy similar RAM (after all, DDR3 ECC RAM should be the same… shouldn’t it?).

    Having scoured eBay for several months, I found and purchased some DDR3 ECC RAM (expensive, but a damn sight cheaper than the Lenovo option, which is approximately £200 per 4GB stick 🙁). Procrastinating a bit, I didn’t build up any of my servers for months, and it was only after talking with Mike Laverick that I discovered there was potentially an issue with the TS200s not accepting all kinds of 4GB DIMMs. During the last week I started unboxing my servers and putting my new RAM into them; unfortunately I discovered that what Mike said was true, and I now have to sell my useless (to me) Micron 4GB DDR3 DIMMs and try to save up some money for the very expensive Lenovo-branded DIMMs (16GB per machine now instead of 24, due to the huge expense 🙁).

    That will teach me not to research my servers properly 🙁