Author: Simon

  • Iomega IX4 v Openfiler Performance Testing

    Running my own home-based lab, I had struggled to work out which storage solution would be best for me. I had several choices for the type of storage I could use (I own the following storage-capable hardware: a Buffalo TeraStation Pro 2, an Iomega IX4-200d 2TB and an HP MicroServer running Openfiler 2.3).

    Over the last couple of weeks I have been carrying out various tests to see which device I would be using as my NAS\SAN solution and which device would end up being the destination for my Veeam backups.

    All three devices run software RAID, although I am about to try to fit an IBM M1015 SAS\SATA controller into my HP MicroServer (with the Advanced Feature Key to allow RAID 5 and 50), so the Iomega and the HP were similar where RAID types were concerned. The TeraStation is an already operational device with existing data on it and could only be tested using NFS; it was never really in contention as a SAN\NAS device for ESXi.

    What I wasn’t sure about was whether I would be better off using RAID 0, 5 or 10 (obviously I am aware of the resilience issues with RAID 0, but I do have to consider its performance and capacity as I want to run a small VMware View lab here as well). On top of the RAID type, there was also the decision of whether to go down the iSCSI or the NFS route.
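    To put rough numbers on the capacity side of that trade-off, here is a minimal sketch (a simplified model that ignores filesystem and metadata overhead) of the usable space each RAID level gives for the disk sets used in this lab:

    def usable_capacity(disk_count, disk_size_gb, raid_level):
        """Rough usable capacity per RAID level (ignores FS/metadata overhead)."""
        if raid_level == 0:          # striping, no redundancy
            return disk_count * disk_size_gb
        if raid_level == 5:          # one disk's worth of capacity lost to parity
            return (disk_count - 1) * disk_size_gb
        if raid_level == 10:         # mirrored pairs, half the raw space
            return disk_count // 2 * disk_size_gb
        raise ValueError("unsupported RAID level")

    # Iomega IX4: 4x 500GB; MicroServer: 4x 1500GB
    for level in (0, 5, 10):
        print(f"4x500GB  RAID {level}: {usable_capacity(4, 500, level)} GB usable")
        print(f"4x1500GB RAID {level}: {usable_capacity(4, 1500, level)} GB usable")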

    Having read a number of informative blog and forum posts I knew that to satisfy my own thirst for knowledge I was going to have to perform my own lab testing.

    Lab Setup

    OS TYPE: Windows XP SP3 VM on ESXi 4.1 using a 40GB thick-provisioned disk
    CPU Count \ RAM: 1 vCPU, 512MB RAM
    ESXi HOST: Lenovo TS200, 16GB RAM, 1x X3440 @ 2.5GHz (a single ESXi 4.1 host with a single running Iometer VM was used during testing).

    STORAGE TYPE

    Iomega IX4-200d 2TB NAS, 4x 500GB: JBOD – iSCSI, JBOD – NFS, RAID 10 – iSCSI, RAID 10 – NFS, RAID 5 – iSCSI and finally RAID 5 – NFS ** Software RAID only **

    Buffalo TeraStation Pro 2, 4x 1500GB: RAID 5 – NFS (this is an existing storage device with existing data on it, so I could only test with NFS and the existing RAID set; the device isn’t iSCSI enabled).

    HP MicroServer, 2GB RAM, 4x 1500GB plus the original server’s 1.6TB disk for the Openfiler OS install: RAID 5 – iSCSI, RAID 5 – NFS, RAID 10 – iSCSI, RAID 10 – NFS, RAID 0 – iSCSI and finally RAID 0 – NFS.

    Storage Hardware: software-based iSCSI and NFS.

    Networking: Netgear GS724T 24-port Gigabit Ethernet switch

    Iometer Test Script

    To allow for consistent results throughout the testing, the following test criteria were followed:

    1. A single Windows XP SP3 VM running Iometer was used to measure performance across all three platforms.

    2. I utilised the Iometer script that can be found via the VMTN Storage Performance thread here; the test script was downloaded from here.

    The Iometer script tests the following access specifications:

    TEST NAME,size,% of size,% reads,% random,delay,burst,align,reply
    Max Throughput-100%Read,32768,100,100,0,0,1,0,0
    RealLife-60%Rand-65%Read,8192,100,65,60,0,1,0,0
    Max Throughput-50%Read,32768,100,50,0,0,1,0,0
    Random-8k-70%Read,8192,100,70,100,0,1,0,0
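    To make those raw parameters easier to relate to the results, here is a minimal Python sketch (the spec values are copied from the list above; the conversion is simply IOPS multiplied by transfer size) showing how an IOPS figure for each test translates into throughput:

    # Iometer access specs from above: (transfer size in bytes, % read, % random)
    ACCESS_SPECS = {
        "Max Throughput-100%Read":  (32768, 100, 0),
        "RealLife-60%Rand-65%Read": (8192, 65, 60),
        "Max Throughput-50%Read":   (32768, 50, 0),
        "Random-8k-70%Read":        (8192, 70, 100),
    }

    def iops_to_mbps(iops, transfer_size_bytes):
        """Throughput in MB/s = IOPS x transfer size (decimal megabytes)."""
        return iops * transfer_size_bytes / 1_000_000

    # Example: 1,500 IOPS on the 32KB sequential read test works out to ~49 MB/s
    size, _, _ = ACCESS_SPECS["Max Throughput-100%Read"]
    print(f"{iops_to_mbps(1500, size):.1f} MB/s")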

    Two runs for each configuration were performed to consolidate results.
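    For consolidating those two runs, a rough sketch of the averaging step is below. It assumes the headline numbers for each run have been exported to a simple CSV with hypothetical columns test, iops, mbps and latency_ms (Iometer’s own results.csv is much wider, so the file names and column names here are placeholders):

    import csv
    from collections import defaultdict

    def average_runs(*csv_paths):
        """Average iops/mbps/latency_ms per test name across several run files."""
        totals = defaultdict(lambda: defaultdict(float))
        counts = defaultdict(int)
        for path in csv_paths:
            with open(path, newline="") as fh:
                for row in csv.DictReader(fh):
                    counts[row["test"]] += 1
                    for field in ("iops", "mbps", "latency_ms"):
                        totals[row["test"]][field] += float(row[field])
        return {
            test: {field: value / counts[test] for field, value in fields.items()}
            for test, fields in totals.items()
        }

    # e.g. average_runs("ix4_raid5_iscsi_run1.csv", "ix4_raid5_iscsi_run2.csv")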

    Lab Results

    After a long week or so (not only did I have to test each device twice, I also had to move the VM between devices, which took up time) I came up with the following results.

    Iomega IX4-200D Results

    Openfiler 2.3 Results

    TeraStation Pro II Results

    Conclusions

    Having looked at the results, the overall position is clear: the Iomega IX4-200D is going to be my Veeam backup destination, whilst my HP MicroServer is going to be my centralised storage host for ESXi. I now have to decide whether to go for the RAID 0 or RAID 10 iSCSI approach, as they offer the best performance; at this stage I am tempted to go for RAID 10, because the disks in the server aren’t new. Over the next few months I will see how reliable the solution is and take it from there.

    One thing I can add, however, is that over the next couple of days I will be attempting to fit my M1015 RAID controller in there and see how that performs; once fitted I will redo the Openfiler tests and post an update.

  • Iomega IX4 Remote Access – Update

    An update to my previous post.

    Having purchased the TZO Remote Access subscription for the NAS device, I eventually had to open a support call with TZO because the SSL certificate still hadn’t downloaded to the device.

    The only way I actually managed to get the device to re-enable Remote Access was to completely wipe the unit, returning it to factory default. That’s fine if it’s a new device that doesn’t contain data, but when you have 4TB of data on the unit it starts making life a little difficult (luckily for me the IX4 is my backup unit; it stores replicated data from my old TeraStation). With a little bit of worry I wiped the unit and started from scratch.

    Having had the all-clear from TZO that the certificate had now been issued by the supplier, I was advised to reboot the NAS unit. I did that and all of a sudden my URL was coming back as secured (yay). I now just needed to import the data from my TeraStation again… 🙁

    Jump forward three weeks. In those three weeks I have had countless emails from my NAS telling me that the TZO client was unable to confirm the external IP address for the unit, which was concerning as I had already port forwarded 443 via my router to the NAS box.

    A little further investigation and some minor changes appear to have resolved my issues (I used to get the emails 5 or 6 times a day; so far in the last 14 hours I haven’t received any, so fingers crossed). The cause? To try to speed up data transfer across my switched network I had enabled jumbo frames as well as bonding the two interfaces together (with jumbo frames enabled on both the switch and the NAS). Disabling both appears to have fixed the issue. For the moment I want to leave it for a couple of days before troubleshooting further to see whether it was the bonding or the jumbo frames that did it, but my gut says jumbo frames.
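    If I do re-enable jumbo frames later, a quick end-to-end sanity check is a large ping with the don’t-fragment bit set; here’s a rough sketch (it assumes a Linux host and Linux ping options, and 192.168.1.50 is just a made-up address standing in for the NAS):

    import subprocess

    def jumbo_frames_ok(target_ip, mtu=9000):
        """Ping with Don't Fragment set; payload = MTU minus 28 bytes of IP + ICMP headers."""
        payload = mtu - 28
        result = subprocess.run(
            ["ping", "-c", "3", "-M", "do", "-s", str(payload), target_ip],
            capture_output=True,
        )
        return result.returncode == 0

    # False if any device in the path drops 9000-byte frames
    print(jumbo_frames_ok("192.168.1.50"))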

    So far all is well and good (I can tell you that jumbo frames improved the data sync by about 2 days).

  • Iomega IX4 Remote Access…. or not!!

    The Iomega IX4 is a handy little unit that comes bundled with a whole load of loveliness – well, it would be if the loveliness actually worked correctly.

    Over the weekend I purchased a subscription from TZO to allow for remote access to my IX4 8TB unit. Included in that purchase was a one-year SSL subscription for the domain name My-HomeNas.com; unfortunately said SSL certificate has never downloaded to the NAS. I am now in the position where not only does the NAS throw up errors all the time, but I also can’t actually re-enable Remote Access on it.

    Every time I try to enable Remote Access using the Paid Subscription (Already Own) settings, the NAS just sits there with a processing screen. A support desk entry over at TZO resulted in a reply and the ticket being closed (even though the email asked me to reply to them once the NAS had been rebooted), and it left me with the task of wiping the NAS again to see if that allowed me to configure Remote Access again.

    I am currently left with a NAS device that’s trying to rebuild the disk structure (clearing the config on the unit), but unfortunately it appears that this is going to take days to complete.

    Overall I am not that impressed with the Iomega NAS line. So far I have had my first unit turn up DOA, the second one wouldn’t let me upgrade the disks (I know, it’s not factory supported anyway) and now the 8TB unit won’t actually let me enable Remote Access successfully. The only time it did work over the weekend (with the certificate error), the device kept stalling when trying to remotely access a 50MB file, whereas using the same net connection on a WHS server allowed the file to be retrieved very quickly (I love my 50\5mb Up\Down speeds :D).

  • TS200 Server – Kernel Debug errors 🙁

    Well having a single Win 2008 R2 server running Hyper-V is proving to be a bit of a problem for me.

    Over the last couple of weeks I have installed my virtualised environment onto three of my four TS200 servers. I am running 2 ESXi boxes with the third machine running Win 2008 R2 with my DC running on the ESXi cluster and my vCenter server running on my Hyper-V machine.

    All was good for the first couple of days; however, over the weekend I started experiencing issues with my vSphere client dropping its connection to the VC. Upon closer examination I discovered that my Hyper-V box is experiencing kernel debug errors and power cycling. It’s doing this every 24–36 hours and I haven’t managed to figure out why (that may be a lie, as I have seen some disk errors in the event logs that may be the cause, but a little investigation so far shows no problems with the disks).

    I may well have to trash the Hyper-V side of the server and move everything over to vSphere (which I know I should have done in the first place).

    First things first, however: I will be moving the vCenter server off my RAID 5 array (hardware RAID using the Lenovo\LSI RAID controller with the Hardware Feature Key) to see if it’s a problem with the array (I will remove the controller from the server in its entirety) and power up the server for a couple of days.

  • Openfiler (running on HP Microserver) or IX4 as my iSCSI Share?

    So I have to make a decision. I am unsure about the speed of the IX4; add to that, I am only using the 2TB version for iSCSI.

    My MicroServer has been running Openfiler for the last couple of weeks (nothing configured, just installed). Like the IX4 it’s running software RAID (I have 5 SATA drives in the MicroServer, 4 of them in a software RAID 5 set (4x 1500GB)).

    I am considering using the IX4 as my backup target for Veeam Backup instead.

    People’s thoughts?

  • My HomeLab – Setup Part 1

    It’s begun: over the weekend I started to put together my-homelab.

    In order to build up the environment detailed on my Home Labs page, I used 3 of my Lenovo TS200 servers, 2 of them with X3440 Xeon processors (ESXi) and 1 with the X3460 Xeon processor (Hyper-V). These are all quad-core, eight-thread, single-processor tower servers, in which I’ve installed 16GB of RAM for the ESXi hosts and 8GB for the Hyper-V host. Only the Hyper-V server has disks installed (4x 750GB SATA drives in a hardware RAID 5 setup, courtesy of the M1015 and Advanced Feature Key, plus an additional 250GB OS disk). The remaining 2 TS200s are utilising the internal USB slot for ESXi.

    Building the Environment

    Over the weekend I finally had everything I needed to put my environment together. I wired up, plugged in and powered up a total of 9 devices that will be used in my home lab:

    3 Lenovo TS200 Servers
    1 Iomega IX4-200d 2TB NAS
    1 HP 8 Port KVM
    1 Netgear GS724T Switch
    1 HP 19in Monitor
    1 Management PC (QX9650 based gaming rig that’s been retired for 6 months)
    1 HP MicroServer

    Using instructions found in the following article, “Installing ESXi 4.1 to USB Flash Drive”, I pre-provisioned my 2GB Cruzer Blade USB keys with ESXi and installed them straight into the servers (you have to love VMware Player).

    An additional step in configuring the environment was to ensure that the IP addressing was logical. Because I will be using the entire server infrastructure on my home network, I needed to make sure that I didn’t run out of network addresses (or, more importantly, use DHCP addresses in the server pool).

    I have configured the network as follows:

    192.168.x.2 – 192.168.x.99 – Server and Networking Infrastructure
    192.168.x.100 – 192.168.x.150 – Workstations (DHCP)
    172.16.x.10 – 172.16.x.20 – iSCSI Traffic (and management PC)
    10.x.x.10 – 10.x.x.20 – vMotion Traffic

    The 172.16.x.x and 10.x.x.x networks are going to be VLAN’d to isolate that traffic from the rest of the network.
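    As a quick sanity check on that plan, here’s a minimal sketch using Python’s ipaddress module (the concrete 192.168.1.0/24, 172.16.0.0/24 and 10.0.0.0/24 subnets are just placeholders for the x values above):

    import ipaddress

    # Placeholder subnets standing in for the x values in the plan above
    SEGMENTS = {
        "lab LAN (servers + DHCP workstations)": ipaddress.ip_network("192.168.1.0/24"),
        "iSCSI / storage":                       ipaddress.ip_network("172.16.0.0/24"),
        "vMotion":                               ipaddress.ip_network("10.0.0.0/24"),
    }

    def segment_for(ip):
        """Return which lab segment an address belongs to, if any."""
        addr = ipaddress.ip_address(ip)
        for name, network in SEGMENTS.items():
            if addr in network:
                return name
        return "not in the lab plan"

    print(segment_for("172.16.0.15"))    # iSCSI / storage
    print(segment_for("192.168.1.120"))  # lab LAN (servers + DHCP workstations)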

    Building the Storage

    Due to my failure to increase the disk capacity of my Iomega IX4-200d unit, I have had to throw in an additional storage device, so I have changed the role that my MicroServer was going to play (it was going to be a backup server utilising Microsoft DPM). With that in mind I have installed OpenFiler on the MicroServer, a nice and easy installation compared to NexentaStor (which failed to install due to my lack of a CD drive, as I am using the 5th SATA port for another drive).

    Both NAS devices will be configured for iSCSI services, and both ESXi servers will connect to them.

    I’ll point to the excellent post on the TechHead site on configuring OpenFiler for use with vSphere.

    The specifics for the lab are that both the OpenFiler and IX4-200d devices will be connected to the storage LAN (172.16.x.x) and not the main VM LAN. The IX4 will be used for the VMs, whilst the OpenFiler storage will be used for the VM backups that I’ll be doing later.

    The Active Directory Domain Controller will also be installed directly onto the IX4, whilst the vCenter server will be installed onto the Hyper-V server (utilising DAS storage).

    Installing the Active Directory Domain Controller

    VMware’s vCenter Server requires Windows Active Directory as a means of authentication, which means we need a domain controller for the lab. Steering clear of best practice (which calls for at least two domain controllers for resilience), I am going to install just one for the moment. I sized the DC VM to be fairly small: 1 vCPU with 512MB RAM, a 40GB hard disk (thin provisioned) and 1 vNIC connecting to the lab LAN.

    The setup was a standard Windows Server 2008 R2 install, followed by Windows Updates before running dcpromo.

    Host “WIN-DC01”
    Domain “MY-HOME.LAB”.
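    For anyone who wants to script that promotion rather than click through the wizard, below is a rough sketch of an unattended dcpromo run using the standard Server 2008 R2 [DCInstall] answer-file keys (the answer-file path and the safe-mode password are placeholders):

    # Sketch: write a dcpromo answer file and run the promotion unattended (Server 2008 R2).
    import subprocess

    ANSWER_FILE = r"C:\dcinstall.txt"   # placeholder path

    # Standard [DCInstall] keys for a new forest root domain
    answer_lines = [
        "[DCInstall]",
        "ReplicaOrNewDomain=Domain",
        "NewDomain=Forest",
        "NewDomainDNSName=MY-HOME.LAB",
        "DomainNetbiosName=MY-HOME",
        "ForestLevel=4",                        # 4 = Windows Server 2008 R2 functional level
        "DomainLevel=4",
        "InstallDNS=Yes",
        "SafeModeAdminPassword=ChangeMe123!",   # placeholder password
        "RebootOnCompletion=Yes",
    ]

    with open(ANSWER_FILE, "w") as fh:
        fh.write("\n".join(answer_lines) + "\n")

    # dcpromo reads the answer file and promotes WIN-DC01 without prompting
    subprocess.run(["dcpromo", f"/unattend:{ANSWER_FILE}"], check=True)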

    Next up is the vCenter server. We’ll continue with that journey soon.

  • Iomega… are you firmware locking your IX4’s? – No, they aren’t

    Having decided that the 1.2TB of usable space (after setting up iSCSI on it) wasn’t going to be enough, I decided that as I had four 1.5TB drives lying around I would see about upgrading the IX4 from 2TB to 6TB (I already have an 8TB unit used for file storage).

    Now I am not the type of person who generally does something on the fly (especially where £400 of new hardware is concerned), so I spent a lot of time searching the web for definitive answers on whether it was possible to upgrade the drives in the IX4 – officially it isn’t, but some have done so. I followed the instructions I found (the same approach I had actually used when upgrading my Buffalo TeraStation Pro II from 1TB to 4TB and then 6TB), which told me to convert the NAS to JBOD (so that I didn’t have to wait after installing each drive for the RAID set to get back in sync); once converted, I was to replace one drive at a time, power the unit on, update the device via the web front end and then move on to the next disk.

    All was good with the first three disks, but when it came to the last disk the IX4 decided to give up the ghost: I now had two flashing lights on the front of the NAS, one red and one white, and a nasty-looking graphic on the display. Now, most people would simply have put the original drives back into the unit but… having been a bit of an idiot, I hadn’t marked out which drives were which (later you will see that this doesn’t actually matter, but at the time I didn’t know that). Not only that, but I had also started formatting the drives and putting them into my HP MicroServer, and that left me up a certain creek without a paddle.

    I then spent a week with various pieces of software (Acronis Home 2010 and 2011 (a cheap upgrade), Parted and PMagic) trying to see if I could clone one of the first disks from the unit. The composition of each drive is two partitions: the first is only 2GB (1.9 actually) in size and holds the EMC Lifeline OS, whilst the second takes up the rest of the space to give you your data partition. Unfortunately I was having difficulties cloning the drive – either I didn’t have enough space (so I used the 4th disk to store the image for the other three drives) or it simply took too long (the second partition wanted a byte-by-byte copy and took over 8 hours to complete). Finally I completed the copy, cloned it back onto the previous three drives, put them back into the unit… and got the same graphic. Obviously something was very wrong.

    My next step was quite drastic, but as I had been trying for 4 days to get my bricked unit up and running and wasn’t getting any closer, it was time for drastic measures. My 8TB unit has been in use for a couple of weeks now and is the primary storage for all of my data (some 3.5TB worth); on my old network it had taken over a week to copy (no jumbo-frame-enabled switch or NIC on the machine), and as I like to keep two copies of said data (the backup for the 8TB unit is my 6TB TeraStation Pro II) I was a tad concerned that if I encountered a failure on the Buffalo I would be in even more trouble (yes I know, a decent backup solution is called for, but 3.5TB is a lot to back up, and that’s why I run two NAS devices – I only ever write to one, and that data is then replicated over to the second one every 2 hours). With a heavy heart I removed all my precious shares and deleted the RAID array, leaving me with 7.2TB on the NAS.

    On powering the unit down I took the drives out and, again using the previously mentioned tools, tried to copy each of the 2TB drives so that I could then shrink them down onto the 1.5TB drives. 8 hours later I had my cloned image, which I started to apply to the smaller 1.5TB drives. Once both partitions were on the smaller disks (and the data partitions had been resized to fit), I copied the 2GB partition from each of the four 2TB drives onto the corresponding 1.5TB drive (1-1, 2-2 etc) and installed the drives back into the IX4 chassis.

    This is where the fun really started 🙁

    Surprisingly enough the IX4 booted up – obviously it now thought it was my 8TB unit (thankfully that was currently powered off) – and amazingly it looked like I had de-bricked my IX4: I had a device that would power up and would also restart when I changed the name and IP address of the unit (I wanted to get the 8TB unit up and the RAID array rebuilt so I could replicate my data onto it). During the course of rebooting the 6TB unit I had kicked off the creation of the RAID array on the 8TB unit; luckily for me the web front end was providing feedback that “Data Protection is being reconstructed” and my blue drive lights were going like the clappers. I then kicked off the same thing for the 6TB unit, but for some reason I didn’t get the same behaviour – I was still getting the blue drive lights flashing like mad, but instead of the unit giving me an increase in % complete it was staying at 0 (I was at my desk for a couple of hours, and by the time the 8TB unit had 7% done the 6TB was still at 0). However, I decided to leave it overnight just in case.

    Morning came and the 8TB unit was now at 52% complete whilst the 6TB unit was still showing 0% – not good :(. Knowing that I would be out of the house for the next 12 hours, I hoped that in that time everything would sort itself out…

    Tonight I got home to find my 8TB unit sitting happily with a new RAID 5 array on it, whilst the 6TB unit was sitting at a loading screen showing 95% loaded; it was unresponsive and I was now starting to get really peed off.

    Destroying the RAID 5 array (again), I inserted the 1.5TB disks back into the 8TB unit to get the OS back on them and tried again. I went through the process of renaming and changing the IP address, and each time the unit powered on fine, but EVERY time I tried converting the JBOD disks to RAID the unit would brick itself.

    I was sure that something strange was going on. I know for a fact that those 1.5TB drives are good – they have been used successfully in the past in another NAS unit (I have owned 2 TeraStations) and I knew from SMART that there was nothing wrong with any of them. Being desperate, I thought I would see what would happen if I put the original drives back into the IX4, so, pulling the already blank 500GB drives from my HP MicroServer, I put three of them into the 6TB unit; the last one I placed back into the 8TB unit to get the OS installed onto it, then took that disk, put it into slot 1 of the IX4 and powered it up. Again I changed the name and IP address and rebooted between each change, then deleted the shared storage folders and converted the disks to RAID.

    At the time of writing this post I can tell you that the now 2TB IX4 unit is currently sitting at 27% complete on the reconstruction of the RAID array. Why? Well, either Iomega are gimping the IX4 so that it can only use their drives (I have seen something similar with Lenovo controller cards) or the particular version of the Samsung drive I was using (the HD154UI) just isn’t compatible when converting it to RAID.

    Either way, I can tell you that after trying for over a week to de-brick my 2TB IX4-200d (after upgrading the drives from 500GB to 1.5TB and then downgrading them back to the 500GB Seagate drives) the unit is restored to its full (but size-limited) glory.

    My final word on this at the moment is: if you need a larger Iomega IX4, don’t skimp – pay for the size that you will really need rather than attempting to upgrade the unit yourself.

    My final word now is that it is possible to upgrade the disks in the IX4; however, be warned that whilst I have had some successes I have also obviously had some failures. In this instance the 1.5TB drives didn’t work, but as you will find out later my WD20EARS (2TB) drives did work.

  • Goals for 2011

    Well folks, it’s nearly the end of 2010 and it’s been an interesting year.

    I have decided that next year I have a number of goals that I want to attain; these are:

    * Sit and pass my VCP vSphere 4 Exam
    * Improve my VMware product knowledge (VMware View in particular)
    * Produce more technical content for this site (as well as my-homelab.com)
    * Move more into the VMware Virtualisation platform market and less of the Microsoft one.
    * Become a recognised VMware blogger (I want my vExpert).
    * Continue to improve my home lab environment.

    Obviously, getting some of those out of the way is going to require a lot of hard work and dedication; hopefully I will be able to show everyone that I can do just that.

    Have a great New Year and see you on the flipside.

  • Netgear GS724Tv3

    I had been looking for a new switch to use in my home lab. I needed something that could handle jumbo frames as well as VLANs, and it also needed to be larger than 8 ports as I have a large number of machines that will be connected to it.

    I had initially looked at the Linksys (Cisco) SLM2008 with the idea that I would link them together but decided against that when I saw that there were some deals to be had with the Netgear GS724Tv3.

    After some searching online I managed to find one for £150 inc shipping so I snapped it up.

    Within 30 hours I had it delivered to me 😀

    This will be set up over the next couple of days and will replace my existing switch infrastructure (a Netgear 8 port Prosafe Un-Managed switch).

  • Mike Laverick’s Swagbag Xmas Raffle has been drawn

    http://www.rtfm-ed.co.uk/2010/12/23/congratulations-celebrations-swagbag-competition-winners/

    Congratulations to all the winners.