For those of you interested in FreeNAS 8, it has now been officially released and can be downloaded from here. Expect testing results shortly.
Category: Storage
-
Home Lab NAS/SAN Shoot-Out Part 2
In the second part of my series testing home lab NAS/SAN solutions, I tested Open-E's DSS v6.
I had been planning to test the current release candidate of FreeNAS, RC5. Unfortunately, during initial testing it proved to be a little unreliable in its current state: you can't mount iSCSI targets, even from the command line, although NFS shares do work OK. Until FreeNAS releases a version that I am comfortable working with, I am pausing my FreeNAS testing.
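For anyone wanting to try the same check from a Linux client, here is a rough sketch of the iSCSI discovery and login steps using open-iscsi's iscsiadm; the portal address and IQN below are just placeholders, not actual lab values.

```python
# Sketch: discover and log in to an iSCSI target with open-iscsi's iscsiadm.
# The portal IP and target IQN are placeholders, not real lab values.
import subprocess

PORTAL = "192.168.1.50"                       # hypothetical FreeNAS box
TARGET = "iqn.2011-03.example.lab:target0"    # hypothetical target IQN

def run(cmd):
    """Run a command, echo it, and return its output (raises if it fails)."""
    print("+", " ".join(cmd))
    out = subprocess.run(cmd, capture_output=True, text=True, check=True).stdout
    print(out)
    return out

# 1. Ask the portal which targets it exposes.
run(["iscsiadm", "-m", "discovery", "-t", "sendtargets", "-p", PORTAL])

# 2. Log in to the target; if this succeeds, the LUN appears as a local block device.
run(["iscsiadm", "-m", "node", "-T", TARGET, "-p", PORTAL, "--login"])
```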
-
Openfiler 2.99 RAID 1 Creation Fix
With the recent release of Openfiler 2.99 there has been an ongoing issue with being unable to create certain software RAID arrays. This is present in both the beta and the final release of the software and was getting frustrating; however, there is a fix.
If you are going to be installing Openfiler 2.99 and discover that you're not able to create software RAID 1 arrays, then the following will resolve that for you.
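As a more general fallback (and not necessarily the fix I'm describing), a RAID 1 array can also be created by hand with mdadm from the Openfiler console and then managed from there. A minimal sketch, assuming the member partitions are /dev/sda1 and /dev/sdb1 and the target md device is /dev/md0:

```python
# Sketch: create a software RAID 1 array by hand with mdadm.
# /dev/md0, /dev/sda1 and /dev/sdb1 are assumptions - substitute your own devices.
import subprocess

MEMBERS = ["/dev/sda1", "/dev/sdb1"]   # the two partitions to mirror (assumed)
ARRAY = "/dev/md0"                     # the md device to create (assumed)

# Build the mirror: one md device, RAID level 1, two members.
subprocess.run(
    ["mdadm", "--create", ARRAY, "--level=1",
     f"--raid-devices={len(MEMBERS)}", *MEMBERS],
    check=True,
)

# Watch the initial sync progress.
print(open("/proc/mdstat").read())
```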
-
Home Lab NAS/SAN Shoot-Out Part 1
In this new NAS/SAN Shoot-Out series I will be testing the NAS/SAN solutions available for those of you wanting a free or cheap home-based NAS/SAN for your vSphere labs.
I will be providing test results for the following solutions:
Iomega IX4-200D
Openfiler (2.99)
FreeNAS (7.2 and 8.0RC5)
NexentaStor (3.0.4)
OpenIndiana (Build 148 – using napp-it)
Oracle Solaris 11 Express (using napp-it)
Open-E DSS (6)
-
Sorry for the lack of recent updates
My hardware RAID controller (a Lenovo-branded M1015) simply isn't recognised by Openfiler or FreeNAS 7.2, and FreeNAS 8 (RC2 and RC3) has a penchant for crashing during installation.
I have been toying with NexentaStor but haven't been overly impressed, with my SSD failing on it (the SSD works fine in Windows but fails under Nexenta). I noticed that others were experiencing the same issue with the OCZ Onyx 32GB SSD, so I tried using an external SATA card… which of course wasn't recognised by NexentaStor, Solaris or OpenIndiana, and that blew my chances of getting the L2ARC working on any of them.
-
IX4 Remote Access – Don’t use Jumbo Frames
Hi all, as some of you may know, I recently experienced issues with my IX4 whilst using the paid-for TZO subscription service for remote access. I diagnosed the fault down to one of two possible causes:
1, Jumbo Frames
2, Bonded NICs
I disabled both, and over the last two weeks have re-enabled NIC teaming. I can now report that during that period I had one issue with the device losing connectivity (not the old message; something new this time that I am putting down to my changing things on my Netgear switch rather than the IX4). Since the reboot (over a week ago now) I haven't experienced any further issues with the performance or availability of the device.
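If you want to confirm whether jumbo frames are actually passing end to end before pointing the finger at them, a simple test is a non-fragmenting ping with a near-9000-byte payload; if the switch, NICs and NAS aren't all configured for jumbo frames, the ping fails. A quick sketch (the NAS address is a placeholder, and this assumes a Linux client whose ping supports -M do):

```python
# Sketch: check whether ~9000-byte jumbo frames pass end to end to the NAS.
# 8972 bytes of payload + 28 bytes of IP/ICMP headers = a 9000-byte packet.
# The NAS address below is a placeholder for your own device.
import subprocess

NAS_IP = "192.168.1.20"   # hypothetical IX4 address

result = subprocess.run(
    ["ping", "-M", "do",   # do not fragment
     "-s", "8972",         # payload size for a 9000-byte MTU
     "-c", "3", NAS_IP],
    capture_output=True, text=True,
)

if result.returncode == 0:
    print("Jumbo frames are passing end to end.")
else:
    print("Jumbo frames are NOT getting through:")
    print(result.stdout or result.stderr)
```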
-
Iomega IX4 v Openfiler Performance Testing
Running my own home-based lab, I had struggled to work out which storage solution was going to be best for me. I had multiple choices for the type of storage I could use (I own the following storage-capable hardware: a Buffalo TeraStation Pro 2, an Iomega IX4-200d 2TB and an HP MicroServer running Openfiler 2.3).
Over the last couple of weeks I have been carrying out various tests to see which device I would use as my NAS/SAN solution and which would end up being the location for my Veeam backups.
All three devices run software RAID (although I am about to try to fit an IBM M1015 SAS/SATA controller into my HP MicroServer, with the Advanced Key to allow RAID 5 and 50), so the Iomega and HP were similar where RAID types were concerned. The TeraStation is an already operational device with existing data on it and could only be tested using NFS; it was never really in contention as a SAN/NAS device for ESXi.
Where I wasn't sure was whether I would be better off using RAID 0, 5 or 10 (obviously I am aware of the resilience issues with RAID 0, but I do have to consider its performance and capacity, as I want to run a small VMware View lab here as well). On top of the RAID decision there was also the question of whether to go down the iSCSI or NFS route.
Having read a number of informative blog and forum posts I knew that to satisfy my own thirst for knowledge I was going to have to perform my own lab testing.
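To put some rough numbers on the capacity side of that decision, here's a quick sketch of the usable space each RAID level leaves on a four-disk set (raw figures only, ignoring filesystem and metadata overhead):

```python
# Rough usable-capacity comparison for a 4-disk array (e.g. 4x 500GB in the IX4
# or 4x 1.5TB in the MicroServer). Raw figures only; filesystem overhead ignored.
def usable_capacity(disk_gb, disks, level):
    if level == 0:        # striping, no redundancy
        return disk_gb * disks
    if level == 5:        # one disk's worth of capacity lost to parity
        return disk_gb * (disks - 1)
    if level == 10:       # mirrored pairs, half the raw space
        return disk_gb * disks // 2
    raise ValueError(f"unhandled RAID level: {level}")

for level in (0, 5, 10):
    print(f"RAID {level:>2}: 4x 500GB -> {usable_capacity(500, 4, level)}GB usable")
```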
Lab Setup
OS TYPE: Windows XP SP3 VM on ESXi 4.1 using a 40GB thick-provisioned disk
CPU Count / RAM: 1 vCPU, 512MB RAM
ESXi HOST: Lenovo TS200, 16GB RAM, 1x X3440 @ 2.5GHz (a single ESXi 4.1 host with a single running Iometer VM was used during testing)
STORAGE TYPE:
Iomega IX4-200d 2TB NAS, 4x 500GB: JBOD – iSCSI, JBOD – NFS, RAID 10 – iSCSI, RAID 10 – NFS, RAID 5 – iSCSI and finally RAID 5 – NFS ** software RAID only **
Buffalo TeraStation Pro 2, 4x 1.5TB, RAID 5 – NFS (this is an existing storage device with existing data on it, so I could only test with NFS and the existing RAID set; the device isn't iSCSI enabled).
HP MicroServer, 2GB RAM, 4x 1.5TB plus the original server's 1.6TB disk for the Openfiler OS install: RAID 5 – iSCSI, RAID 5 – NFS, RAID 10 – iSCSI, RAID 10 – NFS, RAID 0 – iSCSI and finally RAID 0 – NFS.
Storage Hardware: software-based iSCSI and NFS.
Networking: NetGear TS724T 24-port 1Gb Ethernet switch
Iometer Test Script
To allow for consistent results throughout the testing, the following test criteria were followed:
1, One Windows XP SP3 VM running Iometer was used to measure performance across the three platforms.
2, I used the Iometer script from the VMTN Storage Performance thread here; the test script itself was downloaded from here.
The Iometer script runs the following tests:
TEST NAME: Max Throughput-100%Read
size,% of size,% reads,% random,delay,burst,align,reply
32768,100,100,0,0,1,0,0
TEST NAME: RealLife-60%Rand-65%Read
size,% of size,% reads,% random,delay,burst,align,reply
8192,100,65,60,0,1,0,0
TEST NAME: Max Throughput-50%Read
size,% of size,% reads,% random,delay,burst,align,reply
32768,100,50,0,0,1,0,0
TEST NAME: Random-8k-70%Read
size,% of size,% reads,% random,delay,burst,align,reply
8192,100,70,100,0,1,0,0
Two runs for each configuration were performed to consolidate results.
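For anyone reading those access-spec lines cold, here's a small sketch that turns them into plain English; the columns follow the header rows above (transfer size in bytes, % of disk used, % reads, % random, then the delay, burst, alignment and reply fields).

```python
# Parse the Iometer access-spec lines above into readable summaries.
# Column order: size,% of size,% reads,% random,delay,burst,align,reply
SPECS = {
    "Max Throughput-100%Read": "32768,100,100,0,0,1,0,0",
    "RealLife-60%Rand-65%Read": "8192,100,65,60,0,1,0,0",
    "Max Throughput-50%Read":   "32768,100,50,0,0,1,0,0",
    "Random-8k-70%Read":        "8192,100,70,100,0,1,0,0",
}

for name, line in SPECS.items():
    size, pct_size, pct_read, pct_random, delay, burst, align, reply = map(int, line.split(","))
    print(f"{name}: {size // 1024}KB transfers, "
          f"{pct_read}% read / {100 - pct_read}% write, "
          f"{pct_random}% random / {100 - pct_random}% sequential")
```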
Lab Results
After a long week or so (not only did I have to test each device twice, I also had to move the VM between devices, which took up time), I came up with the following results.
Iomega IX4-200D Results
Openfiler 2.3 Results
TeraStation Pro II Results
Conclusions
Having looked at the results, the overall position is clear: the Iomega IX4-200D is going to be my Veeam backup destination, whilst my HP MicroServer is going to be my centralised storage host for ESXi. I now have to decide whether to go for the RAID 0 or RAID 10 iSCSI approach, as they offer the best performance; at this stage I am tempted to go for RAID 10, however, because the disks in the server aren't new. Over the next few months I will see how reliable the solution is and take it from there.
One thing I can add, however, is that over the next couple of days I will be attempting to fit my M1015 RAID controller and seeing how that performs; once fitted, I will redo the Openfiler tests and post an update.
-
Iomega IX4 Remote Access – Update
An update to my previous post.
Having purchased the TZO Remote Access subscription for the NAS device, I eventually had to open a support call with TZO because the SSL certificate still hadn't been downloaded to the device.
The only way I actually managed to get the device to re-enable Remote Access was to completely wipe the unit, returning it to factory defaults. That's OK if it's a new device that doesn't contain data, but when you have 4TB of data on the unit it makes life a little difficult (luckily for me the IX4 is my backup unit; it stores replicated data from my old TeraStation). With a little bit of worry, I wiped the unit and started from scratch.
Having had the all-clear from TZO that the certificate had now been issued by the supplier, I was advised to reboot the NAS unit. I did that, and all of a sudden my URL was coming back as secured (yay). I now just needed to import the data from my TeraStation again… 🙁
Jump forward three weeks. In that time I had countless emails from my NAS telling me that the TZO client was unable to confirm the external IP address for the unit, which was concerning as I had already forwarded port 443 from my router to the NAS box.
A little further investigation and some minor changes appear to have resolved my issues (I used to get these emails five or six times a day; so far in the last 14 hours I haven't received any, so fingers crossed). The cause? Well, to try and speed up data transfer across my switched network I had enabled jumbo frames as well as bonding the two interfaces together (jumbo frames enabled on both the switch and the NAS). Disabling both appears to have fixed the issue. At the moment I want to leave it for a couple of days before I start troubleshooting to see whether it was the bonding or the jumbo frames that did it, but my gut says jumbo frames.
So far all is well and good (I can tell you that jumbo frames improved the data sync by about 2 days).
-
Iomega… are you firmware locking your IX4’s? – No, they aren’t
Having decided that the 1.2TB of usable space (after applying iSCSI to it) wasn't going to be enough, and as I had four 1.5TB drives lying around, I decided to see about upgrading the IX4 from 2TB to 6TB (I already have an 8TB unit used for file storage).
Now I am not the type of person who generally does something on the fly (especially where £400 of new hardware is concerned), so I spent a lot of time searching the web for a definitive answer on whether it was possible to upgrade the drives in the IX4: officially not, but some people have done so. I followed the instructions I found (which I had actually used when upgrading my Buffalo TeraStation Pro II from 1TB to 4TB and then 6TB): convert the NAS to JBOD (so that I didn't have to wait for the RAID set to re-sync after installing each drive), then replace one drive at a time, power it on, update the device via the web front end, and move on to the next disk.
All was good with the first three disks, but when it came to the last disk the IX4 decided to give up the ghost: I now had two flashing lights on the front of the NAS, one red and one white, and a nasty-looking graphic on the display. Most people would have just put the original drives back into the unit, but… having been a bit of an idiot, I hadn't marked which drives were which (you will see later that this doesn't actually matter, but at the time I didn't know that). Not only that, but I had also started formatting the original drives and putting them into my HP MicroServer, which left me up a certain creek without a paddle.
I then spent a week with various pieces of software (Acronis Home 2010 and 2011 (a cheap upgrade), Parted and PMagic) trying to see if I could clone one of the first disks. Each drive has two partitions: the first is only 2GB (1.9GB actually) in size and holds the EMC Lifeline OS, whilst the second takes up the rest of the space to give you your data partition. Unfortunately I had difficulties cloning the drive: either I didn't have enough space (so I used the 4th disk to store the images of the other three drives) or it simply took too long (the second partition wanted a byte-by-byte copy and took over 8 hours to complete). Finally I completed the copy, cloned it back onto the previous three drives, put them back into the unit… and got the same graphic. Obviously something was very wrong.
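If you want to see that two-partition layout for yourself on a Linux box, listing the partition table is enough; a quick sketch using parted, where the device path is just a placeholder for whichever disk you have attached:

```python
# Sketch: print the partition table of an IX4 member disk to show the small
# OS partition and the large data partition. The device path is a placeholder.
import subprocess

DISK = "/dev/sdb"   # hypothetical device node for the attached IX4 drive

# 'parted -s <disk> print' lists the partitions non-interactively.
subprocess.run(["parted", "-s", DISK, "print"], check=True)
```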
My next step was quite drastic, but as I had been trying for four days to get my bricked unit up and running and wasn't getting any closer, it was time for drastic measures. My 8TB unit has been in use for a couple of weeks now and is the primary storage for all of my data (some 3.5TB worth); on my old network it had taken over a week to copy (no jumbo-frame-enabled switch or NIC on the machine), and as I like to keep two copies of that data (the backup for the 8TB unit is my 6TB TeraStation Pro II) I was a tad concerned that if I encountered a failure on the Buffalo I would be in even more trouble. (Yes, I know, a decent backup solution is called for… but 3.5TB is a lot to back up, and that's why I run two NAS devices: I only ever write to one, and that data is then replicated over to the second one every 2 hours.) With a heavy heart I removed all my precious shares and deleted the RAID array; I was now left with 7.2TB on the NAS. After powering down the unit I took the drives out and, using the previously mentioned tools, tried to copy each of the 2TB drives so that I could then shrink them down onto the 1.5TB drives. Eight hours later I had my cloned image, which I started applying to the smaller 1.5TB drives. Once both partitions were on the smaller disks (and the data partitions had been resized to fit), I copied the 2GB partition from each of the four 2TB drives onto the corresponding 1.5TB drive (1 to 1, 2 to 2, etc.) and installed the drives back into the IX4 chassis.
This is where the fun really started 🙁
Surprisingly enough, the IX4 booted up; obviously it now thought it was my 8TB unit (thankfully that was powered off at the time). Amazingly, it looked like I had de-bricked my IX4: I had a device that would power up and would also restart when I changed the name and IP address of the unit (I wanted to get the 8TB unit up and its RAID array back so I could replicate my data onto it). During the course of rebooting the 6TB unit I had kicked off the creation of the RAID array on the 8TB unit; luckily for me the web front end was reporting that "Data Protection is being reconstructed" and my blue drive lights were going like the clappers. I then kicked off the same thing on the 6TB unit, but for some reason I didn't get the same behaviour: the blue drive lights were still flashing like mad, but instead of the unit showing an increasing completion percentage it stayed at 0 (I was at my desk for a couple of hours, and by the time the 8TB unit had 7% done the 6TB was still at 0). However, I decided to leave it overnight just in case.
Morning came and the 8TB unit was now at 52% complete whilst the 6TB unit was still showing 0%. Not good :(. Knowing that I would be out of the house for the next 12 hours, I hoped that in that time everything would sort itself out…
Tonight I got home to find my 8TB unit sitting happily with a new RAID 5 array on it, whilst the 6TB unit was sitting at a loading screen showing 95% loaded; it was unresponsive and I was now starting to get really peed off.
Destroying the RAID 5 array (again), I inserted the 1.5TB disks back into the 8TB unit to get the OS back onto them and tried again. I went through the process of renaming and changing the IP address, and each time the unit powered on fine, but EVERY time I tried converting the JBOD disks to RAID the unit would brick itself.
I was sure that something strange was going on. I know for a fact that those 1.5TB drives are good: they had been used successfully in another NAS unit (I had owned two TeraStations), and I knew from SMART that there was nothing wrong with any of them. Being desperate, I thought I would see what happened if I put the original drives back into the IX4. Pulling the already-blank 500GB drives from my HP MicroServer, I put three of them into the 6TB unit; the last one I placed back into the 8TB unit to get the OS installed onto it, then took that disk, put it into slot 1 of the IX4 and powered it up. Again I changed the name and IP address, rebooting between each change, then deleted the shared storage folders and converted the disks to RAID.
At the time of writing, the now-2TB IX4 unit is sitting at 27% complete on the reconstruction of the RAID array. Why? Well, either Iomega are gimping the IX4 so that it can only use their own drives (I have seen something similar with Lenovo controller cards), or the particular version of the Samsung drive I was using (the HD154UI) just isn't compatible when converting it to RAID.
Either way, after trying for over a week to de-brick my 2TB IX4-200d (after upgrading the drives from 500GB to 1.5TB and then downgrading back to the 500GB Seagate drives), the unit is restored to its full (but size-limited) glory.
My final word on this at the moment was… if you need a larger Iomega IX4, don't skimp: pay for the size that you will really need rather than attempting to upgrade the unit yourself. My final word now is that it is possible to upgrade the disks in the IX4, but be warned that whilst I have had some successes, I have also obviously had some failures. In this instance the 1.5TB drives didn't work, but as you will find out later, my WD20EARS 2TB drives did work.