Iomega… are you firmware locking your IX4s? – No, they aren’t

7th January 2011, by Simon

Having decided that the 1.2tb of usable space on my 2tb IX4 (after carving out the iSCSI volume) wasn’t enough, and as I had four 1.5tb drives lying around, I decided to see about upgrading the IX4 from 2tb to 6tb (I already have an 8tb unit used for file storage).

Now I am not the type of person who generally does something on the fly (especially where £400 of new hardware is concerned), so I spent a lot of time searching the web for a definitive answer on whether it was possible to upgrade the drives in the IX4. Officially it isn’t, but some people have done it. The instructions I found (the same approach I had actually used when upgrading my Buffalo TeraStation Pro II from 1tb to 4tb and then 6tb) were to convert the NAS to JBOD first (so that I didn’t have to wait for the RAID set to re-sync after installing each drive), then replace one drive at a time: power the unit on, update the device via the web front end, and move on to the next disk.

All was good with the first three disks, but on the last disk the IX4 decided to give up the ghost: I now had two flashing lights on the front of the NAS, one red and one white, and a nasty-looking graphic on the display. Most people at this point would simply have put the original drives back into the unit, but… having been a bit of an idiot I hadn’t marked which drive was which (as you will see later this doesn’t actually matter, but at the time I didn’t know that). Worse, I had also started formatting the original drives and putting them into my HP MicroServer, which left me up a certain creek without a paddle.

I then spent a week with various pieces of software (Acronis Home 2010 and 2011 (a cheap upgrade), Parted and PMagic) trying to clone one of the first disks. Each drive has two partitions: the first is only 2gb (1.9gb actually) and holds the EMC Lifeline OS, while the second takes up the rest of the space to give you your data partition. Unfortunately I was having difficulties with the cloning: either I didn’t have enough space (so I used the 4th disk to store the images of the other three drives), or it simply took too long (the second partition wanted a byte-by-byte copy and took over 8 hours to complete). Finally I completed the copy, cloned it back onto the previous three drives, put them back into the unit… and got the same graphic. Obviously something was very wrong.
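For anyone curious, the byte-by-byte copy the tools were doing boils down to a `dd` pass per partition. Below is a minimal sketch run against a small dummy file so it is safe to execute; on the real drive the source would be the ~2gb Lifeline partition (something like /dev/sdX1 — that device name is an assumption and will differ on your system), and the slow 8-hour data partition copy works the same way, just bigger.

```shell
# Safe demo of a byte-for-byte partition clone using a dummy file
# instead of a real /dev/sdX1 (device name hypothetical).
set -e
dd if=/dev/zero of=fake_part.img bs=1M count=4 2>/dev/null   # stand-in for the OS partition
dd if=fake_part.img of=clone.img bs=1M 2>/dev/null           # the actual byte-for-byte copy
cmp -s fake_part.img clone.img && echo "clone matches"
```

On a real partition you would also want `conv=noerror,sync` if the source disk is suspect, though for healthy drives a plain copy like this is fine.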
My next step was quite drastic, but after four days of trying to get my bricked unit up and running without getting any closer, it was time for drastic measures. My 8tb unit had been in use for a couple of weeks as the primary storage for all of my data (some 3.5tb worth); on my old network it had taken over a week to copy (no jumbo-frames-enabled switch or NIC on the machine). As I like to keep two copies of that data (the backup for the 8tb unit is my 6tb TeraStation Pro II), I was a tad concerned that a failure on the Buffalo would leave me in even more trouble (yes, I know, a decent backup solution is called for, but 3.5tb is a lot to back up, which is why I run two NAS devices: I only ever write to one, and that data is replicated over to the second every 2 hours).

With a heavy heart I removed all my precious shares and deleted the RAID array, leaving me with 7.2tb on the NAS. After powering the unit down I pulled the drives and, again using the previously mentioned tools, tried to copy each of the 2tb drives so that I could shrink them down onto the 1.5tb drives. Eight hours later I had my cloned image, which I applied to the smaller 1.5tb drives. Once both partitions were on the smaller disks (with the data partitions resized to fit), I copied the 2gb partition from each of the four 2tb drives onto the corresponding 1.5tb drive (1 to 1, 2 to 2 and so on), then installed the drives back into the IX4 chassis.
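The per-slot copy at the end is just the same `dd` trick repeated, keeping the slot mapping intact. Here is a sketch using dummy image files so it runs anywhere; on real hardware each source would be the 2gb Lifeline partition of 2tb drive n and each target the matching partition on 1.5tb drive n (any concrete device names are assumptions, not what the IX4 actually exposes).

```shell
# Per-slot OS-partition copy (1 to 1, 2 to 2, …) demonstrated on
# dummy files; real sources/targets would be partitions on physical drives.
set -e
for n in 1 2 3 4; do
  printf 'lifeline-os-from-2tb-drive-%s' "$n" > "src_${n}.img"  # stand-in for 2tb drive n
  dd if="src_${n}.img" of="dst_${n}.img" bs=1M 2>/dev/null      # copy onto 1.5tb drive n
done
cmp -s src_2.img dst_2.img && echo "slot 2 matches"
```

The point of keeping the 1-to-1 mapping is that each drive’s OS partition may carry slot-specific state, so swapping them between bays is best avoided.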

This is where the fun really started 🙁

Surprisingly enough the IX4 booted up; obviously it now thought it was my 8tb unit (thankfully that was currently powered off). Amazingly it looked like I had de-bricked my IX4: I had a device that would power up, and would also restart when I changed the name and IP address of the unit (I wanted to get the 8tb unit up and its RAID array rebuilt so I could replicate my data onto it). During the course of rebooting the 6tb unit I had kicked off the creation of the RAID array on the 8tb unit; luckily for me the web front end was reporting that “Data Protection is being reconstructed” and my blue drive lights were going like the clappers. I then kicked off the same thing on the 6tb unit, but for some reason I didn’t get the same behaviour: the blue drive lights were still flashing like mad, but instead of the percentage complete increasing it stayed at 0 (I was at my desk for a couple of hours, and by the time the 8tb unit had 7% done the 6tb was still at 0). I decided to leave it overnight just in case.

Morning came and the 8tb unit was now at 52% complete whilst the 6tb unit was still showing 0%, not good :(. Knowing that I would be out of the house for the next 12 hours, I hoped that in that time everything would sort itself out…

Tonight I got home to find my 8tb unit sitting happily with a new RAID 5 array, whilst the 6tb unit was stuck at a loading screen showing 95% loaded; it was unresponsive and I was now starting to get really peed off.

Destroying the RAID 5 array (again), I inserted the 1.5tb disks back into the 8tb unit to get the OS back onto them and tried again. I went through the process of renaming and changing the IP address, and each time the unit powered on fine, but EVERY time I tried converting the JBOD disks to RAID the unit would brick itself.

I was sure that something strange was going on. I know for a fact that those 1.5tb drives are good: they had been used successfully in another NAS unit (I had owned two TeraStations), and I knew from SMART that there was nothing wrong with any of them. Getting desperate, I thought I would see what happened if I put the original drives back into the IX4. Pulling the already blank 500gb drives from my HP MicroServer, I put three of them into the 6tb unit; the last one I placed into the 8tb unit to get the OS installed onto it, then took that disk, put it into slot 1 of the IX4 and powered it up. Again I changed the name and IP address, rebooting between each change, then deleted the shared storage folders and converted the disks to RAID.
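For the record, ruling the drives out via SMART is a one-liner with smartmontools: `smartctl -H` prints an overall health verdict per drive. The sketch below parses a captured sample line rather than querying real hardware, so it runs anywhere; the sample text and the /dev/sdX device name in the comment are assumptions for illustration.

```shell
# Checking a drive's SMART overall-health verdict. A hard-coded sample
# line stands in for real output; on real hardware you would use
# something like: sample="$(smartctl -H /dev/sdX)"  (device hypothetical)
sample="SMART overall-health self-assessment test result: PASSED"
case "$sample" in
  *PASSED*) verdict="drive OK" ;;
  *)        verdict="drive suspect" ;;
esac
echo "SMART: $verdict"
```

A PASSED verdict only means the drive thinks it is healthy; it says nothing about whether the NAS firmware will accept the model, which is exactly the distinction this episode turned on.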

At the time of writing this post I can tell you that the now 2tb IX4 unit is sitting at 27% complete on the reconstruction of the RAID array. Why? Well, either Iomega are gimping the IX4 so that it can only use their drives (I have seen something similar with Lenovo controller cards), or the particular version of the Samsung drive I was using (the HD154UI) just isn’t compatible when converting to RAID.

Either way, I can tell you that after trying for over a week to de-brick my 2tb IX4-200d (after upgrading the drives from 500gb to 1.5tb and then downgrading them back to the 500gb Seagate drives), the unit is restored to its full (but size-limited) glory.

My final word on this at the moment is: if you need a larger Iomega IX4, don’t skimp; pay for the size that you will really need rather than attempting to upgrade the unit yourself.

My final word now is that it is possible to upgrade the disks in the IX4, but be warned that whilst I have had some successes I have also obviously had some failures. In this instance the 1.5tb drives didn’t work, but as you will find out later my WD20EARS 2tb drives did.