As some of you may have read, I previously had a number of issues when I tried to upgrade my 2TB IX4 into a 6TB IX4: whilst the disks were read and configured correctly, no matter what I tried I couldn't configure any Data Protection on them.
Over the weekend I had to remove a disk from one of my test systems due to an ongoing SMART issue with the drive. The drive itself showed up OK in the BIOS, but using a USB boot disk running SMART scanning software (Parted Magic) I discovered an issue with the Spin Up Time attribute that was causing problems with NexentaStor (it was dropping the drive, which is what prompted the in-depth diagnostics).
This failure led me down the path of replacing the disk. As luck would have it I have a total of 8 of these disks shared across two storage environments, the first being the test lab, the second being my Buffalo Terastation Pro II. Not wanting to decommission the TS Pro II until the data had been replicated to another device, I decided to bite the bullet and see about upgrading the disks in the IX4 again, this time using the same 2TB Seagate Barracudas currently sitting in the 8TB unit.
A quick trip to my local PC World followed (shocking, I know, but I wanted the disks the same day). I struggled to find Seagate Barracudas, but I did find some 2TB Western Digital EARS disks at a lower price than the Barracudas were listed at.
Going through the procedure of removing all the data from the IX4, I powered down the device, removed 3 of the old 500GB drives and replaced them with the new WD EARS drives. Powering the unit back on, I got the usual message about "Data is unavailable due to a failure" and then a further request about overwriting unused drives to add them back into the system. What this process actually does is partition each drive into two partitions: the first holds the OS for the IX4, whilst the second is the data partition.
A quick check back after the drives have been added gives me the following.
Now a word of advice about the EARS disks: they use the new Advanced Format, which is now the default disk format being shipped. It should also be pointed out that there have been issues with older OSs not being able to handle the new format correctly (older OSs such as Windows XP, WHS and 2003 all have issues with the native format of these drives). Depending on the manufacturer of the drive there are a number of fixes out there to get around this issue: you can either use a jumper to short out two pins (7 and 8) or use a bootable CD image and change it with the drive vendor's software. In my case I wasn't sure whether the EMC Lifeline OS was capable of supporting the new Advanced Format drives, and the last thing I wanted was a unit that wasn't functioning correctly. Luckily enough I had gotten past the first hurdle in that the drives were recognised by the IX4; now I needed to ensure that performance wasn't going to take a hit with the new format (again, something that has been hinted at by users).
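For anyone wondering what the fuss over the format is, the performance problem boils down to alignment: Advanced Format drives use 4KiB physical sectors behind a 512-byte logical interface, so a partition that starts on a sector number that isn't a multiple of 8 forces a read-modify-write cycle on every write. A quick sketch of the arithmetic (the sector numbers here are illustrative only, not read from the IX4):

```shell
# Why misaligned partitions hurt on Advanced Format drives: the drive has
# 4 KiB physical sectors but reports 512-byte logical sectors, so a write
# that starts off a 4 KiB boundary straddles two physical sectors.
start_sector=63                         # classic MS-DOS partition start
offset_bytes=$(( start_sector * 512 ))  # byte offset of the partition
if [ $(( offset_bytes % 4096 )) -eq 0 ]; then
  status="aligned"
else
  status="misaligned"
fi
echo "sector $start_sector: $status"
```

Sector 63, the classic MS-DOS starting point, works out to byte 32256, which isn't a multiple of 4096; sector 64 would be fine. As I understand it, the pin 7-8 jumper on the WD drives is the vendor's workaround for exactly this situation.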
Using my now trusty IOmeter VM I carried out some tests to see what kind of performance I was getting from the new drives compared to the old ones. As you will see in a moment I was a tad disappointed with my results.
Especially when compared to my other 8TB unit.
Obviously I had some performance issues here. I should admit that this device is actually the backup device for my main NAS (I currently replicate data across two NAS devices; the idea with the new disks in the IX4 is to replace my existing Buffalo Terastation so I can use its disks in my test lab), so theoretically I could just live with the performance hit, or I could see if I could squeeze some more out of these drives. Of course I went for the second option.
As previously mentioned, there are two fixes available to owners of these drives. Because these devices were going to be formatted by the EMC Lifeline software I didn't want to try the software approach, so I went down the route of shorting out pins 7 and 8 using jumpers (no, not the woolly kind).
Removing 3 of the drives from the unit, I placed the jumper between pins 7 and 8 and put the drives back into the unit. I also removed the 4th drive and placed one of the original 500GB drives back in there to ensure that the drives didn't get corrupted during the rebuild process (thereby potentially bricking the device again). Luckily the unit spun up and added the 3 drives, which allowed me to power it down and replace the 4th drive. Powering the unit back up and allowing all 4 drives to be placed online, I proceeded to re-create my RAID 5 array. One thing to note: RAID creation is a lengthy process. Creating the first RAID 5 array took me nearly 24 hours, so I wasn't expecting great things, yet in about 14 hours I had a freshly created RAID 5 array waiting for testing.
Again powering up my trusty IOmeter VM, I carried out further testing to see if there was any improvement with the newly formatted drives, and got the following results.
Here we can see that the latency issues experienced when the drives were first tested aren’t there and the results are on par with the results found from my Barracuda equipped unit.
What that means is that whilst the WD20EARS drives are recognised in both formats, there are latency issues when using the new 4K format. If you're not concerned about performance then you don't need to worry about jumpers and can leave the drives as is; however, you will be impacted if you're trying to transfer large amounts of data onto the device, and as such you may be better off using the older format.
Obviously there are issues when it comes to upgrading the IX4 to a larger capacity; my own experiences show this. What wasn't obvious previously is that this is dependent on the drives used (not something I have had issues with in the past with my Buffalo Terastation Pro II, which took everything thrown at it). My advice for those wanting to expand their IX4s is to try and get the same disk type (in the case of my original 8TB unit you can see that they had the Barracudas fitted); if that's not possible then try and get disks with the same characteristics (spin rate etc). The last thing you want and need in something like the IX4 is fast-spinning disks, because that equates to increased heat, which could cause failures further down the line.
I can confirm that from a heat perspective the WD20EARS offer no change in operating temps when compared to the Barracudas.
A quick run down of the process to upgrade the drives in your IX4.
First of all you need to ensure that the storage is empty and that all shares have been deleted. Carrying this out on a unit that's got shares\data on it hasn't been tested by me, mainly because I know of no way to expand the RAID partitions once created, and this could prove to be an exercise in futility if you try to expand the device with data on it.
Once the unit has been purged of all data, power the device down via the web interface, remove the case and remove three of the disks, ensuring that one of the original disks is left in place. So far in my testing I have always left disk 0 in place, and I haven't experienced any problems with the disks that were removed\replaced: as long as the unit has a drive in it with the EMC Lifeline OS installed, the OS should get replicated to the new drives when they are brought online. Once those three disks are online, repeat the process to bring the final disk online.
Once the new disks have been brought online it's time to create your data protection. In the case of JBOD your device will be online in a few minutes; when it comes to RAID 5, allow a day before the device is ready for use.
A final word: whilst this has worked for me, I can't be certain that the drives you use will be accepted by the IX4. One of the things that Iomega have never offered is the ability for owners to increase the units' disk space, although they do offer replacement disks should one of yours fail.
Nice write-up. As for me, I tried Nexenta Core on my DIY NAS (it doesn't see my SATA drives), and I tried ESS6, which does not recognise my LAN card (only when it runs in LiveCD mode…).
If NexentaStor didn't work for you then you may want to have a look at OpenIndiana or Solaris Express and utilise Napp-IT as a front end for ZFS, as they use newer versions of the ZFS and Solaris OS software.
Is the LAN card an onboard one? Also, when did you download DSS? The only reason I ask is that there was a new version posted [2011-05-06], Open-E DSS V6 ver. 6.00 up75 build 5377, which may contain newer drivers.
Also try posting over on the Open-E forums as they may have a fix for you.
How do I replace a failed hard drive in an IX4 (4TB = 4x1TB) when it is configured for 2.72TB?
You have to remember that 4x1TB drives won't give you 4TB of usable space in a RAID environment. To replace the failed drive you should log onto the NAS device, go to Settings and then either check the Event Logs or click on Disks and see which of the drives has failed there. Take a note of the drive number (1 – 4), power down the NAS and open up the case. Once it's open, find the failed drive and remove it, replacing it with a new drive (in this instance I would suggest purchasing a new drive from Iomega if the existing drive isn't under warranty). Once the drive is fitted, power on the NAS and allow the system to rebuild the RAID array.
That should be all you need to do.
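For anyone wondering where the 2.72TB figure comes from: RAID 5 spends one drive's worth of capacity on parity, and the unit then reports the remainder in binary units (TiB) while the drive labels use decimal TB. A sketch of the arithmetic:

```shell
# Usable space of a 4 x 1TB RAID 5 array, and why it shows as ~2.72TB.
drives=4
drive_bytes=1000000000000                       # 1TB as printed on the label (decimal)
usable_bytes=$(( (drives - 1) * drive_bytes ))  # RAID 5: one drive's worth is parity
# Firmware tends to report capacity in binary TiB (2^40 bytes), not decimal TB
usable_tib=$(awk -v b="$usable_bytes" 'BEGIN { printf "%.2f", b / 2^40 }')
echo "usable: $usable_tib TiB"
```

That comes out at roughly 2.73TiB before filesystem overhead, which is where the 2.72TB the unit reports comes from.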
Having problems with booting my StorCenter ix4-200d.
After doing a format on all 4 disks (1TB each) the system is not booting anymore.
The screen is displaying the network and the usb symbol.
Seems like the partition with the system on it is gone on all 4 disks.
Hopefully there is a download or image file with the OS partition available somewhere.
(or a disk taken out of an ix4-200d)
Please let me know if you know where to get the system image or partition.
Otherwise my StorCenter is bricked
Were these the original disks or upgraded drives? If you still have your original drives to hand then you should be able to restore the unit by replacing drive 0 with an original drive and powering the unit up. As I mentioned in my post, I followed a 2-stage upgrade path by upgrading only 3 drives first and then, once they were online, powering off and replacing the last drive.
Let me know how it goes.
One thing that I did is use the dd command from the IX4 to create an exact block copy of the first 2GB partition on each drive. You can enable SSH from http://deviceip/support.html.
dd if=/dev/sda1 of=/mnt/soho_storage/samba/shares/USB_CBM_Flash_Disk_1boot-ix4.img where USB_xxxxx is a thumb drive.
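That command copies the OS partition out to an image file on an attached thumb drive; writing it back from a Linux desktop is just dd in the other direction. Sketched here with ordinary files standing in for the device nodes so it can be tried safely (the .fake names are placeholders; on real hardware they would be /dev/sda1 and the replacement drive's first partition):

```shell
# Image-and-restore round trip with dd, using files as stand-ins for
# the real partitions so nothing gets overwritten by accident.
printf 'EMC Lifeline boot blocks' > sda1.fake            # stand-in for /dev/sda1
dd if=sda1.fake of=boot-ix4.img bs=4M 2>/dev/null        # take the image
dd if=boot-ix4.img of=replacement.fake bs=4M 2>/dev/null # write it to the new drive
cmp -s sda1.fake replacement.fake && result="restore matches original"
echo "$result"
```

On the real thing, triple-check the of= target before pressing enter; dd will happily overwrite the wrong disk.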
I can now take any drive and attach it to a Linux desktop and dd the image back.
I am curious if anyone with an ix4-200d Cloud Edition would be willing to share the dd image of their boot partition? I am not sure it will run on my box, but after these Seagate drives started dropping like flies, and after losing some data, I am starting over with larger WD drives in a RAID 10.
Mark: Would you by any chance be willing to share that DD image you have from the 2G partition? My IX4-200d is down at the moment, all 4 disks are gone…
Any reason the image I have linked in my de-bricking post won't work? Obviously, apart from it being an Acronis image, it's good to go.
This is a great article. I was having a problem upgrading my IX4-200D.
Does anyone know the largest drive size the IX4-200d supports? I see the newer Cloud editions support up to 12TB total (4x3TB), so I'm just wondering if anyone else has tried upgrading to anything larger than 4x2TB on the standard IX4-200d.
Hi. I bought a second-hand ix4-200d on eBay and, as I found out, it came without disks. Now I am trying to put some WD10EARS disks in it, but can't get it to work. It just hangs at the USB+NETWORK screen (2 pictures). Can you help? I've been banging my head all day.
Hate to dredge up an old post, but I upgraded my IX4-200D drives from 1TB drives to 2TB the other week, and since I’ve done so I’m having some weird behavior. Basically, although I’ve hardly stored anything on the unit at all…the unit continues to state that it is down to less than 5% free space. I’m thinking that expanding the drive somehow expanded the consumption in one of the other partitions that services the XFS array or something… Anybody have any similar experiences or thoughts? I haven’t noted any bad behavior…just the warning message thus far.
James, when you upgraded your disks did you start from scratch, add the disks in one at a time (i.e. remove a disk, replace it and then allow the array to rebuild itself), or did you follow my instructions?
Essentially when I upgraded my disks I copied the contents of the 500GB drive (just the OS portion rather than the storage array portion) and let the unit create the storage itself; it used the entire disk up.
I did the one-drive-at-a-time routine. It all seemed to work fine (took forever)… just the nuisance warning message about running out of space (which it most certainly is not). I did get mine upgraded to the recently released firmware, so now I've actually got Time Machine and AFP support that works again!
When I did the one drive at a time, it didn’t realize the additional space until the last drive had been joined to the array completely if I remember correctly. I don’t have much on there right now…maybe I should start over on it, but I hesitate to do so since the unit seems to be working just fine, save for the phantom error!
James, the only thing I can think of is that when you upgraded the disk you didn’t destroy the array first but instead upgraded an existing one with new disks.
I need to have a look at the new firmware as well as the firmware for the cloud edition (unfortunately Iomega require registration of a qualifying device before you can download it) to see what improvements it offers.
I would definitely suggest trying again but this time when taking an image of the first partition make sure that it doesn’t have any array configured, when I did mine the entire process probably took no more than 30 – 40 minutes at most.
I’ll have to try that. Just for what it’s worth…I did let the built in firmware re-establish the data protection by changing the RAID type a few time to see if that would solve the warning, but it did not. I don’t even get the warning all of the time…just from time to time and typically when I’m doing a lot of writes to the NAS, such as a full-on system backup for one of my home machines or something. I should really read more about the XFS backing this NAS, but I just haven’t had time lately to do so…
I’m using the Seagate Barracuda Green 2TB drives and as far as I know there is no way to disable the 4k sectors on these drives. Of course, Seagate states that their “smartalign” drive firmware handles everything for you, but I don’t have any performance data to support that claim since I don’t have any “before” drive swap performance numbers to compare to. So I’ll just have to live with it. I don’t think anyone’s ever confused the IX4-200D as being some barn burner NAS anyway 😉
What I found with my drives is that they had a jumper setting that turned them back to 512-byte sectors instead of 4K; I did notice a slight improvement when moving back to the older format.
According to Western Digital, the EARS drives are not meant for RAID-5, only RAID-0 and RAID-1. It looks like there is no issue in using these drives, correct? Also, what are your thoughts on replacing one dead Barracuda with a new EARS drive? From what I can gather, with the jumper modification, it should be no issue.
Hi Eric, I haven't experienced any issues with the EARS drives; they have been in my second IX4 working without issue for a while now. The second IX4 is actually a sync target of my first IX4 and gets synced every 3 hours or so.
Well, it looks like the EARS drives are discontinued. I ended up using a WD10EURS 1TB drive for a simple one-failed-drive swap. These are apparently the drives that are used in many DVR systems. With the 7/8 jumper in place, everything works great (so far). I still have 3 of the original Seagates in place.
Simon–Thanks for all your comments. This website and your feedback were truly indispensable and reassuring!!!
…and down goes Seagate drive number 2… Just finished installing a second WD10EURS. I noticed from comparing the labels on the two drives that the Seagates pull less amperage. Any idea if this will become a problem as more are replaced?
What's the draw? I only ask because I know that these units can power 3TB drives without issue.
I will check the draw on my old and new drives when I get back home.
The label on the original Seagate shows:
5VDC … 0.295A
12VDC … 0.219A
The label on the WD10EURS shows:
5VDC … 0.70A
12VDC … 0.55A
Not sure if this is a big deal or if the unit can support that draw. Interestingly enough, just before drive 2 went bad I got an error message that the unit was showing a voltage lower than the tolerance level. I don’t remember the specific numbers, but it only reported it once.
I have an Iomega StorCenter ix4-200d 8TB and the device boots up to 95% and then stops… it will sit there for days. I have tried using the reset button on the back of the device but it did nothing. Does anyone know how I can get my data off the ix4-200d? Are there any tools out there to get the data off, or is there a way to reflash the unit so I will be able to get my data off? Please, I am desperate.
Before trying any of the following suggestions I would definitely see about taking a backup of your existing drives (potentially difficult I know but I don’t want you losing any of your data).
I would suggest using something like Acronis to back up each drive, and take note that the drives come out from the bottom up (so disk 0 is the bottom disk in the unit). If I were you I would get a marker pen and number each drive so you know the order to put them back in.
Once you have a backup (or in case you're going to bypass this step), the next thing you want to do is fire up a copy of Parted Magic (http://partedmagic.com/doku.php?id=start) to look at the structure of your drives; it could be that one of them has a physical fault, and that's why the unit is failing to load up properly.
If after you have inspected all your drives you discover that they are all OK, then you may want to try putting only three of the drives back in the unit; this forces the unit to load up again, although it will complain about a failed drive. I am only suggesting removing a single drive at the moment because once you remove more than one from the boot process you're going to lose your data (unless of course you're only using RAID 1 anyway?).
If none of the above steps work then the next thing to do is try and find a Linux OS that will be able to read the raid information, unfortunately as thats not really my strong point I can’t really help there too much.
The final option open is finding someone with an IX4 locally to you to see what happens if you put your disks into their unit, it ‘could’ be an issue with the IX4 but in all honesty I am not really expecting that to be the case.
The ix4-200d is a piece of junk. I updated the firmware (188.8.131.5267) on different 3TB and 6TB NAS units with faulty drives, and now, after rebuilding, the disks are good?
Now I've upgraded the system with 4 good drives!
Disk 3 shows a fault; what happened?
I sold 8 ix4-200d NAS units, and all of them have had failures (Iomega has replaced several NAS units and drives).
I admit that the IX4 isn't the best device out there but (and unfortunately this is true) you get what you pay for. The IX4 is a cheap and cheerful NAS drive aimed more at the home user than the prosumer/enterprise market, and as such doesn't have the reliability that more expensive devices have.
I do admit that I have had 2 devices returned due to some issues but I use the devices constantly and to be fair they function as I expect.
How are you using your ix4-200d devices, do you actually run VM’s from them? If so how many concurrently? I was worried about performance so I have been running VM’s locally on my whitebox…
Hi Scott, I don't use the ix4 to host VMs. If you look at this post here you can see that I tested the ix4 against the Openfiler product running on an HP Microserver; the ix4 didn't perform too well.
I use my ix4’s as a backup medium instead (which is fine) and rely on a dedicated NAS box (self built) running FreeNAS and ZFS to host my VM’s. Price and performance wise I found it easier to run with the FreeNAS solution.
I’ve got a serious problem that I hope someone can help me with.
My ix4 reported a drive failure. I had a drive of the same capacity, but not the exact same type, which I swapped in for the failed drive. It didn’t work, and reported that all 4 drives need overwrite confirmation. That is, it thinks all 4 drives are new. I swapped the faulty drive back in, but it is still saying that all drives are new and require overwrite confirmation.
I can’t lose the data on these drives – it has 10 years of photos on it.
I am fairly tech-savvy, so have installed PuTTY on my Windows PC to get SSH access, which works.
What I don’t know is what commands to use to either pull the files off the NAS onto a local drive, or to force the RAID reconstruction to go ahead regardless, and not overwrite the drives.
Any help gratefully received.
Ben, I haven't used it myself but have a look at http://www.diydatarecovery.nl/forum/index.php?topic=1309.0. Yes, it's a paid-for piece of software, but ask yourself: how much is my lost data worth to me?
Also have a look at http://forum.nas-central.org/viewtopic.php?f=251&t=2101 for possible linux solutions.
I was going to suggest having a look on Christopher Kuseks blog as well but just saw your posting there (Christopher is a tweet buddy of mine).
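On the question of what commands to use over SSH: the shares sit under /mnt/soho_storage/samba/shares (the same path used in the dd example earlier in the comments), so once the array is readable they can be copied off to another machine with scp or rsync. A sketch, simulated with local directories so the idea can be tried without the NAS (the remote path and hostname in the comment are assumptions):

```shell
# Simulated copy-off of the share tree; on the NAS the source would be
# /mnt/soho_storage/samba/shares and the target a remote machine, e.g.
#   rsync -a /mnt/soho_storage/samba/shares/ user@desktop:/rescue/
mkdir -p shares/photos rescue
printf 'irreplaceable photo' > shares/photos/img001.jpg
cp -a shares/. rescue/       # local stand-in for the rsync/scp step
cmp -s shares/photos/img001.jpg rescue/photos/img001.jpg && result="copy verified"
echo "$result"
```

Copying the data off first means any subsequent attempt to force the array back together can't make things worse.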
Simon, thanks for the write-up. It was your article that actually inspired me to proceed with upgrading my unit. Your article, coupled with the write-ups on the NAS Central forums (http://forum.nas-central.org/viewtopic.php?f=251&t=4401&sid=3893fb2ac3bd343916a8bbcce659d36a), was a great help. I pretty much followed your steps by way of removing all shares, pulling all drives except the 1st, etc. Like I mention below, all-in-all it's a pretty easy process; it just takes a lot of time and patience.
Below is my setup and the steps I followed. The firmware is what I had to start with and is what I’m currently running now that the upgrade is complete. I saw no reason to upgrade it, as the unit’s been working flawless for several years now.
Again, just wanted to say thanks and give back to the community my experience to share with others.
My Firmware Version = 184.108.40.20698 (I am now at 4 x 2TB on this ver)
Original drives = 4 x Seagate ST3500412ASCC32
New drives = 4 x WD20EZRX-00DC0B0 (pins 5&6 and 7&8 jumpered – 3Gb/s SATA and 512-byte sectors, respectively)
These are the Advanced Format 6Gbs SATA III Green Drives.
Backup ALL data you want to keep.
Delete ALL shares from ix4-200d.
Quick erase all drives – This pretty much defaults the unit back into Raid 5 mode. At least that was my experience. My unit was originally configured and running in Raid 5 mode before I started this venture. I think I tried putting it into JBOD after removing all the shares and before doing the quick erase, and then experienced the unit defaulting back into Raid 5 mode once the quick erase was finished – your mileage may vary…
Removed original Seagate Drive 1 from the ix4-200d and took an image of the drive in separate PC for backup purposes. This took ~7hrs. to image the drive to a 2TB WD USB My Passport on a USB 2.0 port on the PC.
Put the original Seagate Drive 1 back into the ix4-200d and removed original Seagate drives 2,3, and 4, replacing them with 3 brand new WD20EZRX drives. This rebuilt the array (Raid 5 from earlier step) to ~5.4TB. This took nearly 22 hrs. to run.
Changed data protection to JBOD, shut down the ix4-200d and replaced the remaining original Seagate drive in position #1 with the last remaining WD20EZRX drive. The ix4-200d recognized this pretty quickly as 7.2TB of JBOD storage space once rebooted.
Changed data protection back to Raid5 and let it rebuild the array to approximately 5.4TB. This took nearly 24 hrs. to complete.
The WD20EZRX drives were chosen for a few reasons: 1.) they had the fastest sustained host-to/from-drive transfer rate (145 MB/s) I could find among Seagate and WD drives at the time; 2.) they have a jumper-configurable 512-byte sector option (short pins 7&8) that matched the original Seagate drives without having to get messy with a software solution; 3.) they have a jumper-configurable buffer-to-host option (short pins 5&6) for 3Gb/s to match the original Seagate drives; and 4.) they were relatively easy to find. I did not look at any drives other than Seagate and WD. All-in-all, the upgrade went fairly smoothly. What's not mentioned throughout a lot of the threads I found is that once the unit gets defaulted after the drive erasure process, it will reset your IP settings. You can watch the display (if it's still working, mine is starting to fade) or your DHCP server and figure out the IP to get back to the web interface. A lot of the drive "confirmation" messages you see on the display actually have to be performed via the web user interface, and they're not exactly named the same. But with a little intuition, you can figure out what to do.
Good luck with your upgrade.
Jim, I'm wondering what your experience has been using WD20EZRX drives in this scenario. I'm about to install the same model in a RAID setup, although not using an IX4 to do so. Did setting the 7-8 jumper put the drives into 512-byte block compatibility mode as it does with the WD20EARS? Documentation from WD is very spotty in this regard. In my case, I'm planning to attach 3x WD20EZRX and 3x WD20EARS to a Dell Perc6 controller and run the whole lot as a 6-disk RAID 5 array. The WD20EARS have been running for about 18 months in a RAID 5 array and I've had no issues with them. I'm hoping that the WD20EZRX drives will fare just as well.
Yes, the WD20EZRX drives are still running without a hitch in my IX4. The 7-8 jumper does indeed put the drives into 512-byte compatibility mode. I will agree that the information on this is sparse and not very clear; the documentation refers to this jumper as the Advanced Format jumper setting, and it needs to be set BEFORE you prepare your drive for use.
As I'm sure you're aware, best practice is to use all the same make and model of drive in an array, but I'm not going to tell you that what you're about to do won't work. However, performance of the array could suffer due to differing characteristics and performance metrics between the different drives. Again, your mileage may vary. In looking at the 2 different models you're about to use, you need to set the 5-6 jumper on the EZRX drives too: the EARS drives you have are only 3Gb/s, as are the original drives and the controller in the IX4-200d.
Good luck with your endeavor,
Hi there, I'm glad you were eventually able to upgrade your NAS :). I've had an Iomega StorCenter ix4-200d 8TB for almost two years now, and today it froze, so we pulled the power plug. When it booted up it proceeded to do a data security check due to the improper shutdown, and then the red light started to flash, saying the second HDD had failed. My configuration is RAID 5, so I have 5.5TB of usable space out of a potential 8TB (unformatted capacity).
Can I use a 2TB WD Green drive to replace the problematic drive, and will I need to back up all my data before I attempt a rebuild? I'm able to access my data just fine with no performance lag, so I'm assuming it just needs a rebuild once the new hard drive is inserted.
Hi there mate. I have an ix4-200d with 4x1TB drives from the factory in a RAID 0 (I'm after max capacity, no redundancy). Can I purchase 4x2TB or even 4x3TB drives, rip out the existing drives (losing all data is OK, I have it replicated elsewhere), put in the new drives, boot it, detect the new disks via the web interface and set up a new array based on the new disks?
Bit confused about the need to keep an existing drive, though. Does that mean I should replace three of the disks, boot it, create a new array, then reboot it, put the fourth new drive in and once again create an array a second time?
The way I would approach this would be to remove all the data before doing any of the work, as well as removing the RAID 0 set from the drives, leaving you with raw disks.
I would then power down the unit, remove 3 of the drives, place one of the new drives in the unit and power it back on. When you get to the console it should tell you that the drive needs to be initialised; do that and let it put the OS onto the drive (essentially the IX4 OS is striped across the drives but will work from a single drive).
Once the drive has been built up I would power down the unit again and either remove the first drive or place a second new drive into the unit and go through the process again. Eventually you should have all 4 new drives in the unit with just the OS partition installed; at this point you can create your new RAID set.
Initialising the RAID set will take its time, so be patient (days rather than hours).
You don't want a RAID set on the disks while you're replacing them, because it would take weeks to install and initialise all the drives successfully.
As far as the spec of the drives is concerned, you 'should' be able to place larger drives in there, but it was hit and miss for me whether drives would work. As it stands I would probably consider either the WD Red or Seagate NAS drives, because they are designed for NAS usage (although the IX4 isn't aware of the drives themselves).
As an alternative I do have links to a couple of Acronis images that allow you to restore the OS partition to the new disk but I would probably go down that route only if you trash the system and need to recover.
To recap, destroy the array, power down, pull 3 disks, replace with one, power up and let it add the new disk to the system, rinse and repeat for the remaining drives. Create array, go on holiday and come back to put the data back on.
I'm going to try and get some cheap ST32000542AS drives to make it an 8TB unit, doubling my current capacity, seeing as those drives have been used in the 8TB variants of the ix4 previously.
Wow so the OS of the ix4 actually puts some of its own stuff onto those disks and relies on it.
I presume that means if all four disks failed it'd be bricked? (Without a big pain in the backside of having to get an image of a working ix4 disk from somewhere, anyway.)
Anybody tested to see if the new firmware that came out for the IX4-300d would work on the IX4-200d???
Knowing how flaky the IX4-200d is I wouldn’t attempt to do that (even putting the Cloud firmware on a non-Cloud device can brick it).
I've just had a drive fail on my ix4-200d and it's out of warranty, so it looks like I might as well upgrade while I'm replacing drives. Is there any reason not to just go to 4x3TB drives? Does it matter which drive you leave in at first? Everyone seems to say leave the first drive in, but that's the one that's failed for me. I'm guessing that it doesn't really matter which one it is as long as you put the new ones in one at a time?