So… which type of RAID array and RAID controller do you prefer?
My answer is -
HP - because if you have two identical HP machines and one gets damaged (except for its hard drives), you can take out the entire RAID array, plonk it into the other machine, and it will start up, no questions asked by the RAID controller. Windows may moan about new hardware and re-activation, but that’s it. (Not sure if other RAID controllers will allow this.)
Intel RAID - it performs a regular patrol read, and I think this is a crucial point in preventing a rebuilding RAID-5 from borking itself. I’ve had several Intel servers with Intel RAID controllers, all of them have suffered a degraded RAID at one point in time, and yet I have had successful RAID rebuilds with RAID5 and RAID6 setups.
Of course, the drawback of having a branded RAID controller and associated setup is that you absolutely must find a compatible RAID controller should the original controller freak out (and hopefully not corrupt your data)…
Software RAID I’ve had very little experience with, but it does work great - but then you’ll need to be extremely sure you swap out the correct disk! Hardware RAID with a hot-swap cage has the benefit of showing you which drive’s borked. (HP hard drives have LED indicators on them showing you which drive’s OK and which isn’t.)
So, over to you guys.
The one I don’t have to screw with.
The one you can hide beer in.
Whatever you use, I agree that hot-swap is critical.
On low-end systems I stick with software RAID, because it’s usually good enough as long as you either document well or have an OS that can identify which bay the disk is in (assuming the cage is correctly cabled, of course). I run software RAID on my home server, with a hot-swap cage. I just made sure I wired it up so that port 0 is the leftmost slot, and the rest go up in order.
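Before pulling anything with software RAID, it’s worth confirming which member md itself thinks has failed. A minimal sketch, parsing a sample /proc/mdstat snippet (the array name, device names, and sizes here are made up for illustration; on a live box you’d read /proc/mdstat directly):

```shell
# Hypothetical /proc/mdstat excerpt; mdadm marks failed members with "(F)".
mdstat='md0 : active raid5 sdd1[3] sdc1[2](F) sdb1[1] sda1[0]
      5860147200 blocks level 5, 512k chunk, algorithm 2 [4/3] [UU_U]'

# Pull out the device name of the failed member, if any.
failed=$(printf '%s\n' "$mdstat" | grep -o '[a-z]*[0-9]*\[[0-9]*\](F)' | cut -d'[' -f1)
echo "failed member: ${failed:-none}"
```

Once you have the device name, `ls -l /dev/disk/by-path` can tell you which controller port it hangs off, which only helps if the cage is cabled in order - exactly why I wired port 0 to the leftmost slot.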
For $brand servers I always end up with whatever they ship, which usually means a rebadged LSI or IBM RAID card. I’m personally quite happy with Areca, LSI or IBM - somewhat depending on what OS I’m using. The challenge at that point is usually in making sure that the cage supports disk failure indicators.
Then there’s the whole mess of fake-RAID cards to be carefully avoided once you work out which ones they are.
And another thing - the Intel RAID controllers have a very annoying buzzer which loudly complains as soon as something goes wrong, whether it’s a disk failure or something else - which is actually a good thing, as it lets you know to check it out ASAP.
“Quiet” RAID controllers with no means of notification (email, buzzer, screen popup) should be discarded, as these will lead to data loss when you least expect it (or can least afford it).
And don’t blacklist the emails coming from said RAID controller… or have these go to the spam/junk mail folder…
If it has a piezoelectric speaker, you can make it a little quieter by putting a few layers of Scotch/cellophane tape over it.
Audible alerts in a remote server room are rather pointless IMO. I’m probably going to have ear protection on, and even if I don’t value my hearing, not everything in the room is “mine”. I’d much rather have something with flashing lights that I can query over IPMI and with command-line tools. The visual indicator is great for the times I’m walking the racks; the tools are for automated monitoring and alerting.
For home networks similar logic applies: I don’t want audible alerts, because having one trigger at 02:00 will result in my sleeping in the dog house (possibly literally). A flashing light I may notice, but because I run automated monitoring (Xymon) I’ll at least get notified about failures (I used to use email, now PushBullet).
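The kind of check a monitor like Xymon (or plain cron plus mail) can run is simple: compare the expected member count against the active count in /proc/mdstat. A rough sketch, using a made-up sample line in place of the real file (on a real host you’d cat /proc/mdstat, and the alert would go to your notification channel rather than stdout):

```shell
# Hypothetical /proc/mdstat line; "[6/5]" means 6 members expected, 5 active.
status='md0 : active raid6 sde1[4] sdd1[3] sdc1[2] sdb1[1] sda1[0]
      11720294400 blocks level 6, 512k chunk, algorithm 2 [6/5] [UUUUU_]'

# Extract the expected/active counts from the "[n/m]" field.
counts=$(printf '%s\n' "$status" | grep -o '\[[0-9]*/[0-9]*\]' | tr -d '[]')
expected=${counts%/*}
active=${counts#*/}

if [ "$active" -lt "$expected" ]; then
  alert="RAID degraded: $active of $expected members active"
else
  alert="RAID OK"
fi
echo "$alert"
```

Hooking the non-OK branch into email, a Xymon status report, or a PushBullet push is then just a one-line change at the end.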