As I am getting ready to upgrade my hosting servers, I’ve been doing some research on how to get a mirrored RAID-1 array set back up.
Back when the Paris server was built as the main server, it was set up with two hard drives in a RAID-1 array using the BIOS RAID on an nVidia RAID motherboard.
Well, this kind of RAID array is not a hardware RAID array; it is really a software RAID array. The motherboard firmware does provide an nVidia RAID Configuration Utility, which you reach by pressing F10 once the BIOS screen finishes. That is handy, and that is where you set up your RAID array.
Of course, the drivers for this are made for Windows and are called nVidia MediaShield. You have to install these drivers to make use of the RAID array, because this type of on-board nVidia RAID requires both the installed software and the functions in the BIOS. Hence, most people call this a FakeRAID or SoftRAID setup. Unless you shell out a large sum of money for a real hardware RAID card with onboard processing and memory, you are stuck with this low-cost alternative.
FakeRAID setups rely on the main processor to do most of their work. That is fine in today’s world of 3.0 GHz dual-core and quad-core processors. It is also fine if you are running Windows with nVidia MediaShield installed, where you can manage your array and ensure it is in good health.
But it is a little different in the Linux world. The issues below originally happened while I was running Ubuntu 8.04, which shipped an older 2006 build of dmraid (you can check your version of dmraid by typing ‘dmraid --version’ at a terminal).
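If you want to check before doing anything else, the command is simple, and the output includes the release date, so it is easy to tell an old build from a new one. On my 8.04 install it reported a 2006 release:

    # Print the installed dmraid release (the date in the output tells you how old it is)
    dmraid --version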
Well, I already have a full server install on my PC. All of the documentation on how to set up a FakeRAID array in Ubuntu is aimed at folks who want to install Ubuntu fresh, not at adding the array after the fact like I need to do. I’m sure I’ll have another post on this once my new hardware arrives and I go through the process, as I have a fairly good idea of how I can set up a FakeRAID on Ubuntu after installing.
So, I just wanted to see what ‘dmraid’ was all about, and did the usual ‘sudo apt-get install dmraid’ at the command prompt.
Very strange: after I installed it, I issued a ‘dmraid -r’ command and it showed that I had one drive in a mirror (the only drive in the system currently). Hmm, I never set up a mirror on this drive, so how could this be possible? Well, it probably goes back to when I set up the Decatur server: I took the mirrored drive out of the Paris server and put it in the Decatur server, which broke the RAID-1 array on the Paris server. Because of that, the RAID metadata was most likely still sitting on the hard drive that I put in the Decatur server.
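For reference, this is roughly what that looked like. The device name and set name below are placeholders for illustration, not copied from my server; the telling part is a single disk reporting itself as part of an nVidia mirror:

    # List the block devices that carry RAID metadata dmraid recognizes
    sudo dmraid -r
    # Illustrative output - one lone disk still flagged as half of a mirror:
    # /dev/sda: nvidia, "nvidia_bggfdhec", mirror, ok, 976773166 sectors, data@ 0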
Well, I didn’t think this would be a problem. I was cleaning up the server while getting ready for the new RAID cage and hard drives to show up, and I shut the server down. Upon reboot, the server would not boot! It would get to the Grub menu, dmraid would report an ERROR about a degraded array, and when Grub tried to continue the boot process it stopped with ‘device or resource busy’. All of this started after installing dmraid.
I searched everywhere I could on this issue, but there just wasn’t a solution to be found. In my Grub menu.lst file, the boot entries reference the hard drive by its UUID, and menu.lst was not changed by installing dmraid, yet the computer still would not boot.
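For anyone unfamiliar with it, an Ubuntu-generated menu.lst entry looks roughly like this; the kernel version and UUID here are placeholders, not my actual values:

    title   Ubuntu 8.04, kernel 2.6.24-19-server
    root    (hd0,0)
    kernel  /boot/vmlinuz-2.6.24-19-server root=UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx ro quiet splash
    initrd  /boot/initrd.img-2.6.24-19-server

Since the UUID identifies the filesystem rather than a device path, nothing in this file needed to change when dmraid was installed.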
I went into my BIOS and checked things. Sure enough, I had the SATA controller in Normal mode and not in RAID mode. I was confused.
So I decided to go ahead and change the mode to RAID and make sure SATA-1 was enabled for use in RAID. Upon reboot, I got the nVidia RAID Configuration notice to press F10, along with blinking red letters indicating my RAID array was degraded. Hmm, maybe I am on to something now.
In the nVidia RAID Configuration Utility, it showed that the drive was part of a mirror. Again, this was likely caused by the drive having been used in a RAID array in the Paris server and still carrying the RAID metadata. So I deleted the RAID array, left the MBR and data intact, and rebooted.
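As a side note, I’ve since read that you can also clear stale metadata from within Linux using dmraid itself, without going into the BIOS utility. I have not tried this on my own hardware, so treat it as a sketch and double-check the target device before wiping anything:

    # Show which disks dmraid believes carry RAID metadata
    sudo dmraid -r
    # Erase the on-disk RAID metadata from the stale drive (it prompts before wiping);
    # /dev/sdX is a placeholder for the disk that still shows the old nVidia metadata
    sudo dmraid -r -E /dev/sdX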
Success! Now when Grub loaded, dmraid said ‘NO RAID DISKS’ and then the server continued to boot up!
So what happened here? It seems that dmraid will take over any disk that carries RAID metadata and assign it to a /dev/mapper/……. device. Since dmraid was loaded before Grub started the full boot-up process, the hard drive was already claimed and in use, hence the ‘device or resource busy’ error message.
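You can see this for yourself once the system is up. Nothing here is specific to my setup; any machine where dmraid has claimed a disk should show something similar:

    # Show the RAID sets dmraid has activated (active sets get device-mapper nodes)
    sudo dmraid -s
    # List the device-mapper nodes; a claimed disk appears under /dev/mapper/
    ls -l /dev/mapper/
    # While the disk is claimed this way, anything else that tries to grab the raw
    # /dev/sda device will get the 'device or resource busy' error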
Today I upgraded the Decatur server from Ubuntu 8.04 to Lucid 10.04. While there were a few things that needed fixing (like websites running Joomla 1.0.x versions that are incompatible with PHP 5.3) and other small configuration changes here and there, the upgrade process went smoothly. Lucid 10.04 also includes a much newer version of dmraid: 1.0.0.rc16 (2009.09.16). I’ve heard that the old version I was running had problems trying to rebuild RAID arrays on the nVidia FakeRAID, and that this is supposedly fixed in the new version, although I’ve not been able to confirm that from anyone.
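For what it’s worth, the newer dmraid is supposed to be able to kick off a rebuild of a degraded set from the command line. I have not tested this myself yet, and the set name and device below are placeholders, so take it as a sketch of what I expect to try rather than a verified procedure:

    # List the RAID sets and their state; a degraded mirror should show up here
    sudo dmraid -s
    # Ask dmraid to rebuild the named set onto the replacement drive
    # ("nvidia_xxxxxxxx" and /dev/sdb are placeholders, not values from my servers)
    sudo dmraid -R nvidia_xxxxxxxx /dev/sdb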
While a hardware RAID solution is by far the best option, I’m not going to throw $100+ into a RAID card for each server to get it. I’m just hoping for the best with FakeRAID in Ubuntu and that all of these little glitches and problems have been taken care of. The whole point of using a RAID-1 array is to have that backup mirror, but if you cannot rebuild the mirror after a drive fails, what is the point? I guess it will at least protect you from that first hard drive failure.