Software RAID in Ubuntu Karmic 9.10

Posted by Admin • Monday, February 15, 2010 • Category: Linux

I am writing this down because it was somewhat hard to figure out how many of the HOWTOs out there are out of date. This is not particularly difficult, but it's my first RAID setup and this blog is my notepad. I am setting up a RAID1 on a Dell Precision 490 with two brand-new 500GB SATA drives.

First I tried using BIOS RAID. My system doesn't have a true RAID controller card, and after some trial, error, and googling I decided to forget it and go with industry-standard (MD) Linux software RAID. I reset my drives to non-RAID in the BIOS, popped in the Ubuntu Server x64 CD and went on ahead.
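One possible gotcha after a BIOS RAID experiment: the fakeraid metadata can linger on the disks even after switching them back to non-RAID. The dmraid tool can show it and, if I'm remembering its flags right (double-check the man page before erasing anything), remove it:

# List any fakeraid (BIOS RAID) metadata still present on the disks
dmraid -r

# Erase that metadata so the installer sees plain disks (destructive!)
dmraid -r -E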

Installing

Once it got to the partitioning stage, I followed the instructions here. They are mostly valid for 9.10. Specifically, I followed the "Partitioning the disk", "Configuring the RAID" and "Formatting" sections pretty much to the letter. I then let the installer finish and, amazingly enough, the system came up cleanly. I skipped the remaining sections, including "Boot Loader".
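For reference, here is roughly what those installer steps boil down to if done by hand with mdadm. This is only a sketch, assuming the same two-disk (/dev/sda, /dev/sdb), three-partition layout shown in the output below, with the partitions already typed as "Linux raid autodetect" (fd):

# Build the three RAID1 arrays from matching partitions on both disks
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1   # /
mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sda2 /dev/sdb2   # swap
mdadm --create /dev/md2 --level=1 --raid-devices=2 /dev/sda3 /dev/sdb3   # /data

# Filesystems and swap go on the md devices, not on the raw partitions
mkfs.ext4 /dev/md0
mkswap    /dev/md1
mkfs.ext4 /dev/md2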

Once the system is up, md starts to sync the disks for the first time (and it will do this any time the disks fall out of sync). Running "cat /proc/mdstat" shows something like this:
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10] 
md2 : active raid1 sda3[2] sdb3[1]
      431738752 blocks [2/1] [_U]
      [==>..................]  recovery = 11.1% (47991040/431738752) finish=77.0min speed=82970K/sec
      
md1 : active raid1 sdb2[1] sda2[0]
      7815552 blocks [2/2] [UU]
      
md0 : active raid1 sdb1[1]
      48829440 blocks [2/1] [_U]
      
unused devices: <none>



In my case, md0 is my /, md1 is my swap, and md2 is my /data partition (where I stuff large things like music or virtual machine image files). Even though md2 is completely empty, the software RAID still seems to need to sync the disks - I can only assume that it's comparing each byte :-)
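For keeping an eye on the initial sync, a few generic commands - nothing here is specific to this setup except the /dev/md2 device name, and the speed_limit_max tweak is optional:

# Refresh the sync progress every few seconds
watch -n 5 cat /proc/mdstat

# More detail on a single array (state, members, rebuild progress)
mdadm --detail /dev/md2

# Optionally raise the resync speed ceiling (in KB/s) if the box is otherwise idle
echo 200000 > /proc/sys/dev/raid/speed_limit_max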

Testing

The first thing I needed to check was whether GRUB was installed on both drives - this has been the subject of some past Ubuntu bugs and problems. I pulled one drive at a time and the bootloader still came up, so the answer is yes: you no longer need to do anything special to get GRUB onto all of your RAID drives, at least with RAID1. That's the good news. Unfortunately, the system still asked me whether it should come up or drop to a maintenance shell, and it timed out to the latter. The installer had asked me whether the system should boot even if the RAID became degraded, and I said YES; clearly that choice didn't stick. So to make the change manually:

  1. Edit /etc/default/grub
  2. Set GRUB_CMDLINE_LINUX="bootdegraded=true"
  3. Run update-grub (see the example below)
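Those three steps amount to the following - the sed one-liner is just one way to make the edit, and it assumes GRUB_CMDLINE_LINUX is otherwise empty, as it is on a fresh install:

# Add the kernel option to /etc/default/grub
sed -i 's/^GRUB_CMDLINE_LINUX=.*/GRUB_CMDLINE_LINUX="bootdegraded=true"/' /etc/default/grub

# Regenerate /boot/grub/grub.cfg so the option takes effect on the next boot
update-grub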


Now you have a system that will carry on as if nothing happened should a drive fail. That means you'd better set up monitoring of the drives yourself (Nagios?).
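Until something like a proper Nagios check is in place, mdadm's own monitor mode is a reasonable stopgap - it can mail you when an array degrades. A minimal sketch, assuming the Ubuntu mdadm package's monitor daemon is enabled and that a working mail command exists on the box (the address is a placeholder):

# In /etc/mdadm/mdadm.conf - where mdadm sends degraded/failure alerts
MAILADDR root@localhost

# Send a test alert for every array once, to confirm mail delivery works
mdadm --monitor --scan --oneshot --test

# Quick and dirty degraded check for cron/Nagios: an "_" in the [UU] block
# of /proc/mdstat means a missing or failed member
grep -q '\[.*_.*\]' /proc/mdstat && echo "RAID degraded" | mail -s "RAID alert" root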
