Installing Ubuntu 14.04 on RAID 1 and LVM

So you want an Ubuntu server machine running with RAID1 for reliability and LVM for flexibility.

TIP: Play around with this on a virtual machine. It only takes 10 minutes for a new install.

I used a VirtualBox guest. I created a test machine: 2 CPUs, 2048MB RAM, and 2 x 25GB hard disks.

Overview

The system disks will be split into three primary partitions (a shell sketch of this layout follows the list).

  • boot
    • about 750MB
    • mount point /boot
    • File system EXT4
    • Bootable
    • /dev/sda1 + /dev/sdb1 = /dev/md0
  • root
    • about 24GB
    • LVM volume group vg0
    • LVM logical volume lv_root
    • mount point / (root)
    • File system EXT4
    • /dev/sda2 + /dev/sdb2 = /dev/md1
  • swap
    • about 2GB
    • File system Swap
    • /dev/sda3 + /dev/sdb3 = /dev/md2
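
For reference, the same layout expressed as shell commands looks roughly like the sketch below. This is not part of the installer flow, just an illustration of what we are about to build; the partition boundaries are my approximation of the sizes above.

    # Sketch only: the installer does all of this for you interactively.
    # MBR partition table plus three primary partitions on the first disk.
    sudo parted --script /dev/sda mklabel msdos
    sudo parted --script /dev/sda mkpart primary 1MiB 800MB     # /boot, about 0.8GB
    sudo parted --script /dev/sda mkpart primary 800MB 24.8GB   # root, about 24GB
    sudo parted --script /dev/sda mkpart primary 24.8GB 100%    # swap, the rest
    sudo parted --script /dev/sda set 1 raid on                 # use as RAID member
    sudo parted --script /dev/sda set 1 boot on                 # bootable flag
    sudo parted --script /dev/sda set 2 raid on
    sudo parted --script /dev/sda set 3 raid on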

Installation

Starting with the DVD/CD for Ubuntu Server, install as usual until you reach the disk partitioning step. I am assuming you can do that. If not, go and learn the basics first!

So you have chosen the language and territory, set the keyboard layout, and so on.

Partitioning the Disks

After a short wait you will get to the Partition Disks menu; select Manual.

[Screenshot: raid1_01]

Add a Disk Partition Table

Select the first hard disk. As it is brand new, we need to add a partition table to it.

[Screenshot: raid1_02]

Press return, then on the next menu select Yes and press return again.

[Screenshot: raid1_03]

We now have a partition table on the first disk.

[Screenshot: raid1_04]

Repeat those steps to create the partition table on the second disk.

Create the Partitions

[Screenshot: raid1_05]

We need to start creating the partitions. Highlight the FREE SPACE on the first disk and press return.

[Screenshot: raid1_06]

Create a new partition for /boot; I used an odd 0.8GB for this.

[Screenshot: raid1_07]

I only use primary partitions here, but you can use primary or logical/extended; it makes little difference for this example.

[Screenshot: raid1_08]

Put it at the beginning of the free disk space or not; it is your choice.

[Screenshot: raid1_09]

Prepare for RAID

Highlight the Use as option and press return, selecting physical volume for RAID, and set the Bootable flag to on. It should look as mine did below. Select Done setting up the partition to return to the Partition Disks main menu.

[Screenshot: raid1_10]

Use a similar process for the root partition. Select the remaining free space on disk one and create a partition; this time I used 24GB, Primary, and Beginning. Set Use as to physical volume for RAID.

[Screenshot: raid1_11]

[Screenshot: raid1_12]

For the swap partition use all the remaining disk space, about 2GB. Again set Use as to physical volume for RAID.

[Screenshot: raid1_13]

Repeat the above, making an exact copy on the second drive; now you know why I chose the values I did :-). (A one-line shell shortcut for this follows.)
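
If you were doing this from a shell rather than the installer, the usual shortcut for cloning an MBR partition table from the first disk to the second is something like:

    # Sketch: copy sda's partition table onto sdb.
    # Careful: this overwrites whatever partition table sdb already has.
    sudo sfdisk -d /dev/sda | sudo sfdisk /dev/sdb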

The disks should look similar to the following:

[Screenshot: raid1_14]

Create the MD Devices

Now to set up our three RAID 1 arrays, /dev/md[012]. Highlight Configure RAID and press return, then write the changes to the storage devices.

To create our RAID 1 MD devices, select the partitions in pairs: sda1 & sdb1, sda2 & sdb2, etc. Each MD device will have 2 active devices and zero spares. The screenshot shows the disks for the boot partition. Repeat for the other two devices and then select Finish.

[Screenshot: raid1_15]
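
Under the hood, what the installer builds here is roughly equivalent to running mdadm yourself, something like:

    # Sketch of the equivalent mdadm commands: three RAID 1 arrays,
    # each with 2 active devices and 0 spares.
    sudo mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
    sudo mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sda2 /dev/sdb2
    sudo mdadm --create /dev/md2 --level=1 --raid-devices=2 /dev/sda3 /dev/sdb3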

Back at the main menu again, it should look similar to the following:

[Screenshot: raid1_16]

Add LVM to the root Partition

Select Configure the Logical Volume Manager and again write the changes to disk to continue. Create a volume group.

[Screenshot: raid1_19]

[Screenshot: raid1_20]

I called mine vg0, but you can call it anything.

[Screenshot: raid1_21]

Select /dev/md1; this is our root partition, which will use LVM so we can snapshot it for backups.

[Screenshot: raid1_22]

Create a logical volume, lv_root, on the volume group vg0.

[Screenshot: raid1_23]

[Screenshot: raid1_24]

[Screenshot: raid1_25]

We need to leave some space for the snapshot, so I chose to use 20978MB: I changed the second digit of the suggested size from a 3 to a zero. 🙂

[Screenshot: raid1_26]

[Screenshot: raid1_27]

Finish and go back to the main menu again.
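
For reference, the shell equivalent of these LVM steps is roughly the following sketch; the snapshot line at the end is just an illustration of what the spare space is for.

    # Sketch: make md1 an LVM physical volume, create volume group vg0 on it,
    # then carve out the root logical volume, leaving room for a snapshot.
    sudo pvcreate /dev/md1
    sudo vgcreate vg0 /dev/md1
    sudo lvcreate -L 20978M -n lv_root vg0

    # Later, a backup snapshot can use the spare space, e.g.:
    sudo lvcreate -s -L 2G -n lv_root_snap /dev/vg0/lv_root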

Assign the Filesystems

Highlight the line under LVM VG vg0, LV lv_root to configure our root partition's file system. Set it up as an EXT4 file system with a mount point of / (root). When it is all done it should look a bit like this:

[Screenshot: raid1_28]

The last two are a lot simpler 🙂 yay!

Highlight the line under RAID device #0. This is the unused space on the device. Set it up as an EXT4 file system with a mount point of /boot.

[Screenshot: raid1_17]

Now set up the swap partition on RAID device #2; it is a similar process to the /boot partition.

[Screenshot: raid1_18]

Overall it should look like this:

[Screenshot: raid1_29]
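
Outside the installer, the equivalent formatting commands would be something like:

    # Sketch: format the devices as chosen above.
    sudo mkfs.ext4 /dev/vg0/lv_root   # root filesystem on the logical volume
    sudo mkfs.ext4 /dev/md0           # /boot on the first RAID array
    sudo mkswap /dev/md2              # swap on the third RAID array
    sudo swapon /dev/md2              # and enable it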

Sometimes when setting up the LVM you have to redo the /boot and swap partitions if you configured those first; it’s rather annoying.

Finish with the partitioning and save the changes to disk.

Now complete the install as you would any other. I only select OpenSSH server from the package selection; it makes the initial install much quicker. (If you break it, you don’t have to wait so long for it to reinstall.)

Set Up Monitoring and Alerts

If you run the command below, it will configure some default behaviour for monitoring and checking the arrays.

    sudo dpkg-reconfigure mdadm
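
To actually receive failure alerts by e-mail, /etc/mdadm/mdadm.conf needs a valid MAILADDR line (the reconfigure step above can set one), and you can send yourself a test alert to prove mail delivery works. For example:

    # Where will mdadm send its alerts?
    grep MAILADDR /etc/mdadm/mdadm.conf

    # Send a one-off test alert for every array
    sudo mdadm --monitor --scan --test --oneshot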
    

Manual Checking

To see that it is all running okay, there are some commands you can use.

Use the df command to see the mounted filesystems and which devices they are on.

    df -h
    
    Filesystem               Size  Used Avail Use% Mounted on
    /dev/mapper/vg0-lv_root   20G  1.1G   18G   7% /
    none                     4.0K     0  4.0K   0% /sys/fs/cgroup
    udev                     990M  4.0K  990M   1% /dev
    tmpfs                    201M  492K  200M   1% /run
    none                     5.0M     0  5.0M   0% /run/lock
    none                    1001M     0 1001M   0% /run/shm
    none                     100M     0  100M   0% /run/user
    /dev/md0                 734M   37M  644M   6% /boot
    

You can see here that the root file system ‘/’ is on the device ‘/dev/mapper/vg0-lv_root’, which comes from the volume group ‘vg0’ and the logical volume ‘lv_root’.

‘/boot’ is on the device ‘/dev/md0’.

To check the swap:

    swapon -s
    
    Filename				Type		Size	Used	Priority
    /dev/md2                                partition	1993660	0	-1
    

You can see it is on the device /dev/md2 and is about 2GB.

If you are quick enough, or your disks are bigger than 25GB, you may even catch the RAID 1 arrays rebuilding; in the output below, md1 was caught mid-recovery.

    cat /proc/mdstat
    
    Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10] 
    md1 : active raid1 sda2[0] sdb2[1]
          23420800 blocks super 1.2 [2/2] [UU]
          [=======>.............]  recovery = 38.6% (9057024/23420800) finish=1.4min speed=168128K/sec
    
    md0 : active raid1 sda1[0] sdb1[1]
          779712 blocks super 1.2 [2/2] [UU]
          
    md2 : active raid1 sda3[0] sdb3[1]
          1993664 blocks super 1.2 [2/2] [UU]
          
    unused devices: <none>
    

If you want to watch an array rebuild itself, use the following command:

    watch -n1 cat /proc/mdstat
    

Taking md1 above as an example, the things to look out for are that two devices, sda2[0] and sdb2[1], are listed; the number in square brackets is the device’s position in the array. On the second line we see ‘[2/2]’: this means there should be 2 devices in the array and there are 2 actually in it. The ‘[UU]’ shows the same information per device: ‘U’ means the device is up, while a failed or missing device shows as ‘_’.
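
For contrast, if the first device in md1 had failed, the entry would look something like this (illustrative output only, not from this machine):

    md1 : active raid1 sda2[0](F) sdb2[1]
          23420800 blocks super 1.2 [2/1] [_U]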

To view the status of one or more arrays:

    sudo mdadm -D /dev/md0
    
    /dev/md0:
            Version : 1.2
      Creation Time : Mon Apr  6 13:50:03 2015
         Raid Level : raid1
         Array Size : 779712 (761.57 MiB 798.43 MB)
      Used Dev Size : 779712 (761.57 MiB 798.43 MB)
       Raid Devices : 2
      Total Devices : 2
        Persistence : Superblock is persistent
    
        Update Time : Mon Apr  6 14:41:15 2015
              State : clean 
     Active Devices : 2
    Working Devices : 2
     Failed Devices : 0
      Spare Devices : 0
    
               Name : raid-test:0  (local to host raid-test)
               UUID : 90cd04d4:b6754cba:52e2b88d:8749942c
             Events : 17
    
        Number   Major   Minor   RaidDevice State
           0       8        1        0      active sync   /dev/sda1
           1       8       17        1      active sync   /dev/sdb1
    

To check the status of a disk in an array:

    sudo mdadm -E /dev/sda1
    
    /dev/sda1:
              Magic : a92b4efc
            Version : 1.2
        Feature Map : 0x0
         Array UUID : 90cd04d4:b6754cba:52e2b88d:8749942c
               Name : raid-test:0  (local to host raid-test)
      Creation Time : Mon Apr  6 13:50:03 2015
         Raid Level : raid1
       Raid Devices : 2
    
     Avail Dev Size : 1559552 (761.63 MiB 798.49 MB)
         Array Size : 779712 (761.57 MiB 798.43 MB)
      Used Dev Size : 1559424 (761.57 MiB 798.43 MB)
        Data Offset : 1024 sectors
       Super Offset : 8 sectors
              State : clean
        Device UUID : 5934ad36:e16b82ac:e0eb32c1:202d3ec8
    
        Update Time : Mon Apr  6 14:41:15 2015
           Checksum : e0a03142 - correct
             Events : 17
    
    
       Device Role : Active device 0
       Array State : AA ('A' == active, '.' == missing)
    

That’s it. A little long-winded maybe, but really not that hard, the second time.

Oh wait, what happens WHEN a disk fails? That is why we wanted to use RAID!

See my next post in the series, Recovering from a RAID1 disk failure on Ubuntu 14.04.

10 thoughts on “Installing Ubuntu 14.04 on RAID 1 and LVM”

  1. Omar Cornejo

    I am sorry to bother you, but I have found your website very helpful and I was able to use it to configure a system.
    I am currently running into a problem following the same recipe. When I try to set up the RAID system and create the first partition in your example, the installer does not allow me to set the physical volume for RAID as bootable. When I try to change the bootable flag to “on”, the system does not allow the change. Would you know a workaround for this problem?

    thanks,
    Omar

    1. Richard Post author

      I just ran through the posting again using Ubuntu 14.04. As you create the boot partition, you select Use as: physical volume for RAID. Then just highlight the Bootable flag option and it toggles on and off. I have not tried with later versions, which may have changed.

    2. Richard Post author

      I am glad the post helped and you worked it out. The screens change over time but the basic principles stay the same.

  2. Chris Smith

    Thank you for these excellent instructions; I have used them to install Debian 8 in RAID 1 on my server.

  3. MAthew

    Hi,
    I have installed Debian 8.7.1 with your tutorial. When it finished, it asked me where to install grub. What should I give it? /dev/sda? /dev/sdb? Or manual?
    When I put /dev/sda and then disconnect /dev/sda, the system does not boot.
    So, what is wrong?

    1. Richard Post author

      This is an old tutorial and the installer has moved on, but it sounds like you are booting from a RAID array. You have installed grub to /dev/sda, which is only one half of the array. Grub installs its files into the MBR, which is not part of the RAID array, so you also need to install it to /dev/sdb. Then when your array fails and you remove one half, it can still boot. Hope that helps.
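
      For example, on a BIOS/MBR setup like the one in this post, something like this should do it:

          # Sketch: put grub's boot code in the MBR of both disks,
          # so the machine can boot from either half of the mirror.
          sudo grub-install /dev/sda
          sudo grub-install /dev/sdb

          # On Debian/Ubuntu you can also select both disks with:
          sudo dpkg-reconfigure grub-pc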

