Linux RAID-1

Well, I have made a start on testing the RAID configuration for my main PC. After looking at various scenarios, the one that fits closest is incorporated into the documentation at this site:

In addition, the following was referenced for some of the initial setup steps:
To make a start, I have already (at install time) set up two partitions of identical size (320 GB each), using a 320 GB disk and a 500 GB disk. The 500 GB disk was further partitioned to provide the swap partition and the root partition (mount point /) for the Linux Mint install on this computer. Mint was installed, and I am now working in the Mint installation to set up the RAID-1. The partitions are /dev/sda2 and /dev/sdb1.
Referring to the second reference document (linuxconfig.org), we work through section 3, installing mdadm, then section 4.1, loading the kernel modules. In other words, we have the raid1 and md modules loaded at boot by adding lines for them to the /etc/modules file. The system is then rebooted to load the modules. Running cat /proc/mdstat now reveals that we have RAID-1 support loaded.
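The module setup described above can be sketched as follows (the module names raid1 and md_mod are assumptions based on recent kernels; check what your kernel calls them):

```shell
# Append the RAID modules to /etc/modules so they are loaded at boot
# (module names assumed: raid1 and md_mod; adjust to match your kernel)
echo raid1  | sudo tee -a /etc/modules
echo md_mod | sudo tee -a /etc/modules

# After rebooting, confirm RAID-1 support is available
cat /proc/mdstat
```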
We then go to the first reference document (raid.wiki.kernel.org) and go to section 1.5.3 to create a RAID-1 array. In this case the command looks like:

sudo mdadm --create --verbose /dev/md0 --level=mirror --raid-devices=2 /dev/sda2 /dev/sdb1

The next step after the command completes is to look at /proc/mdstat again and read the output, which tells us that the RAID-1 array has been created and gives an estimate of how long the initialisation will take to complete. In my case I also opened the Disks applet from the Mint menu to see how the new disk shows up there. It appears categorised as a RAID array with all the right information, and gives me a graphical representation of the syncing process. At this stage I elected to use the GUI to format and name the new drive, and was then able to mount it even though the rebuild wasn't yet complete. It therefore appears as a new 320 GB volume, but doesn't have any special standing because it hasn't been mounted within the regular filesystem tree of the installation.
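For reference, a rough command-line equivalent of what the Disks applet did here would look something like the following (the device name, label, and mount point are assumptions; this is a sketch, not the exact GUI behaviour):

```shell
# Format the new array (destructive; the label is just an example)
sudo mkfs.ext4 -L raidvol /dev/md0

# Mount it at a temporary location
sudo mkdir -p /mnt/raidvol
sudo mount /dev/md0 /mnt/raidvol

# Watch the sync progress, the same information the GUI displays
cat /proc/mdstat
```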

Once the initialisation is complete there is one more step, and that is to save the RAID configuration. After switching to root (by issuing the command su) we can run the following:

mdadm --detail --scan >> /etc/mdadm/mdadm.conf

The reboot that followed proved that the volume was recognised again on startup.
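To give the volume proper standing in the filesystem tree and have it mounted automatically at boot, an /etc/fstab entry along these lines can be added (the UUID and mount point shown are placeholders; read the real UUID with blkid):

```shell
# Find the array's filesystem UUID
sudo blkid /dev/md0

# Then add a line like this to /etc/fstab (UUID here is hypothetical):
# UUID=0a1b2c3d-...  /mnt/raidvol  ext4  defaults  0  2
```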

Now that we have our RAID running, the next issue to be tackled is how to relocate /home. In the case of my real-world system, this relocation will have to occur twice: firstly, /home will have to go temporarily to another disk, and secondly, after creating the RAID-1 array, /home gets moved back onto the new RAID volume. Here is some documentation I have looked at so far:
For the test I will put some data in place to be copied across from the existing location within the install partition.
For the real thing I need to undertake some steps as follows:
  • Run a full backup, probably not with the Windows-based backup but a native Linux one, to a removable disk.
  • Find a disk from somewhere to be the temporary home of /home and put it into the removable drive bay of the main PC.
  • Move /home from the internal drive to the removable drive.
  • Set up the RAID-1 array using the two internal disks.
  • Move /home from the removable drive to the new RAID-1 array.
So that will gradually happen over the next few weeks, depending on how long it takes to run my test case on the demo machine.
