[[Category: RAID Controller]]
 
== Growing/Expanding LMCE Software Raid ==
 
Here is the method that I used to expand my 3-disc software Raid 5 to a 4-disc software Raid 5. This worked for me; however, any time that you mess with a file system, you stand a chance of losing data! Even more so, performing any kind of file system operation on a mounted drive is very dangerous. Please study these steps carefully and understand the risks involved!
 
1) The first thing I did was to go to the web admin, Advanced->Configuration->Raid. This page gives you a lot of useful information. The information I was most interested in was the "Block Device" for the RAID array that I wanted to grow. Mine was /dev/md0. PLEASE NOTE that anywhere in these instructions that I refer to /dev/md0, you will need to replace it with your own block device (yours, for instance, may be /dev/md1). You will need to know yours for the following steps!
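
If you prefer to double-check the block device from a shell, mdadm can report the same information. This is just a sanity check, not part of the web admin procedure, and /dev/md0 below is an example name:

 # list all software raid arrays and their member disks
 cat /proc/mdstat
 # show the detailed state of one array (substitute your own block device)
 mdadm --detail /dev/md0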


2) I powered off my core (even though SATA is supposed to support hot-plugging), and installed the new drive.


3) I started the core back up and went back to the web admin, Advanced->Configuration->Raid.
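
Before going on, it is worth confirming that the kernel actually detected the new drive. A hedged example, assuming the new disk came up as /dev/sdd:

 # list all disks the kernel knows about; the new one should appear
 fdisk -l
 # or search the kernel log for its detection messages
 dmesg | grep -i sdd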


4) Under the "Action" column for the raid device that you want to grow, click on the "Devices" button.


5) At the bottom of the page, select the new drive from the "Add Drive" dropdown. Make sure the "as spare disk" checkbox is checked.


6) After a few moments you should see that the disk was added as a spare.
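
Steps 4-6 can also be done from a shell if you prefer. Since the array is already complete, mdadm will treat a newly added disk as a spare. The device names below are examples, not the exact LinuxMCE procedure:

 # add the new disk to the array; on a complete array it becomes a spare
 mdadm --add /dev/md0 /dev/sdd

You can then confirm that it is listed as a spare with mdadm --detail /dev/md0.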


7) Now reboot your core. When the GRUB menu comes up, hit escape and choose recovery mode. It is important to do the following steps while in recovery mode to reduce the chance of data loss!


8) First, let's grow the raid to 4 disks (in my case). I did this by issuing the command:

 mdadm --grow /dev/md0 --raid-disks=4

Again, keep in mind that you may have a different /dev/md#, and of course I used --raid-disks=4 because I was expanding to 4 disks. Use the appropriate number for your situation.
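
mdadm also accepts a --backup-file option during a grow; it saves the small critical section of data being rearranged so that an interrupted reshape can be resumed after a crash or power failure. A sketch with an example path; the backup file must live on a disk that is not part of the array being grown:

 # same grow operation, but resumable if interrupted
 mdadm --grow /dev/md0 --raid-disks=4 --backup-file=/root/md0-grow.bak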


9) Be prepared for a long wait! (mine was about 21 hours with 4 x 1TB drives). At any time you can see the status of the raid reshaping itself by typing:

 cat /proc/mdstat

It will show the percentage done as well as an estimated time until finished.
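
If you would rather not re-run the command by hand, the watch utility can refresh it for you:

 # re-display the raid status every 60 seconds; Ctrl-C to quit
 watch -n 60 cat /proc/mdstat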


10) Once the reshape is finished, you should have a 4-drive array! However, the filesystem still needs to be resized; we will deal with that in the following steps.


11) The next step is to run a filesystem check. This is also the time to unmount the raid array! This part should only take 1-3 hours...

 umount /dev/md0
 e2fsck -f /dev/md0


12) Hopefully that will have reported no errors. If it did report errors, you will have to do some research before going further; otherwise, continue to the last step to resize the filesystem.


13) Again, it is important to have the raid array unmounted! If you have rebooted since the last step, I highly recommend unmounting it again.
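
A quick way to check whether the array is still mounted before you proceed (again, /dev/md0 is an example):

 # prints a line if /dev/md0 is mounted; no output means it is not
 mount | grep /dev/md0
 # unmount it if it shows up
 umount /dev/md0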


14) Resize the filesystem to the full size of the array. This shouldn't take too long...

 resize2fs /dev/md0
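
Once resize2fs finishes, you can reboot normally and confirm the new capacity. The mount point depends on your setup, so this is just one way to check:

 # the raid filesystem should now show the full 4-disk capacity
 df -h | grep md0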