| Version | Status  | Date Updated | Updated By |
|---------|---------|--------------|------------|
| 710     | Unknown | N/A          | N/A        |
| 810     | Unknown | N/A          | N/A        |
| 1004    | Unknown | N/A          | N/A        |
| 1204    | Unknown | N/A          | N/A        |
| 1404    | Unknown | N/A          | N/A        |
Growing/Expanding LMCE Software Raid
Here is the method that I used to expand my 3-disc software RAID 5 to a 4-disc software RAID 5. This worked for me; however, any time you mess with a file system, you stand a chance of losing data! Even more so, performing any kind of file system operation on a mounted drive is very dangerous. Please study these steps carefully and understand the risks involved!
1) The first thing I did was to go to the web admin, Advanced->Configuration->Raid. This page gives you a lot of useful information. The information I was most interested in was the "Block Device" for the RAID array that I wanted to grow. Mine was /dev/md0. PLEASE NOTE that anywhere these instructions refer to /dev/md0, you will need to replace it with your own block device (yours, for instance, may be /dev/md1). You will need to know yours for the following steps!
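(Not part of the original steps, but if you want to double-check what the web admin reports, the same information is available from a shell on the core. The md device below is just an example; use whatever mdstat lists on your system.)

cat /proc/mdstat          # lists every active md array and its member disks
mdadm --detail /dev/md0   # detailed view of one array; substitute your own block device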
2) I powered off my core (even though SATA is supposed to support hot-plugging) and installed the new drive.
3) I started the core back up and went back to the web admin, Advanced->Configuration->Raid.
4) Under the "Action" column for the RAID device that you want to grow, click on the "Devices" button.
5) At the bottom of the page, select the new drive from the "Add Drive" dropdown. Make sure the checkbox for "as spare disk" is checked.
6) After a few moments you should see that the disk was added as a spare.
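(As an aside, the same add-as-spare operation can be done from a shell with mdadm if you prefer. This is only a sketch: /dev/sdd is a hypothetical name for the new drive, so substitute whatever device node your system assigned to it.)

mdadm --add /dev/md0 /dev/sdd   # /dev/sdd is hypothetical - use your new drive's device node
cat /proc/mdstat                # the new disk should show up flagged (S) for spare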
7) Now reboot your core. When the GRUB menu comes up, hit Escape and choose recovery mode. It is important to do the following steps while in recovery mode to reduce the chance of data loss!
8) First, let's grow the RAID to 4 disks (in my case). I did this by issuing the command:
mdadm --grow /dev/md0 --raid-disks=4
Again, keep in mind that you may have a different /dev/md#, and of course I used --raid-disks=4 because I was expanding to 4 disks. Use the appropriate number for your situation.
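(An optional sanity check, not in the original steps: before settling in for the long reshape, you can confirm mdadm accepted the new disk count. Again, substitute your own md device.)

mdadm --detail /dev/md0 | grep 'Raid Devices'   # should now report the new count (4 in my case)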
9) Be prepared for a looooong wait! (My wait was about 21 hours with 4 x 1TB drives.) At any time you can see the status of the RAID rebuilding itself by typing:
cat /proc/mdstat
It will give you the percentage done as well as an estimated time until finished.
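(A small convenience, assuming the standard watch utility is available on your core: rather than re-typing the command, you can have it refresh automatically.)

watch -n 60 cat /proc/mdstat   # redisplays the reshape progress every 60 seconds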
10) Once the rebuild process is finished, you should have a 4-drive array! However, the filesystem still needs to be resized! We will deal with that in the following steps.
11) The next step is to run a filesystem check. This is also a good time to unmount the RAID array! This part should only take 1-3 hours...
umount /dev/md0
e2fsck -f /dev/md0
12) Hopefully that will have reported no errors. If you did get errors, you will have to do some research; otherwise, continue to the last step to resize the filesystem.
13) Again, it is important to have the RAID array unmounted! If you have rebooted since the last step, I highly recommend unmounting it again.
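(A quick way to be sure, assuming your array is /dev/md0 as above: check the mount table first, and unmount only if anything shows up.)

mount | grep /dev/md0   # no output means the array is not mounted
umount /dev/md0         # only needed if the line above printed anything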
14) Resize the filesystem to the full size of the array. This shouldn't take too long...
resize2fs /dev/md0
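(To wrap up, a sketch of how you might verify the result; /mnt/raid is a hypothetical mount point used only for this check, and on an LMCE core the array should be remounted for you once you boot back into normal mode.)

mkdir -p /mnt/raid         # /mnt/raid is hypothetical - any empty directory works
mount /dev/md0 /mnt/raid   # mount the resized filesystem
df -h /mnt/raid            # the size column should now reflect all 4 disks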