Migrating To RAID1 Mirror on Sarge
Posted by philcore on Thu 8 Sep 2005 at 21:03
A guide to migrating to RAID1 on a working Debian Sarge installation which was installed on a single drive.
I suggest reading the following links: Migrating to a mirrored raid using Grub, GRUB and RAID mini-HOWTO.
My setup:
/dev/sda == original drive with data
/dev/sdb == new 2nd drive
(It is assumed that you have RAID1 enabled in your kernel.)
First of all install md tools:
apt-get install mdadm
Change the partition type to fd (Linux raid autodetect) on every partition you want to mirror on the old drive, using fdisk or sfdisk. Don't change the swap partition! Your finished drive should resemble this output:
[root@firefoot root]# sfdisk -l /dev/sda

Disk /dev/sda: 8942 cylinders, 255 heads, 63 sectors/track
Units = cylinders of 8225280 bytes, blocks of 1024 bytes, counting from 0

   Device Boot Start     End   #cyls    #blocks   Id  System
/dev/sda1   *      0+    242     243-   1951866   fd  Linux raid autodetect
/dev/sda2        243     485     243    1951897+  fd  Linux raid autodetect
/dev/sda3        486     607     122     979965   82  Linux swap / Solaris
/dev/sda4        608    8923    8316   66798270    5  Extended
/dev/sda5        608+   1823    1216-   9767488+  fd  Linux raid autodetect
/dev/sda6       1824+   4255    2432-  19535008+  fd  Linux raid autodetect
/dev/sda7       4256+   4377     122-    979933+  fd  Linux raid autodetect
/dev/sda8       4378+   8923    4546-  36515713+  fd  Linux raid autodetect
Now use sfdisk to duplicate partitions from old drive to new drive:
sfdisk -d /dev/sda | sfdisk /dev/sdb
Now use mdadm to create the raid arrays. We list the first drive's (sda) slot as "missing" so mdadm doesn't wipe out our existing data:
mdadm --create /dev/md0 --level 1 --raid-devices=2 missing /dev/sdb1
Repeat for the remaining raid volumes md1, md2, etc.:
mdadm --create /dev/md1 --level 1 --raid-devices=2 missing /dev/sdb2
Now that the volumes are ready, create filesystems on the raid devices. My example uses ext3, but pick the filesystem of your choice. Again, make sure you have kernel support for your selected filesystem.
mkfs.ext3 /dev/md0
mkfs.ext3 /dev/md1
etc...
Now mount the new raid volumes. I mount them under the /mnt directory:
mount /dev/md0 /mnt
cp -dpRx / /mnt
Now copy the remaining partitions. Be careful to match your md devices with your filesystem layout. This example is for my particular setup.
mount /dev/md1 /mnt/var
cp -dpRx /var /mnt
mount /dev/md2 /mnt/usr
cp -dpRx /usr /mnt
mount /dev/md3 /mnt/home
cp -dpRx /home /mnt
mount /dev/md4 /mnt/tmp
cp -dpRx /tmp /mnt
mount /dev/md5 /mnt/data
cp -dpRx /data /mnt
Format the swap partition on the new drive:
mkswap -v1 /dev/sdb3
Edit /mnt/etc/fstab and change the entries to use the md devices. Also note the pri=1 on both swap partitions; with equal priority the kernel interleaves swap across the two drives, which should increase swap performance.
# /etc/fstab: static file system information.
#
proc       /proc           proc     defaults                    0  0
/dev/md0   /               ext3     defaults,errors=remount-ro  0  1
/dev/md1   /var            ext3     defaults                    0  2
/dev/md2   /usr            ext3     defaults                    0  2
/dev/md3   /home           xfs      defaults                    0  2
/dev/md4   /tmp            ext3     defaults,noexec             0  2
/dev/md5   /data           xfs      defaults                    0  2
/dev/sda3  none            swap     sw,pri=1                    0  0
/dev/sdb3  none            swap     sw,pri=1                    0  0
/dev/hda   /media/cdrom0   iso9660  ro,user,noauto              0  0
/dev/fd0   /media/floppy0  auto     rw,user,noauto              0  0
Now set up the bootloader: edit /mnt/boot/grub/menu.lst and add an entry to boot using raid, plus a recovery entry in case the first drive fails.
title  Custom Kernel 2.6.11.7
root   (hd0,0)
kernel /boot/vmlinuz-2.6.11.7 root=/dev/md0 md=0,/dev/sda1,/dev/sdb1 ro
boot

title  Custom Kernel 2.6.11.7 (RAID Recovery)
root   (hd1,0)
kernel /boot/vmlinuz-2.6.11.7 root=/dev/md0 md=0,/dev/sdb1 ro
boot
Reinstall grub on the first drive, then install it on the second drive as well, so that we can still boot if the first drive fails.
grub-install /dev/sda
grub
grub> device (hd0) /dev/sdb
grub> root (hd0,0)
grub> setup (hd0)
grub> quit
Copy the live GRUB configuration and fstab files to the old drive:
cp -dp /mnt/etc/fstab /etc/fstab
cp -dp /mnt/boot/grub/menu.lst /boot/grub
Now it is time to reboot and test things.
Once the system comes up, you should see the mounted md devices.
[root@firefoot root]# df
Filesystem           1K-blocks      Used Available Use% Mounted on
/dev/md0               1921036    304552   1518900  17% /
tmpfs                   193064         4    193060   1% /dev/shm
/dev/md1               1921100    206768   1616744  12% /var
/dev/md2               9614052   2948620   6177064  33% /usr
/dev/md3              19524672    741140  18783532   4% /home
/dev/md4                964408     16448    898968   2% /tmp
/dev/md5              36497820   6683308  29814512  19% /data
At this point, you have all of your original data on the new drive, so we can safely add the original drive to the raid volume.
mdadm --add /dev/md0 /dev/sda1
mdadm --add /dev/md1 /dev/sda2
... repeat for the remaining partitions.
Check /proc/mdstat for the skinny on what's done and what's not. When everything is done, all the devices should show [UU]. Don't reboot until it has finished synching the drives.
[root@firefoot root]# cat /proc/mdstat
Personalities : [raid1]
md0 : active raid1 sdb1[1] sda1[0]
      1951744 blocks [2/2] [UU]
md1 : active raid1 sdb2[1] sda2[0]
      1951808 blocks [2/2] [UU]
md2 : active raid1 sdb5[1] sda5[0]
      9767424 blocks [2/2] [UU]
md3 : active raid1 sdb6[1] sda6[0]
      19534912 blocks [2/2] [UU]
md4 : active raid1 sdb7[1] sda7[0]
      979840 blocks [2/2] [UU]
md5 : active raid1 sdb8[1] sda8[0]
      36515648 blocks [2/2] [UU]
This article can be found online at the Debian Administration website (along with associated comments).
This article is copyright 2005 philcore - please ask for permission to republish or translate.