Hi Guys & Gals,
I have a server running Ubuntu Lucid 10.04. It has two SATA disks running as RAID level 1 (a simple mirror) using mdadm.
Recently there's been an error on 1 of the disks. After a reboot, the raid array resynchronised itself, but I'm taking this as a warning and an excuse to replace the disks.
I have several questions. The first of which is: is mdadm still the way to go? I seem to remember reading that you can set up RAID 1 using just ext4. Presumably mdadm is still the best option.
Next, my original partitioning scheme was one RAID device with a partition for / (root), and one RAID device with a partition for swap.
I presume that a better layout would be separate RAID devices, each with a partition, for / (root), /home and swap.
Should I be using LVM as well, or just shove a partition directly on the raid device?
I don't want to do a fresh install, I want to copy the data over from the old disk to the new ones. What's the best way of going about this? I can only have 2 drives connected at a time, but I could put the new disks into a spare computer and transfer files over the network if necessary.
I'm hoping that I may be able to get away with this: using the appropriate RAID controls, remove the "faulty" drive from the RAID array, then physically remove it. Then physically fit one new, bigger drive. Create 3 RAID partitions on it and format one as / (root) and one as swap, leaving the other one empty for a bit. Using the appropriate RAID controls, add the new devices and let them sync. Remove the remaining old drive. Fit and partition the second new drive, then add it to the RAID. Let it sync.
Once that's all happened, boot from a live CD, mount the devices, format the spare partition for /home, and copy (move) the files from the home directory onto the new home partition.
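The sort of mdadm commands I'm expecting this to involve are something like the following (untested, and /dev/md0, /dev/sda1 and /dev/sdb1 are just assumed names for illustration):

    # mark the failing disk's partition as faulty and pull it out of the array
    mdadm --manage /dev/md0 --fail /dev/sdb1
    mdadm --manage /dev/md0 --remove /dev/sdb1

    # after fitting and partitioning the new drive, add its partition back in
    mdadm --manage /dev/md0 --add /dev/sdb1

    # keep an eye on the resync
    cat /proc/mdstat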
Would this work? Or is there a better way?
Any comments appreciated. Steve
On 12 December 2011 11:10, steve-ALUG@hst.me.uk wrote:
Recently there's been an error on 1 of the disks. After a reboot, the raid array resynchronised itself, but I'm taking this as a warning and an excuse to replace the disks.
You could check the health of the disk and run some tests with smartmontools (apt-get install smartmontools), if your disk(s) support S.M.A.R.T. features.
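For example, something along these lines (the device name is just an example, substitute your own):

    # print the SMART health summary and attribute table
    smartctl -a /dev/sda

    # run a short self-test, then check the results a few minutes later
    smartctl -t short /dev/sda
    smartctl -l selftest /dev/sda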
I have several questions. The first of which is: is mdadm still the way to go? I seem to remember reading that you can set up RAID 1 using just ext4. Presumably mdadm is still the best option.
What do you mean, just using ext4? That is a file system as far as I know; a RAID 1 setup requires a virtual disk volume that spans two physical disks. A file system like ext4 is then implemented on top of this virtual disk.
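In other words, with mdadm you build the mirror first and then put the file system on top of it, roughly like this (device names assumed for illustration):

    # create a RAID 1 mirror from two partitions
    mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1

    # then put ext4 on the resulting virtual device
    mkfs.ext4 /dev/md0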
Next, my original partitioning scheme was one RAID device with a partition for / (root), and one RAID device with a partition for swap.
I presume that a better layout would be separate RAID devices, each with a partition, for / (root), /home and swap.
Well, I think it's a matter of opinion and depends on what you get up to on your machine. Some people also have separate /var and /usr.
Should I be using LVM as well, or just shove a partition directly on the raid device?
I don't want to do a fresh install, I want to copy the data over from the old disk to the new ones. What's the best way of going about this? I can only have 2 drives connected at a time, but I could put the new disks into a spare computer and transfer files over the network if necessary.
<chop shop>
If you want to grow your available space you can replace one disk, sync it, replace the other, and sync that. Now, with two larger disks in sync, grow the file systems to fill out the available space (I suggest using LVM for this; maybe this is no longer a problem in ext4, but I did this once with ext3 and had some problems, although it was a one-off so I may well have done it wrong).
mdadm can do all this: add drives, remove them, rebuild them, force a degraded state, etc. LVM gives more flexibility (IMO) with stuff like growing the FS and moving it around.
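Roughly speaking, once both bigger disks are in and synced, growing the space looks like this (untested here, device name assumed):

    # let the array use all the space on the new, larger partitions
    mdadm --grow /dev/md0 --size=max

    # then grow the ext4 file system to fill the enlarged array
    resize2fs /dev/md0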
That's my two pence.
On 13 Dec 10:41, James Bensley wrote:
On 12 December 2011 11:10, steve-ALUG@hst.me.uk wrote:
Recently there's been an error on 1 of the disks. After a reboot, the raid array resynchronised itself, but I'm taking this as a warning and an excuse to replace the disks.
You could check the health of the disk and run some tests with smartmontools (apt-get install smartmontools), if your disk(s) support S.M.A.R.T. features.
I have several questions. The first of which is: is mdadm still the way to go? I seem to remember reading that you can set up RAID 1 using just ext4. Presumably mdadm is still the best option.
What do you mean, just using ext4? That is a file system as far as I know; a RAID 1 setup requires a virtual disk volume that spans two physical disks. A file system like ext4 is then implemented on top of this virtual disk.
Quite correct, ext4 doesn't do mirroring at all... however other filesystems are available that do, e.g. ZFS and btrfs (if btrfs were slightly more tested and stable, I'd happily be running everything through it... and if ZFS weren't only supported through a FUSE driver, I'd look more at that...).
If you're just doing mirroring, you can also just use LVM directly to do the mirror (at the end of the day, everything still goes through the same kernel path AFAICT, i.e. it all hits device-mapper in the background, which does the "hard" work).
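Something like this, for instance (volume group name and sizes made up for illustration, not tested):

    # two disks as physical volumes in one volume group
    pvcreate /dev/sda1 /dev/sdb1
    vgcreate vg0 /dev/sda1 /dev/sdb1

    # a mirrored logical volume across the two PVs
    # (--mirrorlog core keeps the mirror log in memory, so no third device is needed)
    lvcreate --mirrors 1 --mirrorlog core --size 100G --name root vg0

    mkfs.ext4 /dev/vg0/root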
Next, my original partitioning scheme was one RAID device with a partition for / (root), and one RAID device with a partition for swap.
I presume that a better layout would be separate RAID devices, each with a partition, for / (root), /home and swap.
Well, I think it's a matter of opinion and depends on what you get up to on your machine. Some people also have separate /var and /usr.
I currently have...
1 primary partition mounted as /boot
Then everything else is LVM'd, including my swap partition, so I have: / /usr /var /home
All as separate filesystems. And an extra LVM partition for storing virtual machines on, mounted at: /var/lib/libvirt/images
Which is where virt-manager wants to put virtual machine images.
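Setting up that sort of layout is just a handful of lvcreate calls once the volume group exists, e.g. (volume group name and sizes made up for illustration):

    lvcreate -L 10G  -n root vg0
    lvcreate -L 10G  -n usr  vg0
    lvcreate -L 20G  -n var  vg0
    lvcreate -L 50G  -n home vg0
    lvcreate -L 4G   -n swap vg0
    lvcreate -L 100G -n vmimages vg0   # mounted at /var/lib/libvirt/images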
Should I be using LVM as well, or just shove a partition directly on the raid device?
I don't want to do a fresh install, I want to copy the data over from the old disk to the new ones. What's the best way of going about this? I can only have 2 drives connected at a time, but I could put the new disks into a spare computer and transfer files over the network if necessary.
<chop shop>
If you want to grow your available space you can replace one disk, sync it, replace the other, and sync that. Now, with two larger disks in sync, grow the file systems to fill out the available space (I suggest using LVM for this; maybe this is no longer a problem in ext4, but I did this once with ext3 and had some problems, although it was a one-off so I may well have done it wrong).
mdadm can do all this: add drives, remove them, rebuild them, force a degraded state, etc. LVM gives more flexibility (IMO) with stuff like growing the FS and moving it around.
I tend to LVM everything as far as possible these days... the filesystems above are actually on my work laptop...
The main reason for doing things like that is that I can snapshot various bits and pieces, and take a whole filesystem and move it about using LVM (pvmove is a lovely, lovely thing).
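For example (names made up for illustration):

    # take a snapshot of a logical volume before fiddling with it
    lvcreate --snapshot --size 5G --name home_snap /dev/vg0/home

    # move everything off one physical volume onto another, while it's in use
    pvmove /dev/sda2 /dev/sdb2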
Just my wassbob.
On 13/12/11 11:22, Brett Parker wrote: [Big Snip]
I tend to LVM everything as far as possible these days... the filesystems above are actually on my work laptop...
The main reason for doing things like that is that I can snapshot various bits and pieces, and take a whole filesystem and move it about using LVM (pvmove is a lovely, lovely thing).
Just my wassbob.
Thanks for your input, guys. I dunno where I got the idea about ext4 being able to do RAID directly. Googling shows nothing.
Is there an easy way of installing LVM onto my revised setup, or does it involve creating new partitions, new LVM groups etc, then using some sort of attribute-preserving copy command? Or does it involve a fresh install (which I don't want to do)?
If it involves an attribute-preserving copy command, any pointers as to what it might be?
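The kind of thing I have in mind is something like this, though it's just a guess on my part and not tested:

    # copy everything across, preserving permissions, ownership, times,
    # ACLs, extended attributes and hard links
    rsync -aAXH /old/home/ /new/home/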
Cheers Steve