On 13 Dec 10:41, James Bensley wrote:
On 12 December 2011 11:10, steve-ALUG@hst.me.uk wrote:
Recently there's been an error on 1 of the disks. After a reboot, the raid array resynchronised itself, but I'm taking this as a warning and an excuse to replace the disks.
You could check the health of the disk and run some tests with smartmontools (apt-get install smartmontools), if your disk(s) support S.M.A.R.T. features.
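For anyone following along, a quick sketch of the usual smartctl incantations (the device name is just an example, check yours with lsblk):

```shell
# Install the tools (Debian/Ubuntu):
sudo apt-get install smartmontools

# Quick overall health verdict (prints PASSED or FAILED):
sudo smartctl -H /dev/sda

# Full attribute dump; keep an eye on Reallocated_Sector_Ct
# and Current_Pending_Sector on a disk that is on its way out:
sudo smartctl -a /dev/sda

# Kick off an extended self-test; results show up in -a output later:
sudo smartctl -t long /dev/sda
```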
I have several questions. First of which is: is MDADM still the way to go? I seem to remember reading that you can set up RAID 1 just using ext4. Presumably MDADM is still the best.
What do you mean, just using ext4? That is a file system as far as I know; RAID 1 requires a virtual disk volume that spans two physical disks. A file system like ext4 is then implemented on top of this virtual disk.
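To illustrate the layering, a minimal sketch of building a RAID 1 with mdadm and putting ext4 on top (assuming two blank partitions at /dev/sdb1 and /dev/sdc1):

```shell
# Create the mirror; /dev/md0 is the virtual disk the filesystem sits on:
sudo mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb1 /dev/sdc1

# The filesystem only ever sees /dev/md0, never the physical disks:
sudo mkfs.ext4 /dev/md0
sudo mount /dev/md0 /mnt
```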
Quite correct, ext4 doesn't do mirroring at all... however other filesystems are available that do, e.g. zfs and btrfs (if btrfs was slightly more tested and stable, I'd be happily running everything through it... and if zfs wasn't only supported through a fuse driver, I'd look more at that...).
If you're just doing mirroring, you can also just use LVM directly to do the mirror (at the end of the day, everything still goes through the same kernel path AFAICT, i.e. it all hits device-mapper in the background that does the "hard" work).
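A rough sketch of that approach, with made-up VG/LV names:

```shell
# Put both disks under LVM:
sudo pvcreate /dev/sdb1 /dev/sdc1
sudo vgcreate vg0 /dev/sdb1 /dev/sdc1

# Create a mirrored logical volume; device-mapper does the actual mirroring:
sudo lvcreate --mirrors 1 -L 50G -n data vg0
sudo mkfs.ext4 /dev/vg0/data
```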
Next, my original partitioning scheme was one RAID device with a partition for / (root), and one RAID device with a partition for swap.
I presume that a better layout would be to have separate RAID devices with partitions for / (root), /home and swap.
Well, I think it's a matter of opinion and depends on what you get up to on your machine. Some people also have separate /var and /usr.
I currently have...
1 primary partition mounted as /boot
Then everything else is LVM'd, including my swap partition, so I have: / /usr /var /home
All as separate filesystems. And an extra LVM partition for storing virtual machines on, mounted at: /var/lib/libvirt/images
Which is where virt-manager wants to put virtual machine images.
Should I be using LVM as well, or just shove a partition directly on the raid device?
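Nothing stops you doing both: the RAID device is just another block device, so it can serve as an LVM physical volume. A sketch of the stack (all names are examples):

```shell
# RAID at the bottom, LVM in the middle, filesystems on top:
sudo pvcreate /dev/md0
sudo vgcreate vg0 /dev/md0
sudo lvcreate -L 20G -n root vg0
sudo lvcreate -L 50G -n home vg0
sudo lvcreate -L 2G  -n swap vg0
sudo mkfs.ext4 /dev/vg0/root
sudo mkfs.ext4 /dev/vg0/home
sudo mkswap /dev/vg0/swap
```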
I don't want to do a fresh install, I want to copy the data over from the old disk to the new ones. What's the best way of going about this? I can only have 2 drives connected at a time, but I could put the new disks into a spare computer and transfer files over the network if necessary.
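One way to do the copy, sketched with rsync (the mount point and hostname here are invented): mount the new array somewhere, then copy with the flags that preserve hard links, ACLs and extended attributes:

```shell
# Local copy onto the mounted new array, staying on one filesystem:
sudo rsync -aHAX --numeric-ids --one-file-system / /mnt/newroot/

# Or, with the new disks in the spare machine, over the network:
sudo rsync -aHAX --numeric-ids --one-file-system / root@sparebox:/mnt/newroot/
```

Bear in mind a copied root filesystem still needs its fstab updating (the UUIDs will change) and a bootloader installing on the new disks before it will boot.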
<chop shop>
If you want to grow your available space you can replace one disk, sync it, replace the other, and sync that. Now, with two larger disks in sync, grow the file systems to fill the available space (however I suggest using LVM for this; maybe this is no longer a problem in ext4, but I did this once with ext3 and had some problems, although it was a one-off so I may well have done it wrong).
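The replace-and-grow dance looks roughly like this (hypothetical device names; watch /proc/mdstat between steps):

```shell
# Fail and remove the old disk, physically swap in the bigger one,
# partition it, then re-add it to the array:
sudo mdadm /dev/md0 --fail /dev/sda1 --remove /dev/sda1
sudo mdadm /dev/md0 --add /dev/sda1
cat /proc/mdstat          # wait for the resync to finish

# Repeat for the second disk, then grow the array and the filesystem:
sudo mdadm --grow /dev/md0 --size=max
sudo resize2fs /dev/md0   # ext3/ext4 can grow while mounted
```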
MDADM can do all this: add drives, remove them, rebuild them, force a degraded state, etc. LVM gives more flexibility (IMO) with stuff like growing the FS and moving it around.
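Growing a filesystem under LVM really is a two-liner, e.g. (made-up names):

```shell
# Grow the LV, then the filesystem inside it (online for ext3/ext4):
sudo lvextend -L +10G /dev/vg0/home
sudo resize2fs /dev/vg0/home
```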
I tend to LVM everything as far as possible these days... the filesystems above are actually on my work laptop...
The main reason for doing things like that is that if I wanted to snapshot various bits and pieces, I can take a whole filesystem and move it about using lvm (pvmove is a lovely lovely thing).
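For example (names invented): snapshot a filesystem before something risky, or shuffle it onto a new disk while it stays mounted:

```shell
# Copy-on-write snapshot of /home, e.g. before an upgrade:
sudo lvcreate -s -L 5G -n home_snap /dev/vg0/home

# Migrate everything off one physical disk onto another, live:
sudo pvcreate /dev/sdc1
sudo vgextend vg0 /dev/sdc1
sudo pvmove /dev/sdb1 /dev/sdc1
sudo vgreduce vg0 /dev/sdb1
```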
Just my wassbob.