I want to set up a server using software RAID1 as follows:
- Install a single disk and do the OS install (Ubuntu server 11.04), creating the RAID array (but degraded, as there is only one disk)
- Boot the system and check it all works
- Install the second disk and add it to the array
The install works fine (I create an md partition and an array of 2 devices but only add one to it), and install grub to /dev/sda1 as usual.
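For anyone doing the same from a shell rather than through the installer, the equivalent is roughly the following (the partition name is an assumption; adjust to your layout):

```shell
# Two-device RAID1 with only one member present; the "missing"
# keyword reserves the second slot for the disk added later.
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 missing

# The array comes up degraded, shown as [U_] in /proc/mdstat.
cat /proc/mdstat
```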
However it won't boot. Grub starts and gives me my menu, but gets no further.
Suggestions? Am I doing this all wrong?
(Reason for doing this: migrating an existing RAID1 array using the onboard RAID to an mdraid array; I want to keep one of the original disks to recover some files off once the system is up before wiping it and adding it to the new array. But also I want to confirm that my new array will boot with only one disk otherwise there's not a lot of point to it!)
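When that stage is reached, adding the recovered disk to the degraded array should just be a matter of cloning the partition table and hot-adding the partition. A rough sketch, assuming the second disk appears as /dev/sdb with a single RAID partition:

```shell
# Clone the partition table from the running disk to the new one
# (MBR layout assumed; sgdisk would be the GPT equivalent).
sfdisk -d /dev/sda | sfdisk /dev/sdb

# Add the new partition to the degraded array and start the resync.
mdadm --add /dev/md0 /dev/sdb1

# Watch the rebuild progress.
cat /proc/mdstat

# Install grub on the second disk too, so either disk can boot alone.
grub-install /dev/sdb
```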
On 13/07/11 11:23, Mark Rogers wrote:
However it won't boot. Grub starts and gives me my menu, but gets no further.
I think this is "just" a grub issue.
After reading some forum advice I decided that booting from a single 1TB partition (actually just a bit less) may be an issue, although I think that was a red herring. But anyway, I'm now re-installed with:
/dev/sda1 = 500MB md
/dev/sda2 = (almost) 1TB md
/dev/sda3 = 1GB swap
/dev/md/0 = 500MB ext3 /boot
/dev/md/1 = 1TB ext4 /
The symptoms are the same: no boot past the grub menu. If I try recovery mode it dies after "loading initial ramdisk".
If I enter the grub command line ("c" from the boot menu) and try:
set root=(md/0)
linux /vmlinuz<tab> root=/dev/md/1
initrd /initrd<tab>
boot
.. the server will now start booting but I get:
mounting /dev/md/1 on /root failed: Device or resource busy
...
Target filesystem doesn't have requested /sbin/init
No init found. Try passing init= bootarg
.. and I get dropped at an (initramfs) prompt.
Out of my depth here - suggestions?
On 13 Jul 12:52, Mark Rogers wrote:
On 13/07/11 11:23, Mark Rogers wrote:
However it won't boot. Grub starts and gives me my menu, but gets no further.
I think this is "just" a grub issue.
After reading some forum advice I decided that booting from a single 1TB partition (actually just a bit less) may be an issue, although I think that was a red herring. But anyway, I'm now re-installed with:
/dev/sda1 = 500MB md
/dev/sda2 = (almost) 1TB md
/dev/sda3 = 1GB swap
/dev/md/0 = 500MB ext3 /boot
/dev/md/1 = 1TB ext4 /
The symptoms are the same: no boot past the grub menu. If I try recovery mode it dies after "loading initial ramdisk".
If I enter the grub command line ("c" from the boot menu) and try:
set root=(md/0)
linux /vmlinuz<tab> root=/dev/md/1
Try root=/dev/md1 rather than /dev/md/1 - devfs has been dead for a long time now, and initramfs tends to use udev these days. So unless you're specifically enabling devfs, that's not going to be the device node (I hope!)
initrd /initrd<tab>
boot
.. the server will now start booting but I get:
mounting /dev/md/1 on /root failed: Device or resource busy
...
Target filesystem doesn't have requested /sbin/init
No init found. Try passing init= bootarg
.. and I get dropped at an (initramfs) prompt.
Out of my depth here - suggestions?
-- Mark Rogers // More Solutions Ltd (Peterborough Office) // 0844 251 1450 Registered in England (0456 0902) 21 Drakes Mews, Milton Keynes, MK8 0ER
On 13/07/11 13:35, Brett Parker wrote:
Try root=/dev/md1 rather than /dev/md/1 - devfs has been dead for a long time now, and initramfs tends to use udev these days. So unless you're specifically enabling devfs, that's not going to be the device node (I hope!)
Before you sent this I'd decided to return to a single root partition (ie I only have one RAID array, md0, and /boot is a directory on it, and it's ext4).
So I tried the same method as I tried before but using:
set root=(md/0)
linux /vmlinuz<tab> root=/dev/md/0
.. with no change apart from errors referencing md/0 instead of md/1.
So I just now tried your suggestion of switching to (now) /dev/md0, and now the errors refer to md0 instead of md/0 or md/1 but I'm otherwise no further forward.
I have noticed that when I manually mount /dev/md0 /root from the initramfs prompt, it first tries mounting as ext3, which fails, then ext2, which fails, before trying ext4, which works. Is that relevant?
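That trial-and-error suggests the root filesystem type isn't being detected, so it may be worth confirming what the device actually contains and, if needed, forcing the type at boot. A sketch of both checks (device name taken from above; the grub line is illustrative):

```shell
# From the (initramfs) prompt, confirm the filesystem type;
# this should report TYPE="ext4".
blkid /dev/md0

# If detection is the problem, the type can be forced via the
# kernel command line from the grub prompt, e.g.:
#   linux /vmlinuz root=/dev/md0 rootfstype=ext4
```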
On 13 Jul 14:03, Mark Rogers wrote:
On 13/07/11 13:35, Brett Parker wrote:
Try root=/dev/md1 rather than /dev/md/1 - devfs has been dead for a long time now, and initramfs tends to use udev these days. So unless you're specifically enabling devfs, that's not going to be the device node (I hope!)
Before you sent this I'd decided to return to a single root partition (ie I only have one RAID array, md0, and /boot is a directory on it, and it's ext4).
So I tried the same method as I tried before but using:
set root=(md/0)
linux /vmlinuz<tab> root=/dev/md/0
.. with no change apart from errors referencing md/0 instead of md/1.
So I just now tried your suggestion of switching to (now) /dev/md0, and now the errors refer to md0 instead of md/0 or md/1 but I'm otherwise no further forward.
I have noticed that when I manually mount /dev/md0 /root from the initramfs prompt, it first tries mounting as ext3, which fails, then ext2, which fails, before trying ext4, which works. Is that relevant?
Quite possibly, what's the line in /etc/fstab for it? Is it set to "auto"? Cos if it is, change it to ext4, and see what happens :)
On 13/07/11 14:57, Brett Parker wrote:
Quite possibly, what's the line in /etc/fstab for it? Is it set to "auto"? Cos if it is, change it to ext4, and see what happens :)
This is where I get confused. Well one of the places, anyway :-)
At this stage in the process is fstab relevant? /etc/fstab doesn't exist, although if the mount had worked it would have been at /root/etc/fstab (but if it's needed for the mount to work then surely it would have been in /etc already?).
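(My understanding is that the initramfs mounts the real root before fstab is ever read, and what it relies on instead is the mdadm configuration baked in when the initramfs was built. On Ubuntu, regenerating it after a config change is presumably something like:

```shell
# Record the current arrays in mdadm's config, then rebuild the
# initramfs so it can assemble them at boot.
mdadm --detail --scan >> /etc/mdadm/mdadm.conf
update-initramfs -u
```

.. though I'd welcome correction on that.)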
To cut the process short, I decided to wipe the second disk and put that in and re-install, as much as anything to see whether it was the degraded array causing the problem - which it appears it was; with two disks, md0 has mounted fine as ext4 and booted. So my guess is that the ext4 thing was a(nother) red herring.
If I get a chance I might see whether it'll now boot with just the one disk, although I didn't really have the day spare that I've just spent trying to get this up and running, so I have a bit of catching up to do now!
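The degraded-boot test itself should just be a case of failing and removing one member before rebooting, then re-adding it afterwards. Something like the following (disk names are assumptions):

```shell
# Mark one member as failed and remove it from the array.
mdadm /dev/md0 --fail /dev/sdb1 --remove /dev/sdb1

# ... reboot and confirm the system comes up on one disk ...

# Re-add the member and let it resync.
mdadm /dev/md0 --add /dev/sdb1
cat /proc/mdstat
```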