Hi GNU GRUB Gurus!
I've just spent an unpleasant evening getting my server to boot again.
Ubuntu Server 10.04 LTS, fully patched and up to date, running RAID 1 with two active disks.
A while back I got an error reported on one of the disks, so I bought some bigger ones. I marked one of the old disks as failed and added one of the new ones, then waited for it to sync. Then I marked the other old disk as failed, added the other new one, and waited for it to sync. That left me with four disks in the RAID array: the two new ones active, the two old ones marked as failed but still connected.
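For reference, the swap was done with mdadm along these lines (md0, sda1 and sdc1 are just stand-ins for my actual array and device names):

    # Mark an old disk as failed (I never removed the failed disks,
    # which is why all four are still in the array)
    mdadm --manage /dev/md0 --fail /dev/sda1
    # Add one of the new disks and let the mirror resync onto it
    mdadm --manage /dev/md0 --add /dev/sdc1
    # Watch the resync progress
    cat /proc/mdstat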
I'd been meaning to reboot when I had time to fix any problems that might occur on rebooting. Tonight I forgot that and rebooted the machine after something crashed.
Then the machine wouldn't boot. I got all the BIOS screens, the disks were recognised etc., then an error along the lines of:

    udevd-work: mdadm --incremental /dev/sdd1 unexpected exit with status 0x000b
I managed to boot from a live CD. After hacking around a bit, I discovered I hadn't marked the new disks as bootable, and the partition type was wrong: plain RAID rather than RAID autodetect. I changed that, disconnected the old disks, and rebooted. After the BIOS screens, where I'd hoped to see the GRUB boot menu, I just got a flashing cursor.
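For the record, the partition fixes were done roughly like this from the live CD (/dev/sda again being a stand-in for the real device; fd is the "Linux raid autodetect" type in fdisk):

    # Check the partition types and boot flags
    fdisk -l /dev/sda
    # Inside fdisk: 't' to change the partition type to fd
    # (Linux raid autodetect), 'a' to toggle the bootable flag,
    # 'w' to write the table and exit
    fdisk /dev/sda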
This is the problem I want to solve - how to get it to boot with just the new disks in it.
I shut it down again, reattached the old disks, and rebooted. It now has four disks: the two new ones active, the two old ones as spares (not synced).
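In case it helps, this is how I'm confirming the current state (md0 again standing in for the real array):

    # Overview of all arrays, their active members and spares
    cat /proc/mdstat
    # Detailed state of one array
    mdadm --detail /dev/md0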
Could my problem be that GRUB is not installed on the new disks? If that's the case, how can I check, and how can I fix it? NB I'm using GRUB Legacy (v0.97).
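From reading around, I think the check and the fix would look something like the below, but I'd appreciate confirmation before I run it. The names /dev/sda, /dev/sdb and (hd0,0) are guesses at my layout; my understanding is that with RAID 1 you install GRUB to the MBR of both disks so either one can boot:

    # Check: a GRUB Legacy MBR contains the string "GRUB"
    dd if=/dev/sda bs=512 count=1 2>/dev/null | strings | grep GRUB

    # Fix: install GRUB Legacy to the MBR of both disks
    grub
    grub> find /boot/grub/stage1
    grub> root (hd0,0)
    grub> setup (hd0)
    # Map the second disk as (hd0) so it can boot standalone, then repeat
    grub> device (hd0) /dev/sdb
    grub> root (hd0,0)
    grub> setup (hd0)
    grub> quit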
If that's not my problem, what is, and how do I fix it? Something like what's described here? http://ubuntuforums.org/showthread.php?t=1360445&highlight=grub2+fakeraid
Any help appreciated! Steve