Steve Fosdick wrote:
Upon choosing "recovery mode" from GRUB I found that the NFS mount was not working because the network interface was not up. It seems that when you use the helpful GNOME-based network configuration tool to set up a connection, that connection does not get started early enough in the boot process to support the NFS mount, which is rather different from what happens when the interface is configured in /etc/network/interfaces.
Yes, it is worth remembering that connections set up by GNOME's NetworkManager don't come up until after gdm starts. This has caught me out for other reasons as well. I think the only place for NetworkManager is on roaming machines such as laptops; otherwise you are far better off with a real config.
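For reference, a minimal static configuration in /etc/network/interfaces (the interface name and addresses below are just examples, substitute your own) would look something like:

```
# /etc/network/interfaces -- example static configuration.
# "eth0" and the addresses are placeholders for your own network.
auto eth0
iface eth0 inet static
    address 192.168.1.10
    netmask 255.255.255.0
    gateway 192.168.1.1
```

Interfaces brought up this way by the networking init script are available before the NFS mounts in /etc/fstab are attempted, which avoids the problem described above.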
At this point I also found that one RAID array was failing to start. After some investigation, this turned out to be caused by a zeroed-out RAID superblock on one of the partitions, which I will discuss in another thread.
I now do not know whether the failing NFS mount was responsible for the hung boot process and the superblock got corrupted by the magic SysRq reboot, or whether the superblock was corrupted by a failing drive and the NFS mount was a red herring, but it was certainly somewhat surprising.
Have you tried running smartmontools against the drive that dropped the superblock?
Try smartctl -a /dev/sda7; there are several guides that will tell you how to correctly interpret the results, but anything involving reallocated sectors, reallocation events, pending sectors or offline-uncorrectable sectors is potentially bad.
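A quick way to pull just the worrying attributes out of the (fairly long) smartctl output is to filter for them. This is a sketch assuming the Linux smartmontools package and typical ATA attribute names; the device name and the sample output below are illustrative, not from the actual drive in question:

```shell
# Dump full SMART data for the drive (run as root):
#   smartctl -a /dev/sda7
# and filter the attribute table for the ones that indicate failing media.
# Demonstrated here against a captured sample of typical output:
sample='  5 Reallocated_Sector_Ct   0x0033   100   100   036    Pre-fail  Always       -       12
196 Reallocated_Event_Count 0x0032   100   100   000    Old_age   Always       -       3
197 Current_Pending_Sector  0x0012   100   100   000    Old_age   Always       -       0
198 Offline_Uncorrectable   0x0010   100   100   000    Old_age   Offline      -       0'

echo "$sample" | grep -E 'Reallocated|Pending|Offline_Uncorrectable'
```

Non-zero raw values (the last column) for Reallocated_Sector_Ct or Current_Pending_Sector are the usual warning signs of a drive on its way out.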
Going one step further, you might want to try smartctl -t long /dev/sda7, then check back after the reported finish time for the test results using -a again. Near the top (or bottom, depending on the drive) there will be a log of tests with a progress percentage and a pass/fail status. Don't worry about the man page saying this is an "offline" test: the drive doesn't actually go offline, although you will notice a distinct drop in performance on the array while it runs.
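The self-test log can also be viewed directly rather than digging through the full -a output. A sketch, again assuming smartmontools; the log text below is a typical sample, not real output from this machine:

```shell
# Start the long self-test (run as root); smartctl prints an estimated
# completion time when you do:
#   smartctl -t long /dev/sda7
# After that time, show just the self-test log section with:
#   smartctl -l selftest /dev/sda7
# A passing entry looks like the sample below, so we can grep for it:
log='Num  Test_Description    Status                  Remaining  LifeTime(hours)  LBA_of_first_error
# 1  Extended offline    Completed without error       00%      8134         -'

echo "$log" | grep 'Completed without error'
```

A status of "Completed without error" with 00% remaining means the test passed; a "read failure" status with an LBA in the last column means the drive really is losing sectors.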