One of our servers did a strange thing today.
At about half past 11 this morning it suddenly remounted / as read-only. I have seen filesystems do this at boot when fsck was unable to repair something automatically, but this happened with an uptime of 80 days. Nothing useful was written to the logs (presumably because they live on the same mount point and so were also read-only).
Since it was read-only anyway, I ran fsck on the ext3 filesystem, and it came back with a large number (hundreds) of errors, some of which needed manual confirmation before they could be fixed.
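For what it's worth, the check was roughly this (the md device name is just a placeholder, not necessarily what the box actually uses):

    # Force a full check of the array device holding /
    # (/dev/md0 is an example name)
    fsck.ext3 -f /dev/md0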
However, / being read-only had made a mess of some of the processes that machine runs, so I gave it a reboot; everything came back as expected, with an empty lost+found.
I ran smartctl on all four disks of the RAID 5 array; no errors have been logged and the disks look healthy. mdadm shows all four disks in normal mode and the array as healthy.
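Those checks were roughly the following (again, device names are illustrative):

    # SMART health summary and error log for each member disk
    for d in /dev/sd[abcd]; do
        smartctl -H -l error "$d"
    done

    # Array state and per-disk status
    mdadm --detail /dev/md0
    cat /proc/mdstat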
I am going to run extended offline SMART tests on the disks over the weekend when the machine is less busy, but in the meantime can somebody confirm that a damaged filesystem remounting itself read-only is normal behaviour, and if so, what detects and triggers this?
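For the weekend, the plan is something like this (placeholder device names again). I'm also going to look at what error behaviour the filesystem is configured with, since I'm guessing that is what triggers the remount, but I'd appreciate confirmation:

    # Start a long (extended) offline self-test on each disk;
    # results can be read back later with: smartctl -l selftest /dev/sdX
    for d in /dev/sd[abcd]; do
        smartctl -t long "$d"
    done

    # Check the configured on-error behaviour (continue / remount-ro / panic)
    # and what options / is mounted with
    tune2fs -l /dev/md0 | grep -i 'errors behavior'
    awk '$2 == "/"' /etc/fstab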
Also, any ideas as to what (apart from faulty RAM, which again I will have to wait until the weekend to test) might have caused such widespread corruption in the first place?
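If it's relevant, the RAM test I have in mind is a memtest86+ boot, or failing that something like this from userspace (the size and pass count are just examples, and it obviously can't test memory the kernel or running processes are holding):

    # Lock and test 1 GiB of RAM for 3 passes without taking the box down
    memtester 1024M 3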