cat /proc/mdstat gives me:
md1 : active raid5 sdi1[4](S) sdh1[2](F) sdg1[1] sdf1[0]
      5860145664 blocks super 1.2 level 5, 512k chunk, algorithm 2 [4/2] [UU__]
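To get a clearer picture than mdstat alone gives, my understanding is that the
array and the member superblocks can be inspected with something like this
(assuming the device names above are still right after moving off the USB
caddy):

  mdadm --detail /dev/md1
  mdadm --examine /dev/sdf1 /dev/sdg1 /dev/sdh1 /dev/sdi1

If I've read the man page correctly, --examine should show the event count and
device role recorded on each disk, which ought to confirm which one md really
considers failed and which it treats as a blank spare.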
The history of this array is that it was a functioning RAID5 array that wasn't
really being used. I pulled one of the disks out temporarily for other uses,
then wiped it and put it back, the idea being that it would rejoin the array
as a "new" disk replacing the "failed" (removed) one, even though it was
actually the same disk.
This is the RAID array that's been causing problems via a USB caddy (I now
have a cheap SATA card that seems to work reliably).
With that disk removed I was able to mount the array (degraded, since it was
only missing the one disk). Now I can't, which makes sense: RAID5 only
tolerates losing a single disk, and 2 out of 4 isn't going to work.
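Since the data doesn't matter, my plan (pieced together from the docs, so a
sketch rather than a tested recipe) is to stop the array, try a forced
assembly from the three disks that still carry the original superblocks, and
then re-add the wiped one afterwards:

  mdadm --stop /dev/md1
  mdadm --assemble --force /dev/md1 /dev/sdf1 /dev/sdg1 /dev/sdh1
  mdadm /dev/md1 --add /dev/sdi1

As I understand it, --force tells mdadm to bring sdh1 back in despite the
failed flag provided its event count isn't too far behind, and adding sdi1
afterwards should kick off a rebuild onto it.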
My reading of mdstat is that sdi1 is currently "spare" (I think that's the
one that came out and went back in), and that sdh1 has failed. Seems like
quite a coincidence.
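Before trusting sdh1 again I was going to check whether the drive itself
reported errors or whether it just dropped off the bus while it was in the
caddy, e.g.:

  dmesg | grep -i sdh
  smartctl -a /dev/sdh

(smartctl is from smartmontools; -a dumps the full SMART report, so
reallocated or pending sectors there would point at a genuine disk failure
rather than a caddy glitch.)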
As I said, the data on the array wasn't important; I'm using this as an
opportunity to learn more about what's going on, because next time the data
may well matter.
Mark