On 14 October 2013 13:04, mick mbm@rlogin.net wrote:
On Mon, 14 Oct 2013 09:16:17 +0100 Mark Rogers mark@quarella.co.uk allegedly wrote:
My interpretation of the 300,000 lifetime maximum was that it is an "expected" maximum, i.e. the level a disk would be predicted to reach in normal usage over its lifetime. A disk designed to spin down more often should be expected to accumulate a higher load cycle count in normal use than a "normal" disk. The design maximum seems to be 1,000,000 cycles, so anything up to that shouldn't really give any cause for concern (in my reading of this).
But with a NAS that is always on, a load cycle count of nearly 390,000 in 6 months points to a disk lifetime of around 16 months against that 1,000,000-cycle limit. I was sort of hoping for about 3 years (which is about what I expect of a modern disk).
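(For anyone following the arithmetic: that projection is just the observed cycle rate extrapolated to the 1,000,000-cycle design maximum. A quick sketch in Python, using the figures from this thread; the function name is mine:

    def projected_lifetime_months(cycles_so_far, months_elapsed,
                                  design_max=1_000_000):
        """Extrapolate when the load cycle count hits the design maximum."""
        rate_per_month = cycles_so_far / months_elapsed
        return design_max / rate_per_month

    # ~390,000 cycles in 6 months => ~15.4 months, i.e. around 16
    print(projected_lifetime_months(390_000, 6))
)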
Indeed, this isn't good. However, now that you have changed the idle timer, the count should stop climbing so quickly, meaning the life of the disk "should" be fine in its current usage. In other words, as long as you make the change, my personal opinion is that you caught it early enough not to need to replace the disk, although if budget is no issue then replacing one makes sense.
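If it helps, a rough way to confirm the idle timer change has worked is to sample the SMART Load_Cycle_Count periodically and watch the growth rate. A sketch, assuming smartmontools is installed, the drive reports the attribute under that name, and you run it as root; the device path and function name are illustrative:

    import subprocess

    def load_cycle_count(device="/dev/sda"):
        """Read Load_Cycle_Count from smartctl's attribute table."""
        out = subprocess.run(["smartctl", "-A", device],
                             capture_output=True, text=True,
                             check=True).stdout
        for line in out.splitlines():
            if "Load_Cycle_Count" in line:
                # RAW_VALUE is the last column of the attribute row.
                return int(line.split()[-1])
        raise RuntimeError("Load_Cycle_Count not reported for " + device)

    print(load_cycle_count())

Run it once a day for a week or so; if the count is barely moving, the timer change has taken effect.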
I am likely to standardise on RAID1 going forward because it makes it easier to routinely replace one disk periodically (e.g. every 18 months if you assume a 3-year lifespan, as you have suggested). You can still do that with RAID5 (e.g. replace one of my 4 disks every 9 months) but RAID1 is just so much simpler.
Mark