On 22 October 2014 23:53, Wayne Stallwood <ALUGlist@digimatic.co.uk> wrote:
Some of the stats look a bit suspect, unless you have your room very cold :) 17C sounds very low.
Yes well... It was cold outside, I had the windows open all day, heating was not on while I was at work, case side panels were removed, there's a 20cm fan blowing air onto the drive and system uptime was about 3 minutes :) I'm pretty sure I've seen it in the mid 20 degrees range before.
If the last two long tests failed then that's an internal firmware test, and if it fails it isn't due to any smartmontools incompatibility. If the test fails the drive is toast, and that result alone is enough to get it RMA'ed if it is still under warranty.
Sadly I don't think it's in warranty. I'll check the manufacture dates on the drive. Also sadly that drive may have come from one of those external WD Elements retail products, and I removed the drive and put in the desktop. I doubt that it would be covered by warranty in that situation.
Also, if any of the following stats are telling the truth:

197 Current_Pending_Sector  0x0032   198   198   000    Old_age   Always       -       355
198 Offline_Uncorrectable   0x0030   199   199   000    Old_age   Offline      -       170
  5 Reallocated_Sector_Ct   0x0033   200   200   140    Pre-fail  Always       -       1
Then the drive is dying. The overall SMART health check has a pretty high threshold; in my experience it usually only trips just before the drive dies completely.
Ah, I had no idea it worked like that. I thought that as soon as one of the values went bad it would report an overall status of "WARNING" or similar, instead of just FAILED or PASSED.
I think the stats above mean something like this:
The offline uncorrectable count means the checksums were shot as well, so those sectors couldn't be recovered and reallocated to the spare area.
So that's 170 sectors of 512 bytes of lost data? I had a peek in lost+found/ using sudo sunglasses, but saw nothing.
Either way you have lost data there, which explains the fsck results.
I think all the fsck errors were about ext4 group blocks. I don't know what the implications are for file loss, though; it never mentioned any inode errors.
My rule is that if the first two stats there show counts higher than 0 then I need to ditch the drive ASAP. I have tolerated a low count on the reallocated sectors before, but only if it isn't moving, and generally I would have a bit less faith in the drive.
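That rule of thumb can be sketched as a quick check over `smartctl -A` output. A minimal sketch, using the numbers quoted earlier in the thread as sample data; the parsing assumes smartmontools' standard ten-column attribute table, with the raw value in the last column:

```python
# Sample rows in the format printed by `smartctl -A`; the numbers are the
# ones quoted for the failing drive above.
SAMPLE = """\
197 Current_Pending_Sector  0x0032   198   198   000    Old_age   Always       -       355
198 Offline_Uncorrectable   0x0030   199   199   000    Old_age   Offline      -       170
  5 Reallocated_Sector_Ct   0x0033   200   200   140    Pre-fail  Always       -       1
"""

def raw_values(smart_table):
    """Map attribute name -> raw value (last column) from a smartctl -A table."""
    vals = {}
    for line in smart_table.splitlines():
        parts = line.split()
        if len(parts) >= 10 and parts[0].isdigit():
            vals[parts[1]] = int(parts[9])
    return vals

def ditch_drive(vals):
    """True if pending or offline-uncorrectable sector counts are non-zero."""
    return (vals.get("Current_Pending_Sector", 0) > 0
            or vals.get("Offline_Uncorrectable", 0) > 0)

print(ditch_drive(raw_values(SAMPLE)))  # prints True
```

A low but stable Reallocated_Sector_Ct deliberately doesn't trip the check, matching the "tolerate it if it isn't moving" caveat.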
Thanks for the advice - I'll keep an eye on this in the future. Do you also have some sort of periodic SMART monitoring set up so that it alerts you if a drive's going bad?
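For what it's worth, smartmontools ships a daemon, smartd, that can do exactly this. A minimal sketch of an /etc/smartd.conf entry - the device name and mail address are placeholders, and the self-test schedule is just an example:

```
# /etc/smartd.conf -- sketch only; adjust device, schedule and address
# -a       monitor all attributes
# -o on    enable the drive's automatic offline testing
# -S on    enable attribute autosave
# -s (...) short self-test daily at 02:00, long test Saturdays at 03:00
# -m       where to mail warnings
/dev/sda -a -o on -S on -s (S/../.././02|L/../../6/03) -m root@localhost
```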
Your load cycle count is also pretty high (though well within the manufacturer's life predictions in the data sheet). This was a problem for the WD Green drives. The load cycle count is the number of times the heads take off from the "landing strip" on the drive surface, and it happens during spin-up. Divided by the number of operating hours, that is too many spin-ups, which can result in the drive wearing out much faster.
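The "divided by operating hours" point can be made concrete with some back-of-envelope arithmetic. The counts below are hypothetical, since the drive's actual Load_Cycle_Count and Power_On_Hours raw values aren't quoted in the thread, and the 300,000-cycle rating is an assumed datasheet figure:

```python
# Hypothetical figures -- substitute the raw values from `smartctl -A`.
RATED_CYCLES = 300_000       # assumed rated load/unload cycles for the drive
load_cycle_count = 120_000   # hypothetical Load_Cycle_Count raw value
power_on_hours = 8_000       # hypothetical Power_On_Hours raw value

cycles_per_hour = load_cycle_count / power_on_hours
hours_left = (RATED_CYCLES - load_cycle_count) / cycles_per_hour

print(f"{cycles_per_hour:.1f} load cycles per operating hour")   # 15.0
print(f"~{hours_left:.0f} operating hours to the rated limit")   # ~12000
```

Anything much above a cycle or two per hour on an always-on drive is the pattern that wore the WD Greens out early.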
Yeah... I'm fairly sure I ran the special tool idle3-tools to lower the frequency of the head parking. I've got a replacement drive, and that's also a WD Green, so I'll make sure I run idle3-tools on it and record the values somewhere safe (including the smartctl output for it).
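For the record, the idle3-tools invocations look roughly like this - a sketch from memory, so check the man page; the device name is a placeholder, and the drive usually needs a power cycle before a new value takes effect:

```
idle3ctl -g /dev/sdX      # read the current idle3 (head-park) timer
idle3ctl -s 129 /dev/sdX  # set the timer (129 is roughly 30 s on WD Greens)
idle3ctl -d /dev/sdX      # or disable head parking entirely
```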
Thanks! Srdjan