Chris:
> Thanks for reminding me that the Old_age values don't always
> necessarily start at 100.
Np..

> I apologise if anyone thinks I am being harsh, but I see a lot of hair
> pulling about how drives are going to die in 6 months, with numbers that
> are very hard to interpret (something I am clearly guilty of, because
> there were some mistakes in my comment).

Bah. I myself was rather grumpy and frustrated yesterday, so I should be the
one to apologize. I know this is a very cloudy issue with *many* areas of
potential misinterpretation -- I've posted things I've been corrected about
as well (and even argued the incorrect point ;-).

> It's also interesting to see VALUEs of 001 in ubuntu_demon's comment - I
> find it extremely hard to believe that this is actually true. It's yet
> more evidence of vendor specific SMART behaviour, which puts even more
> doubt on the available data, especially since those posts don't appear
> to be shortly followed by VALUEs of 000 with a FAILING_NOW tag.

I have to agree that a VALUE of 001 looks odd, with the reservation that it
is still quite possible the numbers are correct. This may be a case where the
Load_Cycle_Count raw value is of use: check the values, wait for the click,
check the values again, and see whether the raw value has increased by one
(in which case at least the raw value is probably being reported correctly).
That number can then be compared against the load/unload rating in the spec
sheet.

Of course, the preferred route would be, as you mentioned, to download the
manufacturer's utility and run it.
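For anyone who wants to automate that before/after check, here is a rough
Python sketch (untested, and full of assumptions: smartmontools is installed,
the drive actually reports an attribute named Load_Cycle_Count -- some vendors
use a different name -- and /dev/sda is just a placeholder device; it needs
root to run smartctl):

    #!/usr/bin/env python3
    # Sample the Load_Cycle_Count raw value twice and report the difference.
    import subprocess
    import time

    DEVICE = "/dev/sda"   # adjust to the drive being watched (example only)
    WAIT_SECONDS = 60     # long enough to hear at least one head-park click

    def load_cycle_raw(device):
        # 'smartctl -A' prints the vendor attribute table; the raw value is
        # the last column of the matching row.
        out = subprocess.run(["smartctl", "-A", device],
                             capture_output=True, text=True, check=True).stdout
        for line in out.splitlines():
            fields = line.split()
            if len(fields) >= 10 and fields[1] == "Load_Cycle_Count":
                return int(fields[9])
        raise RuntimeError("no Load_Cycle_Count attribute found on " + device)

    before = load_cycle_raw(DEVICE)
    print("raw value: %d -- now wait for the click..." % before)
    time.sleep(WAIT_SECONDS)
    after = load_cycle_raw(DEVICE)
    print("raw value: %d (increased by %d)" % (after, after - before))

If the raw counter goes up by one per click, it is probably being reported
correctly, and the absolute number can then be compared against the drive's
rated load/unload cycles.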