On Mon, Jul 8, 2013 at 1:27 PM, Stefan G. Weichinger <li...@xunil.at> wrote:
> Does it make sense to apply some sort of burn-in-procedure before
> actually formatting and using the disks? Running badblocks or something?
>
> I ask because I wait for that shiny new server and doing so might not
> hurt before installing gentoo. Or is that too paranoid and a waste of time?

Initially I ran the SMART long test and it found no errors. Then I did
a badblocks read-only scan and it found some bad sectors. After that,
SMART tests failed to complete due to "failure reading LBA xxxxxxxxx".
I used hdparm to force the drive to remap those sectors, but I didn't
feel entirely confident in the disk at that point.
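
For reference, the commands involved looked roughly like this (a
sketch only; /dev/sdX and the 1234567 LBA are placeholders for your
actual device and the failing sector number reported by smartctl):

    # SMART long self-test, then check the result:
    smartctl -t long /dev/sdX
    smartctl -l selftest /dev/sdX

    # non-destructive read-only surface scan:
    badblocks -sv /dev/sdX

    # overwrite one failing sector so the drive can remap it
    # (destroys the data in that single sector!):
    hdparm --write-sector 1234567 --yes-i-know-what-i-am-doing /dev/sdX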

So I ran the badblocks destructive read-write test and it completed
(after a couple of days) with zero errors! How can that be?
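
That test was something like the following, assuming /dev/sdX stands
for the disk under test (note that -w wipes everything on it):

    # destructive write-then-verify test, with progress display:
    badblocks -wsv /dev/sdX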

Checking the SMART statistics afterward, I could see dozens of newly
reallocated sectors. So that means the drive silently replaced those
bad sectors with spares, which is good! That is what it is supposed to
do! I don't feel happy that those bad sectors existed in the first
place, but the drive did what it was designed to do when it
encountered them.
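
You can watch those counters yourself; SMART attribute 5
(Reallocated_Sector_Ct) is the interesting one here, along with 197
(Current_Pending_Sector):

    smartctl -A /dev/sdX | grep -E 'Reallocated|Pending'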

After the r/w badblocks test cycle finished, I ran the SMART long scan
again, and this time it completed with no errors.

So I recommend running the destructive read-write badblocks test, if
you can afford the hours (or days) spent waiting for it to complete.

SMART alone did not detect the errors initially, but neither did
badblocks actually identify any errors during its write test (because
the drive hides them). Still, the combination of badblocks and the
self-repairing code in the drive's firmware accomplished the goal of
making my disk (logically) free of errors.

Notes:

WARNING! Be careful to give the correct device name when doing the
badblocks write test! There is no confirmation prompt! It immediately
starts destroying data at the beginning of the disk.
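
A cheap sanity check before you press enter is to confirm the size and
model of the target device (again, /dev/sdX is a placeholder):

    # make sure this really is the new disk and not your system drive:
    lsblk -o NAME,SIZE,MODEL /dev/sdX
    smartctl -i /dev/sdX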

If you have a disk with a 4k sector size, be sure to tell badblocks to
use a 4096-byte block size. It uses a 1k block size by default, which
can make the test very slow! On my system, badblocks with a 1k block
size read at 15MB/sec, while a 4k block size read at over 160MB/sec!
Using a 1k block size on a 4k-sector disk also causes every error to
be reported four times.
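
For example (once more, /dev/sdX is a placeholder):

    # destructive test using a block size matching 4k physical sectors:
    badblocks -b 4096 -wsv /dev/sdX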

Good luck :)
