On 09/21/2014 08:41 PM, lee wrote:
> Linux-Fan <ma_sys...@web.de> writes:
>>> On 09/20/2014 04:55 PM, lee wrote:
>>> Other than that, in my experience Seagate disks may have an unusually
>>> high failure rate.
>>
>> Mine all work here. SMART reports
>
> They'll work until they fail. I don't believe in the smart-info.

I do not trust SMART to be a reliable means of failure prevention either
(the only failure I ever had occurred without any SMART warning), but the
"counters", especially for such ordinary things as power-on hours or
power-cycle counts, are reliable as far as I can tell. Also, the drive is
in use and filled with my data, all of which seems to be readable and
correct.
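For reference, those are the counters as reported by smartctl (from the
smartmontools package); a rough example only, with the device name a
placeholder for the actual disk:

    # Attribute table, includes Power_On_Hours and Power_Cycle_Count:
    smartctl -A /dev/sdX

    # Overall health self-assessment:
    smartctl -H /dev/sdX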
>> The "unreliability" has just happened again and using the edited
>> initscript it was really simple to solve. It said "... errors ... Press
>> Ctrl-D or give root password ..." and I entered the password, typed
>> reboot and it recognized the disks again. Cost: 45 sec per week or so.
>
> You rebuild the RAID within 45 seconds? And you realise that RAID has a
> reputation to fail beyond recovery preferably during rebuilds?

No, I did not rebuild, because that is not necessary: the data has not
changed and the RAID had not been assembled (degraded) yet. And the
second statement was the very reason for me to start this thread.

> You might be better off without this RAID and backups to the second disk
> with rsync instead.

Also a good idea, but I had a drive failure /once/ (the only SSD I ever
bought) and although the system was backed up and restored, it still took
five hours to restore it to a correctly working state. The failure itself
was not the problem -- it was just that it was completely unexpected.
Now I try to avoid this "unexpected" by using RAID. Even if it is
unstable, i.e. fails earlier than a better approach which was already
suggested, I will have a drive fail and /be able to take action/ before
(all of) the data is lost.

>> Still, I am going to use the disks for now -- I can afford a bit of
>> extra-maintenance time because I am always interested in getting the
>> maximum out of the hardware /available/
>
> Running hardware on the edge of what it is capable of is generally a
> recipe for failure. You may be able to do it with hardware designed for
> it, like server class hardware.
>
> It's not what you're doing, though. You're kinda testing out the limits
> of USB connections and have found out that they are not up to your
> requirements by far.

The instability lies in a single point: sometimes, upon system startup,
the drive is not recognized. There has not been a single loss of
connection while the system was running. The only problem with that
instability was that it caused the RAID to need a rebuild, because it
came up degraded (one drive was missing). And, as already mentioned,
having to rebuild an array about once a week is a bad thing. Making the
boot fail if the drive has not been recognized solved this issue: I can
reboot manually and the RAID continues to work properly, because it
behaves as if the failed boot had never occurred: both drives are "there"
again and therefore mdadm accepts this as a normally functioning RAID
without a rebuild.
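The check in the initscript is nothing elaborate; roughly along these
lines (a sketch of the idea only -- the device path and array name are
placeholders, not my exact edit, and depending on the script the error
may need to be propagated differently):

    # Refuse to continue if the USB member disk has not shown up yet,
    # instead of letting the array be assembled degraded:
    if [ ! -b /dev/disk/by-id/usb-EXAMPLE_DISK-part1 ]; then
        echo "USB RAID member missing, not assembling /dev/md0" >&2
        exit 1
    fi

After the manual reboot, a quick look at /proc/mdstat confirms that both
members are active ([UU]) again.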
>> (otherwise I should have gone with hardware RAID from the very
>> beginning and I might be using RHEL, because they offer support and my
>> system is certified to run a specific RHEL version, etc.).
>
> Hardware RAID has its own advantages and disadvantages, and ZFS might be
> a better choice. Your system being specified for a particular version
> of RHEL only helps you as long as this particular version is
> sufficiently up to date --- and AFAIK you'd have to pay for the support.
> You might be better off with Centos, if you don't mind systemd.

I do not /want/ to use RHEL (otherwise I would indeed run CentOS); I only
wanted to express that if I did not have any time for system maintenance,
I would pay for the support and be done with all that "OS stuff".
Instead, I now run a system without (commercial/guaranteed) support and
therefore explicitly accept some maintenance of my own, including the
ability/necessity to spend some time on configuring an imperfect setup
which includes USB disks.

>> On the other hand, I have learned my lesson and will not rely on USB
>> disks for "permanently attached storage" again /in the future/.
>
> USB isn't even suited for temporarily attached storage.

If I had to back up a medium amount of data, I would (still) save it to
an external USB HDD -- why is that such a bad idea? Sure, most admins
recommend tapes, but reading/writing tapes on a desktop requires
equipment about as expensive as a new computer. Also, my backup strategy
always includes the simple question: "How would I access my data from any
system?" -- "any system" being thought of as the average Windows machine
without any fancy devices to rely on.
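For that kind of backup, a plain rsync run onto the mounted external disk
does the job; a sketch, with both paths as placeholders:

    # Mirror the home directory onto the external USB disk: -a keeps
    # permissions and timestamps, -H preserves hard links, -x stays on
    # one filesystem, --delete removes files no longer in the source.
    rsync -aHx --delete /home/ /media/usb-backup/home/

(And if the disk carries a filesystem Windows can read, that also answers
the "any system" question above.)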
Linux-Fan

--
http://masysma.lima-city.de/