Linux-Fan <ma_sys...@web.de> writes:

>> On 09/20/2014 04:55 PM, lee wrote:
>
>> Other than that, in my experience Seagate disks may have an unusually
>> high failure rate.
>
> Mine all work here. SMART reports

They'll work until they fail.  I don't put much trust in SMART data.

> The "unreliability" has just happened again and using the edited
> initscript it was really simple to solve. It said "... errors ... Press
> Ctrl-D or give root password ..." and I entered the password, typed
> reboot and it recognized the disks again. Cost: 45 sec per week or so.

You rebuild the RAID within 45 seconds?  And do you realise that RAID
has a reputation for failing beyond recovery, preferably during
rebuilds?
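
Whether a rebuild actually ran would show up in /proc/mdstat; something
like this (md0 is a placeholder, use whatever your array is called):

  # array state plus progress of any running resync/rebuild
  cat /proc/mdstat

  # more detail for one array
  mdadm --detail /dev/md0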

You might be better off without this RAID, making backups to the second
disk with rsync instead.
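
Something along these lines (a minimal sketch; the paths are made up,
adjust them to your layout):

  # copy /home to the second disk, preserving permissions, ownership,
  # hard links, ACLs and xattrs; --delete mirrors removals too, so
  # keep more than one copy if that worries you
  rsync -aHAX --delete /home/ /mnt/backupdisk/home/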

>> What's "a business class computer"?
>
> Any tower a company only offers when you go to the "Business" section on
> their respective website. (It is not really exactly defined -- another
> definition could be: "Any machine which does not have any shiny plastic
> parts" :) )

It doesn't mean anything then.

>> I've come to tend to buy used server class hardware whenever it's
>> suitable, based on the experience that the quality is much better than
>> otherwise, on the assumption that it'll be more reliable and because
>> there isn't any better for the money.  So far, performance is also
>> stunning.  This stuff is really a bargain.
>
> Sounds good. I also considered buying a server as my main system
> (instead of what HP calls a "Workstation") because it seemed to offer
> more HDD slots and the same computing power for a lower price but I was
> never sure how good real server hardware's compatibility with "normal"
> graphics cards is.

For a desktop, just pick a case and components to suit your needs and
put the computer together yourself.

Servers are usually not designed for graphics.  Mine has some integrated
card which is lousy --- and it's sufficient for a server.  I could
probably add some graphics card as long as it's PCIe and low profile (or
fits into the riser card) and doesn't need an extra power supply.
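
If you want to check what a given box already has, lspci shows it:

  # list the graphics adapter(s) the kernel sees
  lspci | grep -i vga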

> Still, I am going to use the disks for now -- I can afford a bit of
> extra maintenance time because I am always interested in getting the
> maximum out of the hardware /available/

Running hardware on the edge of what it is capable of is generally a
recipe for failure.  You may be able to do it with hardware designed for
it, like server class hardware.

That's not what you're doing, though.  You're kinda testing the limits
of USB connections and have found that they fall far short of your
requirements.

> (otherwise I should have gone with hardware RAID from the very
> beginning and I might be using RHEL, because they offer support and my
> system is certified to run a specific RHEL version, etc.).

Hardware RAID has its own advantages and disadvantages, and ZFS might be
a better choice.  Your system being certified for a particular version
of RHEL only helps you as long as that version is sufficiently up to
date --- and AFAIK you'd have to pay for the support.  You might be
better off with CentOS, if you don't mind systemd.
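
With ZFS, mirroring two whole disks is a one-liner (a sketch; the
device names are placeholders, in practice use the stable
/dev/disk/by-id paths):

  # create a mirrored pool named "tank" from two disks
  zpool create tank mirror /dev/sdb /dev/sdc

  # check pool health and any resilver progress
  zpool status tank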

> On the other hand, I have learned my lesson and will not rely on USB
> disks for "permanently attached storage" again /in the future/.

USB isn't even suited for temporarily attached storage.


-- 
Knowledge is volatile and fluid.  Software is power.

